`windows` Windows Graphics Devices
-----------------------------------
### Description
Available only on Windows. A graphics device is opened. For `windows`, `win.graph`, `x11` and `X11` this is a window on the current Windows display: the multiple names are for compatibility with other systems. `win.metafile` prints to a file and `win.print` to the Windows print system.
### Usage
```
windows(width, height, pointsize, record, rescale, xpinch, ypinch,
bg, canvas, gamma, xpos, ypos, buffered, title,
restoreConsole, clickToConfirm, fillOddEven,
family, antialias)
win.graph(width, height, pointsize)
win.metafile(filename = "", width = 7, height = 7, pointsize = 12,
family, restoreConsole = TRUE)
win.print(width = 7, height = 7, pointsize = 12, printer = "",
family, antialias, restoreConsole = TRUE)
```
### Arguments
| | |
| --- | --- |
| `width, height` | the (nominal) width and height of the canvas of the plotting window in inches. Default `7`. |
| `pointsize` | the default pointsize of plotted text, interpreted as big points (1/72 inch). Values are rounded to the nearest integer: values less than or equal to zero are reset to `12`, the default. |
| `record` | logical: sets the initial state of the flag for recording plots. Default `FALSE`. |
| `rescale` | character, one of `c("R", "fit", "fixed")`. Controls the action for resizing of the device. Default `"R"`. See the ‘Resizing options’ section. |
| `xpinch, ypinch` | double. Pixels per inch, horizontally and vertically. Default `NA_real_`, which means to take the value from Windows. |
| `bg` | color. The initial background color. Default `"transparent"`. |
| `canvas` | color. The color of the canvas which is visible when the background color is transparent. Should be a solid color (and any alpha value will be ignored). Default `"white"`. |
| `gamma` | gamma correction fudge factor. Colours in R are sRGB; if your monitor does not conform to sRGB, you might be able to improve things by tweaking this parameter to apply additional gamma correction to the RGB channels. By default 1 (no additional gamma correction). |
| `xpos, ypos` | integer. Position of the top left of the window, in pixels. Negative values are taken from the opposite edge of the monitor. Missing values (the default) mean take the default from the ‘[Rconsole](../../utils/html/rconsole)’ file, which in turn defaults to `xpos = -25, ypos = 0`: this puts the right edge of the window 25 pixels from the right edge of the monitor. |
| `buffered` | logical. Should the screen output be double-buffered? Default `TRUE`. |
| `title` | character string, up to 100 bytes. With the default `""`, a suitable title is created internally. A C-style format for an integer will be substituted by the device number. |
| `filename` | the name of the output file: it will be an enhanced Windows metafile, usually given extension ‘.emf’ or ‘.wmf’. Up to 511 characters are allowed. The page number is substituted if an integer format is included in the character string (see `<postscript>` for further details) and tilde-expansion (see `[path.expand](../../base/html/path.expand)`) is performed. (The result must be less than 600 characters long.) The default, `""`, means the clipboard. |
| `printer` | The name of a printer as known to Windows. The default causes a dialog box to come up for the user to choose a printer. |
| `restoreConsole` | logical: see the ‘Details’ below. Defaults to `FALSE` for screen devices. |
| `clickToConfirm` | logical: if true confirmation of a new frame will be by clicking on the device rather than answering a problem in the console. Default `TRUE`. |
| `fillOddEven` | logical controlling the polygon fill mode: see `[polygon](../../graphics/html/polygon)` for details. Default `TRUE`. |
| `family` | A length-one character vector specifying the default font family. See section ‘Fonts’. |
| `antialias` | A length-one character vector, requesting control over font antialiasing. This is partially matched to `"default"`, `"none"`, `"cleartype"` or `"gray"`. See the ‘Fonts’ section. |
### Details
All these devices are implemented as variants of the same device.
All arguments of `windows` have defaults set by `<windows.options>`: the defaults given in the arguments section are the defaults for the defaults. These defaults also apply to the internal values of `gamma`, `xpinch`, `ypinch`, `buffered`, `restoreConsole` and `antialias` for `win.graph`, `x11` and `X11`.
The size of a window is computed from information provided about the display: it depends on the system being configured accurately. By default a screen device asks Windows for the number of pixels per inch. This can be overridden (it is often wrong) by specifying `xpinch` and `ypinch`, most conveniently *via* `<windows.options>`. For example, a 13.3 inch 1280x800 screen (a typical laptop display) was reported as 96 dpi even though it is physically about 114 dpi.
The different colours need to be distinguished carefully. Areas outside the device region are coloured in the Windows application background colour. The device region is coloured in the canvas colour. This is over-painted by the background colour of a plot when a new page is called for, but that background colour can be transparent (and is by default). One difference between setting the canvas colour and the background colour is that when a plot is saved the background colour is copied but the canvas colour is not. The argument `bg` sets the initial value of `[par](../../graphics/html/par)("bg")` in base graphics and `[gpar](../../grid/html/gpar)("fill")` in grid graphics.
Recorded plot histories are of class `"SavedPlots"`. They have a `print` method, and a subset method. As the individual plots are of class `"recordedplot"` they can be replayed by printing them: see `[recordPlot](recordplot)`. The active plot history is stored in variable `.SavedPlots` in the workspace.
When a screen device is double-buffered (the default) the screen is updated 100ms after the last plotting call or every 500ms during continuous plotting. These times can be altered by setting `options("windowsTimeout")` to a vector of two integers before opening the device.
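The timeout can be set from any platform (it is only consulted when a Windows screen device is opened); a minimal sketch, where the specific values `500` and `1000` are illustrative choices, not defaults:

```r
## Sketch: relax the double-buffer refresh to 500 ms after the last
## plotting call and 1000 ms during continuous plotting.  This must
## be set before the windows() device is opened to take effect.
options(windowsTimeout = c(500, 1000))
getOption("windowsTimeout")
```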
Line widths as controlled by `par(lwd =)` are in multiples of 1/96 inch. Multiples less than 1 are allowed, down to one pixel width.
For `win.metafile` only one plot is allowed per file, and Windows seems to disallow reusing the file. So the *only* way to allow multiple plots is to use a parametrized `filename` as in the example. If the `filename` is omitted (or specified as `""`), the output is copied to the clipboard when the device is closed.
The `restoreConsole` argument is a temporary fix for a problem in the current implementation of several Windows graphics devices, and is likely to be removed in an upcoming release. If set to `FALSE`, the console will not receive the focus after the new device is opened.
There is support for semi-transparent colours of lines, fills and text on the screen devices. These work for saving (from the ‘File’ menu) to PDF, PNG, BMP, JPEG and TIFF, but will be ignored if saving to Metafile and PostScript. Limitations in the underlying Windows API mean that a semi-transparent object must be contained strictly within the device region (allowing for line widths and joins).
### Value
A plot device is opened: nothing is returned to the **R** interpreter.
### Resizing options
If a screen device is re-sized, the default behaviour (`"R"`) is to redraw the plot(s) as if the new size had been specified originally. Using `"fit"` will rescale the existing plot(s) to fit the new device region, preserving the aspect ratio. Using `"fixed"` will leave the plot size unchanged, adding scrollbars if part of the plot is obscured.
A graphics window will never be created at more than 85% of the screen width or height, but can be resized to a larger size. For the first two `rescale` options the width and height are rescaled proportionally if necessary, and if `rescale = "fit"` the plot(s) are rescaled accordingly. If `rescale = "fixed"` the initially displayed portion is selected within these constraints, separately for width and height. In MDI mode, the limit is 85% of the MDI client region.
Using `[strwidth](../../graphics/html/strwidth)` or `[strheight](../../graphics/html/strwidth)` after a window has been rescaled (when using `"fit"`) gives dimensions in the original units, but only approximately as they are derived from the metrics of the rescaled fonts (which are in integer sizes).
The displayed region may be bigger than the ‘paper’ size, and area(s) outside the ‘paper’ are coloured in the Windows application background colour. Graphics parameters such as `"din"` refer to the scaled plot if rescaling is in effect.
### Fonts
The fonts used for text drawn in a Windows device may be controlled in two ways. The file `R_HOME\etc\[Rdevga](../../utils/html/rconsole)` can be used to specify mappings for `par(font =)` (or the grid equivalent). Alternatively, a font family can be specified by a non-empty `family` argument (or by e.g. `par(family =)` in the graphics package) and this will be used for fonts 1:4 via the Windows font database (see `[windowsFonts](windowsfonts)`).
How the fonts look depends on the antialiasing settings, both through the `antialias` argument and the machine settings. These are hints to Windows GDI that may not be able to be followed, but `antialias = "none"` should ensure that no antialiasing is used. For a screen device the default depends on the machine settings: it will be `"cleartype"` if that has been enabled. Note that greyscale antialiasing is used only for small fonts (below about 9 pixels, around 7 points on a typical display).
When accessing a system through Remote Desktop, both the Remote Desktop settings *and* the user's local account settings are relevant to whether antialiasing is used.
Some fonts are intended only to be used with ClearType antialiasing, for example the `Meiryo` Japanese font.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is 7 inches square, although this is often incorrectly implemented by Windows: see ‘Details’.
* Font sizes are in big points.
* The default font family is Arial.
* Line widths are as a multiple of 1/96 inch, with a minimum of one pixel.
* The minimum radius of a circle is 1 pixel.
* `pch = "."` with `cex = 1` corresponds to a rectangle of sides the larger of one pixel and 0.01 inch.
* Colours are interpreted via the unprofiled colour mapping of the graphics card – this is *assumed* to conform to sRGB.
### Note
`x11()`, `X11()` and `win.graph()` are simple wrappers calling `windows()`, and mainly exist for compatibility reasons.
Further, `<x11>()` and `X11()` have their own help page for Unix-alikes (where they also have more arguments).
### See Also
`[windowsFonts](windowsfonts)`, `[savePlot](saveplot)`, `[bringToTop](bringtotop)`, `[Devices](devices)`, `<postscript>`, `<x11>` for Unix-alikes.
### Examples
```
## Not run: ## A series of plots written to a sequence of metafiles
if(.Platform$OS.type == "windows")
win.metafile("Rplot%02d.wmf", pointsize = 10)
## End(Not run)
```
`quartzFonts` Quartz Fonts Setup
---------------------------------
### Description
These functions handle the translation of a device-independent **R** graphics font family name to a `<quartz>` font description.
They are only available on Unix-alikes, i.e., not on Windows, and typically used on the Mac.
### Usage
```
quartzFont(family)
quartzFonts(...)
```
### Arguments
| | |
| --- | --- |
| `family` | a character vector containing the four PostScript font names for plain, bold, italic, and bolditalic versions of a font family. |
| `...` | either character strings naming mappings to display, or new (named) mappings to define. |
### Details
A quartz device is created with a default font (see the documentation for `quartz`), but it is also possible to specify a font family when drawing to the device (for example, see the documentation for `[gpar](../../grid/html/gpar)` in the grid package).
The font family sent to the device is a simple string name, which must be mapped to something more specific to quartz fonts. A list of mappings is maintained and can be modified by the user.
The `quartzFonts` function can be used to list existing mappings and to define new mappings. The `quartzFont` function can be used to create a new mapping.
Default mappings are provided for three device-independent font family names: `"sans"` for a sans-serif font, `"serif"` for a serif font and `"mono"` for a monospaced font.
### See Also
`<quartz>` for the default Mac graphics device.
### Examples
```
if(.Platform$OS.type == "unix") { # includes Mac
utils::str( quartzFonts() ) # a list(serif = .., sans = .., mono = ..)
quartzFonts("mono") # the list(mono = ..) sublist of quartzFonts()
## Not run:
## for East Asian locales you can use something like
quartzFonts(sans = quartzFont(rep("AppleGothic", 4)),
serif = quartzFont(rep("AppleMyungjo", 4)))
## since the default fonts may well not have the glyphs needed
## End(Not run)
}
```
`Hershey` Hershey Vector Fonts in R
------------------------------------
### Description
If the `family` graphical parameter (see `[par](../../graphics/html/par)`) has been set to one of the Hershey fonts (see ‘Details’) Hershey vector fonts are used to render text.
When using the `[text](../../graphics/html/text)` and `[contour](../../graphics/html/contour)` functions Hershey fonts may be selected via the `vfont` argument, which is a character vector of length 2 (see ‘Details’ for valid values). This allows Cyrillic to be selected, which is not available via the font families.
### Usage
```
Hershey
```
### Details
The Hershey fonts have two advantages:
1. vector fonts describe each character in terms of a set of points; **R** renders the character by joining up the points with straight lines. This intimate knowledge of the outline of each character means that **R** can arbitrarily transform the characters, which can mean that the vector fonts look better for rotated text.
2. this implementation was adapted from the GNU libplot library which provides support for non-ASCII and non-English fonts. This means that it is possible, for example, to produce weird plotting symbols and Japanese characters.
Drawback:
You cannot use mathematical expressions (`<plotmath>`) with Hershey fonts.
The Hershey characters are organised into a set of fonts. A particular font is selected by specifying one of the following font families via `par(family)` and specifying the desired font face (plain, bold, italic, bold-italic) via `par(font)`.
| | |
| --- | --- |
| family | faces available |
| `"HersheySerif"` | plain, bold, italic, bold-italic |
| `"HersheySans"` | plain, bold, italic, bold-italic |
| `"HersheyScript"` | plain, bold |
| `"HersheyGothicEnglish"` | plain |
| `"HersheyGothicGerman"` | plain |
| `"HersheyGothicItalian"` | plain |
| `"HersheySymbol"` | plain, bold, italic, bold-italic |
| `"HersheySansSymbol"` | plain, italic |
| |
In the `vfont` specification for the `text` and `contour` functions, the Hershey font is specified by a typeface (e.g., `serif` or `sans serif`) and a fontindex or ‘style’ (e.g., `plain` or `italic`). The first element of `vfont` specifies the typeface and the second element specifies the fontindex. The first table produced by `demo(Hershey)` shows the character `a` produced by each of the different fonts.
The available `typeface` and `fontindex` values are available as list components of the variable `Hershey`. The allowed pairs for `(typeface, fontindex)` are:
| | |
| --- | --- |
| serif | plain |
| serif | italic |
| serif | bold |
| serif | bold italic |
| serif | cyrillic |
| serif | oblique cyrillic |
| serif | EUC |
| sans serif | plain |
| sans serif | italic |
| sans serif | bold |
| sans serif | bold italic |
| script | plain |
| script | italic |
| script | bold |
| gothic english | plain |
| gothic german | plain |
| gothic italian | plain |
| serif symbol | plain |
| serif symbol | italic |
| serif symbol | bold |
| serif symbol | bold italic |
| sans serif symbol | plain |
| sans serif symbol | italic |
| |
and the indices of these are available as `Hershey$allowed`.
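The `Hershey` object in grDevices can be inspected directly; a small sketch (component layouts may vary slightly across **R** versions):

```r
## Hershey lists the valid typefaces and fontindex styles, plus the
## allowed (typeface, fontindex) index pairs shown in the table above.
names(grDevices::Hershey)           # includes "typeface", "fontindex", "allowed"
grDevices::Hershey$typeface[1:3]    # first few typeface names
head(grDevices::Hershey$allowed)    # index pairs into typeface/fontindex
```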
Escape sequences:
The string to be drawn can include escape sequences, which all begin with a \. When **R** encounters a \, rather than drawing the \, it treats the subsequent character(s) as a coded description of what to draw.
One useful escape sequence (in the current context) is of the form: \123. The three digits following the \ specify an octal code for a character. For example, the octal code for `p` is 160 so the strings `"p"` and `"\160"` are equivalent. This is useful for producing characters when there is not an appropriate key on your keyboard.
The other useful escape sequences all begin with \\. These are described below. Remember that backslashes have to be doubled in **R** character strings, so they need to be entered with *four* backslashes.
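Both points can be checked at the **R** prompt; a small sketch:

```r
## Octal escapes: "\160" is octal 160, the code for "p".
identical("\160", "p")   # TRUE

## Escape sequences beginning with \\ must be typed with *four*
## backslashes in R source: "\\\\*a" is the 4-character string \\*a
## (the Greek alpha escape), since each pair collapses to one backslash.
nchar("\\\\*a")          # 4
```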
Symbols:
an entire string of Greek symbols can be produced by selecting the HersheySymbol or HersheySansSymbol family or the Serif Symbol or Sans Serif Symbol typeface. To allow Greek symbols to be embedded in a string which uses a non-symbol typeface, there are a set of symbol escape sequences of the form \\ab. For example, the escape sequence \\\*a produces a Greek alpha. The second table in `demo(Hershey)` shows all of the symbol escape sequences and the symbols that they produce.
ISO Latin-1:
further escape sequences of the form \\ab are provided for producing ISO Latin-1 characters. Another option is to use the appropriate octal code. The (non-ASCII) ISO Latin-1 characters are in the range 241...377. For example, \366 produces the character o with an umlaut. The third table in `demo(Hershey)` shows all of the ISO Latin-1 escape sequences.
These characters can be used directly. (Characters not in Latin-1 are replaced by a dot.)
Several characters are missing, c-cedilla has no cedilla and ‘sharp s’ (U+00DF, also known as ‘esszett’) is rendered as `ss`.
Special Characters:
a set of characters are provided which do not fall into any standard font. These can only be accessed by escape sequence. For example, \\LI produces the zodiac sign for Libra, and \\JU produces the astronomical sign for Jupiter. The fourth table in `demo(Hershey)` shows all of the special character escape sequences.
Cyrillic Characters:
cyrillic characters are implemented according to the KOI8-R encoding, and can be used directly in such a locale using the Serif typeface and Cyrillic (or Oblique Cyrillic) fontindex. Alternatively they can be specified via an octal code in the range 300 to 337 for lower case characters or 340 to 377 for upper case characters. The fifth table in `demo(Hershey)` shows the octal codes for the available Cyrillic characters.
Cyrillic has to be selected via a `("serif", fontindex)` pair rather than via a font family.
Japanese Characters:
83 Hiragana, 86 Katakana, and 603 Kanji characters are implemented according to the EUC-JP (Extended Unix Code) encoding. Each character is identified by a unique hexadecimal code. The Hiragana characters are in the range 0x2421 to 0x2473, Katakana are in the range 0x2521 to 0x2576, and Kanji are (scattered about) in the range 0x3021 to 0x6d55.
When using the Serif typeface and EUC fontindex, these characters can be produced by a *pair* of octal codes. Given the hexadecimal code (e.g., 0x2421), take the first two digits and add 0x80 and do the same to the second two digits (e.g., 0x21 and 0x24 become 0xa4 and 0xa1), then convert both to octal (e.g., 0xa4 and 0xa1 become 244 and 241). For example, the first Hiragana character is produced by \244\241.
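The arithmetic above can be sketched in **R** (pure computation, no graphics device needed):

```r
## From the EUC-JP hex code 0x2421 (the first Hiragana character) to
## the pair of octal escapes \244\241 described above.
code <- 0x2421
hi <- bitwShiftR(code, 8) + 0x80   # 0x24 -> 0xa4
lo <- bitwAnd(code, 0xff) + 0x80   # 0x21 -> 0xa1
sprintf("\\%o\\%o", hi, lo)        # the string \244\241
```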
It is also possible to use the hexadecimal code directly. This works for all non-EUC fonts by specifying an escape sequence of the form \\#J1234. For example, the first Hiragana character is produced by \\#J2421.
The Kanji characters may be specified in a third way, using the so-called "Nelson Index", by specifying an escape sequence of the form \\#N1234. For example, the (obsolete) Kanji for ‘one’ is produced by \\#N0001.
`demo(Japanese)` shows the available Japanese characters.
Raw Hershey Glyphs:
all of the characters in the Hershey fonts are stored in a large array. Some characters are not accessible in any of the Hershey fonts. These characters can only be accessed via an escape sequence of the form \\#H1234. For example, the fleur-de-lys is produced by \\#H0746. The sixth and seventh tables of `demo(Hershey)` show all of the available raw glyphs.
### References
<https://www.gnu.org/software/plotutils/plotutils.html>.
### See Also
`[demo](../../utils/html/demo)(Hershey)`, `[par](../../graphics/html/par)`, `[text](../../graphics/html/text)`, `[contour](../../graphics/html/contour)`.
`[Japanese](japanese)` for the Japanese characters in the Hershey fonts.
### Examples
```
Hershey
## for tables of examples, see demo(Hershey)
```
`dev.capture` Capture device output as a raster image
------------------------------------------------------
### Description
`dev.capture` captures the current contents of a graphics device as a raster (bitmap) image.
### Usage
```
dev.capture(native = FALSE)
```
### Arguments
| | |
| --- | --- |
| `native` | Logical. If `FALSE` the result is a matrix of R color names, if `TRUE` the output is returned as a `nativeRaster` object which is more efficient for plotting, but not portable. |
### Details
Not all devices support capture of the output as raster bitmaps. Typically only image-based devices do, and even then not all of them.
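For instance, the vector-based `pdf` device does not support capture, so `dev.capture` returns `NULL`; a minimal sketch:

```r
## dev.capture() only works on (some) image-based devices; pdf() is
## vector-based, so the capture below yields NULL.
f <- tempfile(fileext = ".pdf")
pdf(f)
plot(1:10)
cap <- dev.capture(native = FALSE)
dev.off()
unlink(f)
is.null(cap)   # TRUE: pdf does not support raster capture
```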
### Value
`NULL` if the device does not support capture, otherwise a matrix of color names (for `native = FALSE`) or a `nativeRaster` object (for `native = TRUE`).
`Type1Font` Type 1 and CID Fonts
---------------------------------
### Description
These functions are used to define the translation of an **R** graphics font family name to a Type 1 or CID font description, used by both the `<postscript>` and `<pdf>` graphics devices.
### Usage
```
Type1Font(family, metrics, encoding = "default")
CIDFont(family, cmap, cmapEncoding, pdfresource = "")
```
### Arguments
| | |
| --- | --- |
| `family` | a character string giving the name to be used internally for a Type 1 or CID-keyed font family. This needs to uniquely identify each family, so if you modify a family which is in use (see `[postscriptFonts](postscriptfonts)`) you need to change the family name. |
| `metrics` | a character vector of four or five strings giving paths to the afm (Adobe Font Metric) files for the font. |
| `cmap` | the name of a CMap file for a CID-keyed font. |
| `encoding` | for `Type1Font`, the name of an encoding file. Defaults to `"default"`, which maps on Unix-alikes to `"ISOLatin1.enc"` and on Windows to `"WinAnsi.enc"`. Otherwise, a file name in the ‘enc’ directory of the grDevices package, which is used if the path does not contain a path separator. An extension `".enc"` can be omitted. |
| `cmapEncoding` | The name of a character encoding to be used with the named CMap file: strings will be translated to this encoding when written to the file. |
| `pdfresource` | A chunk of PDF code; only required for using a CID-keyed font on `pdf`; users should not be expected to provide this. |
### Details
For `Type1Fonts`, if four ‘.afm’ files are supplied the fifth is taken to be `"Symbol.afm"`. Relative paths are taken relative to the directory ‘[R\_HOME](../../base/html/rhome)/library/grDevices/afm’. The fifth (symbol) font must be in `AdobeSym` encoding. However, the glyphs in the first four fonts are referenced by name and any encoding given within the ‘.afm’ files is not used.
The ‘.afm’ files may be compressed with (or without) final extension ‘.gz’: the files which ship with **R** are installed as compressed files with this extension.
Glyphs in CID-keyed fonts are accessed by ID (number) and not by name. The CMap file maps encoded strings (usually in a MBCS) to IDs, so `cmap` and `cmapEncoding` specifications must match. There are no real bold or italic versions of CID fonts (bold/italic were very rarely used in traditional East Asian typography), and for the `<pdf>` device all four font faces will be identical. However, for the `<postscript>` device, bold and italic (and bold italic) are emulated.
CID-keyed fonts are intended only for use for the glyphs of East Asian languages, which are all monospaced and are all treated as filling the same bounding box. (Thus `<plotmath>` will work with such characters, but the spacing will be less carefully controlled than with Western glyphs.) The CID-keyed fonts do contain other characters, including a Latin alphabet: non-East-Asian glyphs are regarded as monospaced with half the width of East Asian glyphs. This is often the case, but sometimes Latin glyphs designed for proportional spacing are used (and may look odd). We strongly recommend that CID-keyed fonts are **only** used for East Asian glyphs.
### Value
A list of class `"Type1Font"` or `"CIDFont"`.
### See Also
`<postscript>`, `<pdf>`, `[postscriptFonts](postscriptfonts)`, and `[pdfFonts](postscriptfonts)`.
### Examples
```
## This duplicates "ComputerModernItalic".
CMitalic <- Type1Font("ComputerModern2",
c("CM_regular_10.afm", "CM_boldx_10.afm",
"cmti10.afm", "cmbxti10.afm",
"CM_symbol_10.afm"),
encoding = "TeXtext.enc")
## Not run:
## This could be used by
postscript(family = CMitalic)
## or
postscriptFonts(CMitalic = CMitalic) # once in a session
postscript(family = "CMitalic", encoding = "TeXtext.enc")
## End(Not run)
```
`windows.options` Auxiliary Function to Set/View Defaults for Arguments of windows()
-------------------------------------------------------------------------------------
### Description
The auxiliary function `windows.options` can be used to set or view (if called without arguments) the default values for the arguments of `<windows>`.
`windows.options` needs to be called before calling `windows`, and the default values it sets can be overridden by supplying arguments to `windows`.
### Usage
```
windows.options(..., reset = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | arguments `width`, `height`, `pointsize`, `record`, `rescale`, `xpinch`, `ypinch`, `bg`, `canvas`, `gamma`, `xpos`, `ypos`, `buffered`, `restoreConsole`, `clickToConfirm`, `title`, `fillOddEven` and `antialias` can be supplied. |
| `reset` | logical: should the defaults be reset to their ‘factory-fresh’ values? |
### Details
If both `reset = TRUE` and `...` are supplied the defaults are first reset to the ‘factory-fresh’ values and then the new values are applied.
Option `antialias` applies to screen devices (`windows`, `win.graph`, `X11` and `x11`). There is a separate option, `bitmap.aa.win`, for bitmap devices with `type = "windows"`.
### Value
A named list of all the defaults. If any arguments are supplied the returned values are the old values and the result has the visibility flag turned off.
### See Also
`<windows>`, `<ps.options>`.
### Examples
```
## Not run:
## put something like this in your .Rprofile to customize the defaults
setHook(packageEvent("grDevices", "onLoad"),
function(...)
grDevices::windows.options(width = 8, height = 6,
xpos = 0, pointsize = 10,
bitmap.aa.win = "cleartype"))
## End(Not run)
```
`windowsFonts` Windows Fonts
-----------------------------
### Description
These functions handle the translation of a device-independent R graphics font family name to a windows font description and are available only on Windows.
### Usage
```
windowsFont(family)
windowsFonts(...)
```
### Arguments
| | |
| --- | --- |
| `family` | a character vector containing the font family name (`"TT"` as the first two characters indicates a TrueType font). |
| `...` | either character strings naming mappings to display, or new (named) mappings to define. |
### Details
A windows device is created with a default font (see the documentation for `windows`), but it is also possible to specify a font family when drawing to the device (for example, see the documentation for `"family"` in `[par](../../graphics/html/par)` and for `"fontfamily"` in `[gpar](../../grid/html/gpar)` in the grid package).
The font family sent to the device is a simple string name, which must be mapped to something more specific to windows fonts. A list of mappings is maintained and can be modified by the user.
The `windowsFonts` function can be used to list existing mappings and to define new mappings. The `windowsFont` function can be used to create a new mapping.
Default mappings are provided for three device-independent font family names: `"sans"` for a sans-serif font, `"serif"` for a serif font and `"mono"` for a monospaced font.
These mappings will only be used if the current font face is 1 (plain), 2 (bold), 3 (italic), or 4 (bolditalic).
### See Also
`<windows>`
### Examples
```
if(.Platform$OS.type == "windows") withAutoprint({
windowsFonts()
windowsFonts("mono")
})
## Not run: ## set up for Japanese: needs the fonts installed
windows() # make sure we have the right device type (available on Windows only)
Sys.setlocale("LC_ALL", "ja")
windowsFonts(JP1 = windowsFont("MS Mincho"),
JP2 = windowsFont("MS Gothic"),
JP3 = windowsFont("Arial Unicode MS"))
plot(1:10)
text(5, 2, "\u{4E10}\u{4E00}\u{4E01}", family = "JP1")
text(7, 2, "\u{4E10}\u{4E00}\u{4E01}", family = "JP1", font = 2)
text(5, 1.5, "\u{4E10}\u{4E00}\u{4E01}", family = "JP2")
text(9, 2, "\u{5100}", family = "JP3")
## End(Not run)
```
`pdf` PDF Graphics Device
--------------------------
### Description
`pdf` starts the graphics device driver for producing PDF graphics.
### Usage
```
pdf(file = if(onefile) "Rplots.pdf" else "Rplot%03d.pdf",
width, height, onefile, family, title, fonts, version,
paper, encoding, bg, fg, pointsize, pagecentre, colormodel,
useDingbats, useKerning, fillOddEven, compress)
```
### Arguments
| | |
| --- | --- |
| `file` | a character string giving the file path. If it is of the form `"|cmd"`, the output is piped to the command given by `cmd`. If it is `NULL`, then no external file is created (effectively, no drawing occurs), but the device may still be queried (e.g., for size of text). For use with `onefile = FALSE` give a C integer format such as `"Rplot%03d.pdf"` (the default in that case). (See `<postscript>` for further details.) Tilde expansion (see `[path.expand](../../base/html/path.expand)`) is done. An input with a marked encoding is converted to the native encoding or an error is given. |
| `width, height` | the width and height of the graphics region in inches. The default values are `7`. |
| `onefile` | logical: if true (the default) allow multiple figures in one file. If false, generate a file with name containing the page number for each page. Defaults to `TRUE`, and forced to `TRUE` if `file` is a pipe. |
| `family` | the font family to be used, see `<postscript>`. Defaults to `"Helvetica"`. |
| `title` | title string to embed as the /Title field in the file. Defaults to `"R Graphics Output"`. |
| `fonts` | a character vector specifying **R** graphics font family names for additional fonts which will be included in the PDF file. Defaults to `NULL`. |
| `version` | a string describing the PDF version that will be required to view the output. This is a minimum, and will be increased (with a warning) if necessary. Defaults to `"1.4"`, but see ‘Details’. |
| `paper` | the target paper size. The choices are `"a4"`, `"letter"`, `"legal"` (or `"us"`) and `"executive"` (and these can be capitalized), or `"a4r"` and `"USr"` for rotated (‘landscape’). The default is `"special"`, which means that the `width` and `height` specify the paper size. A further choice is `"default"`; if this is selected, the papersize is taken from the option `"papersize"` if that is set and as `"a4"` if it is unset or empty. Defaults to `"special"`. |
| `encoding` | the name of an encoding file. See `<postscript>` for details. Defaults to `"default"`. |
| `bg` | the initial background color to be used. Defaults to `"transparent"`. |
| `fg` | the initial foreground color to be used. Defaults to `"black"`. |
| `pointsize` | the default point size to be used. Strictly speaking, in bp, that is 1/72 of an inch, but approximately in points. Defaults to `12`. |
| `pagecentre` | logical: should the device region be centred on the page? – is only relevant for `paper != "special"`. Defaults to `TRUE`. |
| `colormodel` | a character string describing the color model: currently allowed values are `"srgb"`, `"gray"` (or `"grey"`) and `"cmyk"`. Defaults to `"srgb"`. See section ‘Color models’. |
| `useDingbats` | logical. Should small circles be rendered *via* the Dingbats font? Defaults to `FALSE`. If `TRUE`, this can produce smaller and better output, but there can be font display problems in broken PDF viewers: although this font is one of the 14 guaranteed to be available in all PDF viewers, that guarantee is not always honoured. For Unix-alikes (including macOS) see the ‘Note’ for a possible fix for some viewers. |
| `useKerning` | logical. Should kerning corrections be included in setting text and calculating string widths? Defaults to `TRUE`. |
| `fillOddEven` | logical controlling the polygon fill mode: see `[polygon](../../graphics/html/polygon)` for details. Defaults to `FALSE`. |
| `compress` | logical. Should PDF streams be generated with Flate compression? Defaults to `TRUE`. |
### Details
All arguments except `file` default to values given by `<pdf.options>()`. The ultimate defaults are quoted in the arguments section.
`pdf()` opens the file `file` and the PDF commands needed to plot any graphics requested are sent to that file.
The `file` argument is interpreted as a C integer format as used by `[sprintf](../../base/html/sprintf)`, with integer argument the page number. The default gives files ‘Rplot001.pdf’, ..., ‘Rplot999.pdf’, ‘Rplot1000.pdf’, ....
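The page-number substitution can be previewed directly with `sprintf()`; the pattern below is the documented default for `onefile = FALSE`:

```r
## the default file pattern used when onefile = FALSE;
## the page number is substituted for the %03d field
fmt <- "Rplot%03d.pdf"
sprintf(fmt, c(1, 99, 1000))
#> "Rplot001.pdf" "Rplot099.pdf" "Rplot1000.pdf"
```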
The `family` argument can be used to specify a PDF-specific font family as the initial/default font for the device. If additional font families are to be used they should be included in the `fonts` argument.
If a device-independent **R** graphics font family is specified (e.g., via `par(family = )` in the graphics package), the PDF device makes use of the PostScript font mappings to convert the **R** graphics font family to a PDF-specific font family description. (See the documentation for `[pdfFonts](postscriptfonts)`.)
This device does *not* embed fonts in the PDF file, so it is only straightforward to use mappings to the font families that can be assumed to be available in any PDF viewer: `"Times"` (equivalently `"serif"`), `"Helvetica"` (equivalently `"sans"`) and `"Courier"` (equivalently `"mono"`). Other families may be specified, but it is the user's responsibility to ensure that these fonts are available on the system and third-party software (e.g., Ghostscript) may be required to embed the fonts so that the PDF can be included in other documents (e.g., LaTeX): see `[embedFonts](embedfonts)`. The URW-based families described for `<postscript>` can be used with viewers, platform dependently:
on Unix-alikes
viewers set up to use URW fonts, which is usual with those based on `xpdf` or Ghostscript.
on Windows
viewers such as GSView which utilise URW fonts.
Since `[embedFonts](embedfonts)` makes use of Ghostscript, it should be able to embed the URW-based families for use with other viewers.
See `<postscript>` for details of encodings, as the internal code is shared between the drivers. The native PDF encoding is given in file ‘PDFDoc.enc’.
The PDF produced is fairly simple, with each page being represented as a single stream (by default compressed and possibly with references to raster images). The **R** graphics model does not distinguish graphics objects at the level of the driver interface.
The `version` argument declares the version of PDF that gets produced. The version must be at least 1.2 when compression is used, 1.4 for semi-transparent output to be understood, and at least 1.3 if CID fonts are to be used: if any of these features are used the version number will be increased (with a warning). (PDF 1.4 was first supported by Acrobat 5 in 2001; it is very unlikely not to be supported in a current viewer.)
Line widths as controlled by `par(lwd = )` are in multiples of 1/96 inch. Multiples less than 1 are allowed. `pch = "."` with `cex = 1` corresponds to a square of side 1/72 inch, which is also the ‘pixel’ size assumed for graphics parameters such as `"cra"`.
The `paper` argument sets the /MediaBox entry in the file, which defaults to `width` by `height`. If it is set to something other than `"special"`, a device region of the specified size is (by default) centred on the rectangle given by the paper size: if either `width` or `height` is less than `0.1` or too large to give a total margin of 0.5 inch, it is reset to the corresponding paper dimension minus 0.5. Thus if you want the default behaviour of `<postscript>` use `pdf(paper = "a4r", width = 0, height = 0)` to centre the device region on a landscape A4 page with 0.25 inch margins.
When the background colour is fully transparent (as is the initial default value), the PDF produced does not paint the background. Most PDF viewers will use a white canvas so the visual effect is as if the background were white. This will not be the case when printing onto coloured paper, though.
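To guarantee a painted background (for example before printing on coloured paper), set `bg` explicitly when opening the device; a minimal sketch writing to a temporary file:

```r
## open a pdf device with an opaque background, draw, and close it;
## the file is written when the device is shut with dev.off()
f <- tempfile(fileext = ".pdf")
pdf(f, bg = "white")    # background is painted, not left transparent
plot(1:10)
dev.off()
file.exists(f)
```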
### Color models
The default color model (`"srgb"`) is sRGB. Model `"gray"` (or `"grey"`) maps sRGB colors to greyscale using perceived luminosity (biased towards green). `"cmyk"` outputs in CMYK colorspace. The simplest possible conversion from sRGB to CMYK is used (<https://en.wikipedia.org/wiki/CMYK_color_model#Mapping_RGB_to_CMYK>), and raster images are output in RGB.
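One plausible reading of the "simplest possible conversion" referenced above can be sketched as follows (a hedged illustration, not **R**'s internal code): `K = 1 - max(R, G, B)`, with the C, M, Y components rescaled by the remaining ink coverage.

```r
## Hedged sketch of the simplest sRGB -> CMYK mapping (per the linked
## Wikipedia description); r, g, b are in [0, 1]
rgb2cmyk <- function(r, g, b) {
  k <- 1 - max(r, g, b)
  if (k == 1) return(c(C = 0, M = 0, Y = 0, K = 1))  # pure black
  c(C = (1 - r - k) / (1 - k),
    M = (1 - g - k) / (1 - k),
    Y = (1 - b - k) / (1 - k),
    K = k)
}
rgb2cmyk(1, 0, 0)   # pure red: C = 0, M = 1, Y = 1, K = 0
```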
Also available for backwards compatibility is model `"rgb"` which uses uncalibrated RGB and corresponds to the model used with that name in **R** prior to 2.13.0. Some viewers may render some plots in that colorspace faster than in sRGB, and the plot files will be smaller.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is 7 inches square.
* Font sizes are in big points.
* The default font family is Helvetica.
* Line widths are as a multiple of 1/96 inch, with a minimum of 0.01 enforced.
* Circles of any radius are allowed. If `useDingbats = TRUE`, opaque circles of less than 10 big points radius are rendered using char 108 in the Dingbats font: all semi-transparent and larger circles using a Bézier curve for each quadrant.
* Colours are by default specified as sRGB.
At very small line widths, the line type may be forced to solid.
### Printing
Except on Windows it is possible to print directly from `pdf` by something like (this is appropriate for a CUPS printing system):
```
pdf("|lp -o landscape", paper = "a4r")
```
This forces `onefile = TRUE`.
### Note
If you see problems with PDF output, do remember that the problem is much more likely to be in your viewer than in **R**. Try another viewer if possible. Symptoms for which the viewer has been at fault are apparent grids on image plots (turn off graphics anti-aliasing in your viewer if you can) and missing or incorrect glyphs in text (viewers silently doing font substitution).
Unfortunately the default viewers on most Linux and macOS systems have these problems, and no obvious way to turn off graphics anti-aliasing.
Acrobat Reader does not use the fonts specified but rather emulates them from multiple-master fonts. This can be seen in imprecise centering of characters, for example the multiply and divide signs in Helvetica. This can be circumvented by embedding fonts where possible. Most other viewers substitute fonts, e.g. URW fonts for the standard Helvetica and Times fonts, and these too often have different font metrics from the true fonts.
Acrobat Reader can be extended by ‘font packs’, and these will be needed for the full use of encodings other than Latin-1 (although they may be offered for download as needed).
On some Unix-alike systems:
If `useDingbats = TRUE`, the default plotting character `pch = 1` was displayed in some PDF viewers incorrectly as a `"q"` character. (These seem to be viewers based on the poppler PDF rendering library). This may be due to incorrect or incomplete mapping of font names to those used by the system. Adding the following lines to ‘~/.fonts.conf’ or ‘/etc/fonts/local.conf’ may circumvent this problem, although this has largely been corrected on the affected systems.
```
<fontconfig>
<alias binding="same">
<family>ZapfDingbats</family>
<accept><family>Dingbats</family></accept>
</alias>
</fontconfig>
```
Some further workarounds for problems with symbol fonts on viewers using ‘fontconfig’ are given in the ‘Cairo Fonts’ section of the help for `[X11](x11)`.
On Windows:
The TeXworks PDF viewer was one of those which has been seen to fail to display Dingbats (used by e.g. `pch = 1`) correctly. Whereas on other platforms the problems seen were incorrect output, on Windows points were silently omitted: however recent versions seem to manage to display Dingbats.
There was a different font bug in the `pdf.js` viewer included in Firefox: that mapped Dingbats to the Symbol font and so displayed symbols such as `pch = 1` as lambda.
### See Also
`[pdfFonts](postscriptfonts)`, `<pdf.options>`, `[embedFonts](embedfonts)`, `[Devices](devices)`, `<postscript>`.
`[cairo\_pdf](cairo)` and (on macOS only) `<quartz>` for other devices that can produce PDF.
More details of font families and encodings and especially handling text in a non-Latin-1 encoding and embedding fonts can be found in
Paul Murrell and Brian Ripley (2006). “Non-standard fonts in PostScript and PDF graphics.” *R News*, **6**(2), 41–47. <https://www.r-project.org/doc/Rnews/Rnews_2006-2.pdf>.
### Examples
```
## Test function for encodings
TestChars <- function(encoding = "ISOLatin1", ...)
{
pdf(encoding = encoding, ...)
par(pty = "s")
plot(c(-1,16), c(-1,16), type = "n", xlab = "", ylab = "",
xaxs = "i", yaxs = "i")
title(paste("Centred chars in encoding", encoding))
grid(17, 17, lty = 1)
for(i in c(32:255)) {
x <- i %% 16
y <- i %/% 16
points(x, y, pch = i)
}
dev.off()
}
## there will be many warnings.
TestChars("ISOLatin2")
## this does not view properly in older viewers.
TestChars("ISOLatin2", family = "URWHelvetica")
## works well for viewing in gs-based viewers, and often in xpdf.
```
r None
`trans3d` 3D to 2D Transformation for Perspective Plots
--------------------------------------------------------
### Description
Projection of 3-dimensional to 2-dimensional points using a 4x4 viewing transformation matrix. Mainly for adding to perspective plots such as `[persp](../../graphics/html/persp)`.
### Usage
```
trans3d(x, y, z, pmat)
```
### Arguments
| | |
| --- | --- |
| `x, y, z` | numeric vectors of equal length, specifying points in 3D space. |
| `pmat` | a *4 x 4* *viewing transformation matrix*, suitable for projecting the 3D coordinates *(x,y,z)* into the 2D plane using homogeneous 4D coordinates *(x,y,z,t)*; such matrices are returned by `[persp](../../graphics/html/persp)()`. |
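As a toy illustration (using an identity `pmat`, not a matrix returned by `persp()`), the homogeneous coordinate is `t = 1` and the projection simply returns `x` and `y` unchanged:

```r
## trans3d() with the 4x4 identity matrix: (x, y, z, 1) is mapped to
## itself, so the projected 2D coordinates are just x and y
p <- trans3d(x = 1:3, y = 4:6, z = 7:9, pmat = diag(4))
p$x   # 1 2 3
p$y   # 4 5 6
```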
### Value
a list with two components
| | |
| --- | --- |
| `x,y` | the projected 2d coordinates of the 3d input `(x,y,z)`. |
### See Also
`[persp](../../graphics/html/persp)`
### Examples
```
## See help(persp) {after attaching the 'graphics' package}
## -----------
```
r None
`palette` Set or View the Graphics Palette
-------------------------------------------
### Description
View or manipulate the color palette which is used when `col=` has a numeric index and supporting functions.
### Usage
```
palette(value)
palette.pals()
palette.colors(n = NULL, palette = "Okabe-Ito", alpha, recycle = FALSE)
```
### Arguments
| | |
| --- | --- |
| `value` | an optional character vector specifying a new palette (see Details). |
| `n` | the number of colors to select from a palette. The default `[NULL](../../base/html/null)` selects all colors of the given palette. |
| `palette` | a valid palette name (one of `palette.pals()`). The name is matched to the list of available palettes, ignoring upper vs. lower case, spaces, dashes, etc. in the matching. |
| `alpha` | an alpha-transparency level in the range [0,1] (0 means transparent and 1 means opaque). |
| `recycle` | logical indicating what happens in case `n > length(palette(.))`. By default (`recycle = FALSE`), the result is as for `n = NULL`, but with a warning. |
### Details
The `palette()` function gets or sets the current palette, the `palette.pals()` function lists the available predefined palettes, and the `palette.colors()` function selects colors from the predefined palettes.
The color palette and referring to colors by number (see e.g. `[par](../../graphics/html/par)`) was provided for compatibility with S. **R** extends and improves on the available set of palettes.
If `value` has length 1, it is taken to be the name of a built-in color palette. The available palette names are returned by `palette.pals()`. It is also possible to specify `"default"`.
If `value` has length greater than 1 it is assumed to contain a description of the colors which are to make up the new palette. The maximum size for a palette is 1024 entries.
If `value` is omitted, no change is made to the current palette.
There is only one palette setting for all devices in an **R** session. If the palette is changed, the new palette applies to all subsequent plotting.
The current palette also applies to re-plotting (for example if an on-screen device is resized or `[dev.copy](dev2)` or `[replayPlot](recordplot)` is used). The palette is recorded on the displaylist at the start of each page and when it is changed.
### Value
`palette()` returns a character vector giving the colors from the palette which *was* in effect. This is `[invisible](../../base/html/invisible)` unless the argument is omitted.
`palette.pals()` returns a character vector giving the names of predefined palettes.
`palette.colors()` returns a vector of R colors.
### See Also
`<colors>` for the vector of built-in named colors; `<hsv>`, `<gray>`, `[hcl.colors](palettes)`, ... to construct colors.
`<adjustcolor>`, e.g., for tweaking existing palettes; `[colorRamp](colorramp)` to interpolate colors, making custom palettes; `<col2rgb>` for translating colors to RGB 3-vectors.
### Examples
```
require(graphics)
palette() # obtain the current palette
palette("R3");palette() # old default palette
palette("ggplot2") # ggplot2-style palette
palette()
palette(hcl.colors(8, "viridis"))
(palette(gray(seq(0,.9,length.out = 25)))) # gray scales; print old palette
matplot(outer(1:100, 1:30), type = "l", lty = 1,lwd = 2, col = 1:30,
main = "Gray Scales Palette",
sub = "palette(gray(seq(0, .9, len=25)))")
palette("default") # reset back to the default
## on a device where alpha transparency is supported,
## use 'alpha = 0.3' transparency with the default palette :
mycols <- adjustcolor(palette(), alpha.f = 0.3)
opal <- palette(mycols)
x <- rnorm(1000); xy <- cbind(x, 3*x + rnorm(1000))
plot (xy, lwd = 2,
main = "Alpha-Transparency Palette\n alpha = 0.3")
xy[,1] <- -xy[,1]
points(xy, col = 8, pch = 16, cex = 1.5)
palette("default")
## List available built-in palettes
palette.pals()
## Demonstrate the colors 1:8 in different palettes using a custom matplot()
sinplot <- function(main=NULL) {
x <- outer(
seq(-pi, pi, length.out = 50),
seq(0, pi, length.out = 8),
function(x, y) sin(x - y)
)
matplot(x, type = "l", lwd = 4, lty = 1, col = 1:8, ylab = "", main=main)
}
sinplot("default palette")
palette("R3"); sinplot("R3")
palette("Okabe-Ito"); sinplot("Okabe-Ito")
palette("Tableau") ; sinplot("Tableau")
palette("default") # reset
## color swatches for palette.colors()
palette.swatch <- function(palette = palette.pals(), n = 8, nrow = 8,
border = "black", cex = 1, ...)
{
cols <- sapply(palette, palette.colors, n = n, recycle = TRUE)
ncol <- ncol(cols)
nswatch <- min(ncol, nrow)
op <- par(mar = rep(0.1, 4),
mfrow = c(1, min(5, ceiling(ncol/nrow))),
cex = cex, ...)
on.exit(par(op))
while (length(palette)) {
subset <- seq_len(min(nrow, ncol(cols)))
plot.new()
plot.window(c(0, n), c(0.25, nrow + 0.25))
y <- rev(subset)
text(0, y + 0.1, palette[subset], adj = c(0, 0))
y <- rep(y, each = n)
rect(rep(0:(n-1), n), y, rep(1:n, n), y - 0.5,
col = cols[, subset], border = border)
palette <- palette[-subset]
cols <- cols [, -subset, drop = FALSE]
}
}
palette.swatch()
palette.swatch(n = 26) # show full "Alphabet"; recycle most others
```
r None
`colorRamp` Color interpolation
--------------------------------
### Description
These functions return functions that interpolate a set of given colors to create new color palettes (like `[topo.colors](palettes)`) and color ramps, functions that map the interval *[0, 1]* to colors (like `[grey](gray)`).
### Usage
```
colorRamp(colors, bias = 1, space = c("rgb", "Lab"),
interpolate = c("linear", "spline"), alpha = FALSE)
colorRampPalette(colors, ...)
```
### Arguments
| | |
| --- | --- |
| `colors` | colors to interpolate; must be a valid argument to `<col2rgb>()`. |
| `bias` | a positive number. Higher values give more widely spaced colors at the high end. |
| `space` | a character string; interpolation in RGB or CIE Lab color spaces. |
| `interpolate` | use spline or linear interpolation. |
| `alpha` | logical: should alpha channel (opacity) values be returned? It is an error to give a true value if `space` is specified. |
| `...` | arguments to pass to `colorRamp`. |
### Details
The CIE Lab color space is approximately perceptually uniform, and so gives smoother and more uniform color ramps. On the other hand, palettes that vary from one hue to another via white may have a more symmetrical appearance in RGB space.
The conversion formulas in this function do not appear to be completely accurate and the color ramp may not reach the extreme values in Lab space. Future changes in the **R** color model may change the colors produced with `space = "Lab"`.
### Value
`colorRamp` returns a `[function](../../base/html/function)` with argument a vector of values between 0 and 1 that are mapped to a numeric matrix of RGB color values with one row per color and 3 or 4 columns.
`colorRampPalette` returns a function that takes an integer argument (the required number of colors) and returns a character vector of colors (see `<rgb>`) interpolating the given sequence (similar to `[heat.colors](palettes)` or `[terrain.colors](palettes)`).
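A quick numeric check of both return types, using the defaults (linear interpolation in RGB space):

```r
## colorRamp() returns a function mapping [0, 1] to an RGB matrix
ramp <- colorRamp(c("black", "white"))
ramp(c(0, 0.5, 1))   # rows of RGB values: 0, 127.5, 255 per channel

## colorRampPalette() returns a function taking the number of colors
pal <- colorRampPalette(c("black", "white"))
pal(3)               # three hex colors from black to white
```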
### See Also
Good starting points for interpolation are the “sequential” and “diverging” ColorBrewer palettes in the [RColorBrewer](https://CRAN.R-project.org/package=RColorBrewer) package.
`[splinefun](../../stats/html/splinefun)` or `[approxfun](../../stats/html/approxfun)` are used for interpolation.
### Examples
```
## Both return a *function* :
colorRamp(c("red", "green"))( (0:4)/4 ) ## (x) , x in [0,1]
colorRampPalette(c("blue", "red"))( 4 ) ## (n)
## a ramp in opacity of blue values
colorRampPalette(c(rgb(0,0,1,1), rgb(0,0,1,0)), alpha = TRUE)(8)
require(graphics)
## Here space="rgb" gives palettes that vary only in saturation,
## as intended.
## With space="Lab" the steps are more uniform, but the hues
## are slightly purple.
filled.contour(volcano,
color.palette =
colorRampPalette(c("red", "white", "blue")),
asp = 1)
filled.contour(volcano,
color.palette =
colorRampPalette(c("red", "white", "blue"),
space = "Lab"),
asp = 1)
## Interpolating a 'sequential' ColorBrewer palette
YlOrBr <- c("#FFFFD4", "#FED98E", "#FE9929", "#D95F0E", "#993404")
filled.contour(volcano,
color.palette = colorRampPalette(YlOrBr, space = "Lab"),
asp = 1)
filled.contour(volcano,
color.palette = colorRampPalette(YlOrBr, space = "Lab",
bias = 0.5),
asp = 1)
## 'jet.colors' is "as in Matlab"
## (and hurting the eyes by over-saturation)
jet.colors <-
colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan",
"#7FFF7F", "yellow", "#FF7F00", "red", "#7F0000"))
filled.contour(volcano, color.palette = jet.colors, asp = 1)
## space="Lab" helps when colors don't form a natural sequence
m <- outer(1:20,1:20,function(x,y) sin(sqrt(x*y)/3))
rgb.palette <- colorRampPalette(c("red", "orange", "blue"),
space = "rgb")
Lab.palette <- colorRampPalette(c("red", "orange", "blue"),
space = "Lab")
filled.contour(m, col = rgb.palette(20))
filled.contour(m, col = Lab.palette(20))
```
r None
`getGraphicsEvent` Wait for a mouse or keyboard event from a graphics window
-----------------------------------------------------------------------------
### Description
This function waits for input from a graphics window in the form of a mouse or keyboard event.
### Usage
```
getGraphicsEvent(prompt = "Waiting for input",
onMouseDown = NULL, onMouseMove = NULL,
onMouseUp = NULL, onKeybd = NULL,
onIdle = NULL,
consolePrompt = prompt)
setGraphicsEventHandlers(which = dev.cur(), ...)
getGraphicsEventEnv(which = dev.cur())
setGraphicsEventEnv(which = dev.cur(), env)
```
### Arguments
| | |
| --- | --- |
| `prompt` | prompt to be displayed to the user in the graphics window |
| `onMouseDown` | a function to respond to mouse clicks |
| `onMouseMove` | a function to respond to mouse movement |
| `onMouseUp` | a function to respond to mouse button releases |
| `onKeybd` | a function to respond to key presses |
| `onIdle` | a function to call when no events are pending |
| `consolePrompt` | prompt to be displayed to the user in the console |
| `which` | which graphics device does the call apply to? |
| `...` | items including handlers to be placed in the event environment |
| `env` | an environment to be used as the event environment |
### Details
These functions allow user input from some graphics devices (currently only the `windows()`, `X11(type = "Xlib")` and `X11(type = "cairo")` screen displays in base **R**). Event handlers may be installed to respond to events involving the mouse or keyboard.
The functions are related as follows. If any of the first six arguments to `getGraphicsEvent` are given, then it uses those in a call to `setGraphicsEventHandlers` to replace any existing handlers in the current device. This is for compatibility with pre-2.12.0 **R** versions. The current normal way to set up event handlers is to set them using `setGraphicsEventHandlers` or `setGraphicsEventEnv` on one or more graphics devices, and then use `getGraphicsEvent()` with no arguments to retrieve event data. `getGraphicsEventEnv()` may be used to save the event environment for use later.
The names of the arguments in `getGraphicsEvent` are special. When handling events, the graphics system will look through the event environment for functions named `onMouseDown`, `onMouseMove`, `onMouseUp`, `onKeybd`, and `onIdle`, and use them as event handlers. It will use `prompt` for a label on the graphics device. Two other special names are `which`, which will identify the graphics device, and `result`, where the result of the last event handler will be stored before being returned by `getGraphicsEvent()`.
The mouse event handlers should be functions with header `function(buttons, x, y)`. The coordinates `x` and `y` will be passed to mouse event handlers in device independent coordinates (i.e., the lower left corner of the window is `(0,0)`, the upper right is `(1,1)`). The `buttons` argument will be a vector listing the buttons that are pressed at the time of the event, with 0 for left, 1 for middle, and 2 for right.
The keyboard event handler should be a function with header `function(key)`. A single element character vector will be passed to this handler, corresponding to the key press. Shift and other modifier keys will have been processed, so `shift-a` will be passed as `"A"`. The following special keys may also be passed to the handler:
* Control keys, passed as `"Ctrl-A"`, etc.
* Navigation keys, passed as one of
`"Left", "Up", "Right", "Down", "PgUp", "PgDn", "End", "Home"`
* Edit keys, passed as one of `"Ins", "Del"`
* Function keys, passed as one of `"F1", "F2", ...`
The idle event handler `onIdle` should be a function with no arguments. If the function is undefined or `NULL`, then R will typically call a system function which (efficiently) waits for the next event to appear on a filehandle. Otherwise, the idle event handler will be called whenever the event queue of the graphics device was found to be empty, i.e. in an infinite loop. This feature is intended to allow animations to respond to user input, and could be CPU-intensive. Currently, `onIdle` is only implemented for `X11()` devices.
Note that calling `Sys.sleep()` is not recommended within an idle handler: `Sys.sleep()` removes pending graphics events in order to allow users to move, close, or resize windows while it is executing. Events such as mouse and keyboard events occurring during `Sys.sleep()` are lost, and currently do not trigger the event handlers registered via `getGraphicsEvent` or `setGraphicsEventHandlers`.
The event handlers are standard R functions, and will be executed as though called from the event environment.
In an interactive session, events will be processed until
* one of the event handlers returns a non-`NULL` value which will be returned as the value of `getGraphicsEvent`, or
* the user interrupts the function from the console.
### Value
When run interactively, `getGraphicsEvent` returns a non-`NULL` value returned from one of the event handlers. In a non-interactive session, `getGraphicsEvent` will return `NULL` immediately. It will also return `NULL` if the user closes the last window that has graphics handlers.
`getGraphicsEventEnv` returns the current event environment for the graphics device, or `NULL` if none has been set.
`setGraphicsEventEnv` and `setGraphicsEventHandlers` return the previous event environment for the graphics device.
### Author(s)
Duncan Murdoch
### Examples
```
# This currently only works on the Windows, X11(type = "Xlib"), and
# X11(type = "cairo") screen devices...
## Not run:
savepar <- par(ask = FALSE)
dragplot <- function(..., xlim = NULL, ylim = NULL, xaxs = "r", yaxs = "r") {
plot(..., xlim = xlim, ylim = ylim, xaxs = xaxs, yaxs = yaxs)
startx <- NULL
starty <- NULL
prevx <- NULL
prevy <- NULL
usr <- NULL
devset <- function()
if (dev.cur() != eventEnv$which) dev.set(eventEnv$which)
dragmousedown <- function(buttons, x, y) {
startx <<- x
starty <<- y
prevx <<- 0
prevy <<- 0
devset()
usr <<- par("usr")
eventEnv$onMouseMove <- dragmousemove
NULL
}
dragmousemove <- function(buttons, x, y) {
devset()
deltax <- diff(grconvertX(c(startx, x), "ndc", "user"))
deltay <- diff(grconvertY(c(starty, y), "ndc", "user"))
if (abs(deltax-prevx) + abs(deltay-prevy) > 0) {
plot(..., xlim = usr[1:2]-deltax, xaxs = "i",
ylim = usr[3:4]-deltay, yaxs = "i")
prevx <<- deltax
prevy <<- deltay
}
NULL
}
mouseup <- function(buttons, x, y) {
eventEnv$onMouseMove <- NULL
}
keydown <- function(key) {
if (key == "q") return(invisible(1))
eventEnv$onMouseMove <- NULL
NULL
}
setGraphicsEventHandlers(prompt = "Click and drag, hit q to quit",
onMouseDown = dragmousedown,
onMouseUp = mouseup,
onKeybd = keydown)
eventEnv <- getGraphicsEventEnv()
}
dragplot(rnorm(1000), rnorm(1000))
getGraphicsEvent()
par(savepar)
## End(Not run)
```
r None
`dev2` Copy Graphics Between Multiple Devices
----------------------------------------------
### Description
`dev.copy` copies the graphics contents of the current device to the device specified by `which` or to a new device which has been created by the function specified by `device` (it is an error to specify both `which` and `device`). (If recording is off on the current device, there are no contents to copy: this will result in no plot or an empty plot.) The device copied to becomes the current device.
`dev.print` copies the graphics contents of the current device to a new device which has been created by the function specified by `device` and then shuts the new device.
`dev.copy2eps` is similar to `dev.print` but produces an EPSF output file in portrait orientation (`horizontal = FALSE`). `dev.copy2pdf` is the analogue for PDF output.
`dev.control` allows the user to control the recording of graphics operations in a device. If `displaylist` is `"inhibit"` (`"enable"`) then recording is turned off (on). It is only safe to change this at the beginning of a plot (just before or just after a new page). Initially recording is on for screen devices, and off for print devices.
### Usage
```
dev.copy(device, ..., which = dev.next())
dev.print(device = postscript, ...)
dev.copy2eps(...)
dev.copy2pdf(..., out.type = "pdf")
dev.control(displaylist = c("inhibit", "enable"))
```
### Arguments
| | |
| --- | --- |
| `device` | A device function (e.g., `x11`, `postscript`, ...) |
| `...` | Arguments to the `device` function above: for `dev.copy2eps` arguments to `<postscript>` and for `dev.copy2pdf`, arguments to `<pdf>`. For `dev.print`, this includes `which` and by default any `<postscript>` arguments. |
| `which` | A device number specifying the device to copy to. |
| `out.type` | The name of the output device: can be `"pdf"`, or `"quartz"` (some macOS builds) or `"cairo"` (Windows and some Unix-alikes, see `[cairo\_pdf](cairo)`). |
| `displaylist` | A character string: the only valid values are `"inhibit"` and `"enable"`. |
### Details
Note that these functions copy the *device region* and not a plot: the background colour of the device surface is part of what is copied. Most screen devices default to a transparent background, which is probably not what is needed when copying to a device such as `<png>`.
For `dev.copy2eps` and `dev.copy2pdf`, `width` and `height` are taken from the current device unless otherwise specified. If just one of `width` and `height` is specified, the other is adjusted to preserve the aspect ratio of the device being copied. The default file name is `Rplot.eps` or `Rplot.pdf`, and can be overridden by specifying a `file` argument.
Copying to devices such as `<postscript>` and `<pdf>` which need font families pre-specified needs extra care – **R** is unaware of which families were used in a plot and so they will need to manually specified by the `fonts` argument passed as part of `...`. Similarly, if the device to be copied from was opened with a `family` argument, a suitable `family` argument will need to be included in `...`.
The default for `dev.print` is to produce and print a postscript copy. This will not work unless `[options](../../base/html/options)("printcmd")` is set suitably and you have a PostScript printing system: see `<postscript>` for how to set this up. Windows users may prefer to use `dev.print(win.print)`.
`dev.print` is most useful for producing a postscript print (its default) when the following applies. Unless `file` is specified, the plot will be printed. Unless `width`, `height` and `pointsize` are specified the plot dimensions will be taken from the current device, shrunk if necessary to fit on the paper. (`pointsize` is rescaled if the plot is shrunk.) If `horizontal` is not specified and the plot can be printed at full size by switching its value this is done instead of shrinking the plot region.
If `dev.print` is used with a specified `device` (even `postscript`) it sets the width and height in the same way as `dev.copy2eps`. This will not be appropriate unless the device specifies dimensions in inches, in particular not for `png`, `jpeg`, `tiff` and `bmp` unless `units = "inches"` is specified.
### Value
`dev.copy` returns the name and number of the device which has been copied to.
`dev.print`, `dev.copy2eps` and `dev.copy2pdf` return the name and number of the device which has been copied from.
### Note
Most devices (including all screen devices) have a display list which records all of the graphics operations that occur in the device. `dev.copy` copies graphics contents by copying the display list from one device to another device. Also, automatic redrawing of graphics contents following the resizing of a device depends on the contents of the display list.
After the command `dev.control("inhibit")`, graphics operations are not recorded in the display list so that `dev.copy` and `dev.print` will not copy anything and the contents of a device will not be redrawn automatically if the device is resized.
The recording of graphics operations is relatively expensive in terms of memory so the command `dev.control("inhibit")` can be useful if memory usage is an issue.
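A minimal sketch of inhibiting the display list (the file name and plot are illustrative):

```r
## sketch: turn off plot recording on a file device to save memory
pdf("big-plot.pdf")           # illustrative file name
dev.control("inhibit")        # from now on nothing is recorded, so the
plot(rnorm(1e5), pch = ".")   # contents cannot be dev.copy()d or redrawn
dev.off()
```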
### See Also
`[dev.cur](dev)` and other `dev.xxx` functions.
### Examples
```
## Not run:
x11() # on a Unix-alike
plot(rnorm(10), main = "Plot 1")
dev.copy(device = x11)
mtext("Copy 1", 3)
dev.print(width = 6, height = 6, horizontal = FALSE) # prints it
dev.off(dev.prev())
dev.off()
## End(Not run)
```
r None
`pretty.Date` Pretty Breakpoints for Date-Time Classes
-------------------------------------------------------
### Description
Compute a sequence of about `n+1` equally spaced ‘nice’ values which cover the range of the values in `x`, possibly of length one, when `min.n = 0` and there is only one unique `x`.
### Usage
```
## S3 method for class 'Date'
pretty(x, n = 5, min.n = n %/% 2, sep = " ", ...)
## S3 method for class 'POSIXt'
pretty(x, n = 5, min.n = n %/% 2, sep = " ", ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object of class `"Date"` or `"POSIXt"` (i.e., `"POSIXct"` or `"POSIXlt"`). |
| `n` | integer giving the *desired* number of intervals. |
| `min.n` | nonnegative integer giving the *minimal* number of intervals. |
| `sep` | character string, serving as a separator for certain formats (e.g., between month and year). |
| `...` | further arguments for compatibility with the generic, ignored. |
### Value
A vector (of the suitable class) of locations, with attribute `"labels"` giving corresponding formatted character labels.
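For instance (a sketch; the dates are arbitrary), the `"labels"` attribute can be inspected directly and used for axis annotation:

```r
## sketch: pretty() breakpoints on a Date range and their formatted labels
d <- seq(as.Date("2020-01-01"), as.Date("2020-12-31"), by = "day")
at <- pretty(d)
at                     # a Date vector of 'nice' locations
attr(at, "labels")     # formatted character labels matching the locations
```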
### See Also
`[pretty](../../base/html/pretty)` for the default method.
### Examples
```
pretty(Sys.Date())
pretty(Sys.time(), n = 10)
pretty(as.Date("2000-03-01")) # R 1.0.0 came in a leap year
## time ranges in diverse scales:
require(stats)
steps <- setNames(,
c("10 secs", "1 min", "5 mins", "30 mins", "6 hours", "12 hours",
"1 DSTday", "2 weeks", "1 month", "6 months", "1 year",
"10 years", "50 years", "1000 years"))
x <- as.POSIXct("2002-02-02 02:02")
lapply(steps,
function(s) {
at <- pretty(seq(x, by = s, length.out = 2), n = 5)
attr(at, "labels")
})
```
r None
`as.graphicsAnnot` Coerce an Object for Graphics Annotation
------------------------------------------------------------
### Description
Coerce an **R** object into a form suitable for graphics annotation.
### Usage
```
as.graphicsAnnot(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object |
### Details
Expressions, calls and names (as used by <plotmath>) are passed through unchanged. All other objects with an explicit class (as determined by `[is.object](../../base/html/is.object)`) are coerced by `[as.character](../../base/html/character)` to character vectors.
All the graphics and grid functions which use this coerce calls and names to expressions internally.
### Value
A language object or a character vector.
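A short sketch of the coercion rules described above:

```r
## language objects (calls, names, expressions) pass through unchanged
a1 <- as.graphicsAnnot(quote(alpha^2))
is.call(a1)                         # TRUE: still a call

## objects with an explicit class are coerced via as.character()
a2 <- as.graphicsAnnot(Sys.Date())
is.character(a2)                    # TRUE: a character vector

## plain character vectors (no explicit class) are returned as-is
as.graphicsAnnot("label")
```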
r None
`dev` Control Multiple Devices
-------------------------------
### Description
These functions provide control over multiple graphics devices.
### Usage
```
dev.cur()
dev.list()
dev.next(which = dev.cur())
dev.prev(which = dev.cur())
dev.off(which = dev.cur())
dev.set(which = dev.next())
dev.new(..., noRStudioGD = FALSE)
graphics.off()
```
### Arguments
| | |
| --- | --- |
| `which` | An integer specifying a device number. |
| `...` | arguments to be passed to the device selected. |
| `noRStudioGD` | Do not use the RStudio graphics device even if specified as the default device: it does not accept arguments such as `width` and `height`. |
### Details
Only one device is the ‘active’ device: this is the device in which all graphics operations occur. There is a `"null device"` which is always open but is really a placeholder: any attempt to use it will open a new device specified by `[getOption](../../base/html/options)("device")`.
Devices are associated with a name (e.g., `"X11"` or `"postscript"`) and a number in the range 1 to 63; the `"null device"` is always device 1. Once a device has been opened the null device is not considered as a possible active device. There is a list of open devices, and this is considered as a circular list not including the null device. `dev.next` and `dev.prev` select the next open device in the appropriate direction, unless no device is open.
`dev.off` shuts down the specified (by default the current) device. If the current device is shut down and any other devices are open, the next open device is made current. It is an error to attempt to shut down device 1. `graphics.off()` shuts down all open graphics devices. Normal termination of a session runs the internal equivalent of `graphics.off()`.
`dev.set` makes the specified device the active device. If there is no device with that number, it is equivalent to `dev.next`. If `which = 1` it opens a new device and selects that.
`dev.new` opens a new device. Normally **R** will open a new device automatically when needed, but this enables you to open further devices in a platform-independent way. (For which device is used see `[getOption](../../base/html/options)("device")`.) Note that care is needed with file-based devices such as `<pdf>` and `<postscript>`, in which case file names such as ‘Rplots.pdf’, ‘Rplots1.pdf’, ..., ‘Rplots999.pdf’ are tried in turn. Only named arguments are passed to the device, and then only if they match the argument list of the device. Even so, care is needed with the interpretation of e.g. `width`, and for the standard bitmap devices `units = "in", res = 72` is forced if neither is supplied but both `width` and `height` are.
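For example (a sketch; the dimensions are arbitrary), a device of a given size can be opened portably:

```r
## sketch: open a 5 x 4 inch device on any platform
dev.new(width = 5, height = 4, noRStudioGD = TRUE)
dev.cur()    # the number and name of the newly active device
dev.off()
```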
### Value
`dev.cur` returns a length-one named integer vector giving the number and name of the active device, or 1, the null device, if none is active.
`dev.list` returns the numbers of all open devices, except device 1, the null device. This is a numeric vector with a `[names](../../base/html/names)` attribute giving the device names, or `NULL` if there is no open device.
`dev.next` and `dev.prev` return the number and name of the next / previous device in the list of devices. This will be the null device if and only if there are no open devices.
`dev.off` returns the number and name of the new active device (after the specified device has been shut down).
`dev.set` returns the number and name of the new active device.
`dev.new` returns the return value of the device opened, usually invisible `NULL`.
### See Also
`[Devices](devices)`, such as `<postscript>`, etc.
`[layout](../../graphics/html/layout)` and its links for setting up plotting regions on the current device.
### Examples
```
## Not run: ## Unix-specific example
x11()
plot(1:10)
x11()
plot(rnorm(10))
dev.set(dev.prev())
abline(0, 1) # through the 1:10 points
dev.set(dev.next())
abline(h = 0, col = "gray") # for the residual plot
dev.set(dev.prev())
dev.off(); dev.off() #- close the two X devices
## End(Not run)
```
r None
`n2mfrow` Compute Default 'mfrow' From Number of Plots
-------------------------------------------------------
### Description
Easy setup for plotting multiple figures (in a rectangular layout) on one page. This computes a sensible default for `[par](../../graphics/html/par)(mfrow)`.
### Usage
```
n2mfrow(nr.plots, asp = 1)
```
### Arguments
| | |
| --- | --- |
| `nr.plots` | integer; the number of plot figures you'll want to draw. |
| `asp` | positive number; the target aspect ratio (columns / rows) in the output. This was previously hardwired to `1`; because of that and for back compatibility, behaviour is somewhat discontinuous when varying `asp` around 1, for `nr.plots <= 12`. |
### Value
A length-two integer vector `(nr, nc)` giving the positive number of rows and columns, fulfilling `nr * nc >= nr.plots`, and currently, for `asp = 1`, `nr >= nc >= 1`.
### Note
Conceptually, this is a quadratic integer optimization problem, with inequality constraints *nr >= 1*, *nc >= 1*, and *nr.plots >= nr\*nc* (and possibly `nr >= asp*nc`), and *two* objective functions which would have to be combined via a tuning weight, say *w*, to, e.g., *(nr.plots - nr\*nc) + w |nr/nc - asp|*.
The current algorithm is simple and not trying to solve one of these optimization problems.
### Author(s)
Martin Maechler; suggestion of `asp` by Michael Chirico.
### See Also
`[par](../../graphics/html/par)`, `[layout](../../graphics/html/layout)`.
### Examples
```
require(graphics)
n2mfrow(8) # 3 x 3
n <- 5 ; x <- seq(-2, 2, length.out = 51)
## suppose now that 'n' is not known {inside function}
op <- par(mfrow = n2mfrow(n))
for (j in 1:n)
plot(x, x^j, main = substitute(x^ exp, list(exp = j)), type = "l",
col = "blue")
sapply(1:14, n2mfrow)
sapply(1:14, n2mfrow, asp=16/9)
```
r None
`gray.colors` Gray Color Palette
---------------------------------
### Description
Create a vector of `n` gamma-corrected gray colors.
### Usage
```
gray.colors(n, start = 0.3, end = 0.9, gamma = 2.2, alpha, rev = FALSE)
grey.colors(n, start = 0.3, end = 0.9, gamma = 2.2, alpha, rev = FALSE)
```
### Arguments
| | |
| --- | --- |
| `n` | the number of gray colors (*≥ 1*) to be in the palette. |
| `start` | starting gray level in the palette (should be between `0` and `1` where zero indicates `"black"` and one indicates `"white"`). |
| `end` | ending gray level in the palette. |
| `gamma` | the gamma correction. |
| `alpha` | the opacity, if specified. |
| `rev` | logical indicating whether the ordering of the colors should be reversed. |
### Details
The function `gray.colors` chooses a series of `n` gamma-corrected gray levels between `start` and `end`: `seq(start^gamma, end^gamma, length = n)^(1/gamma)`. The returned palette contains the corresponding gray colors. This palette is used in `[barplot.default](../../graphics/html/barplot)`.
`grey.colors` is an alias for `gray.colors`.
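The construction quoted in ‘Details’ can be checked directly (a sketch; the equality should hold because the palette is built from exactly these gamma-corrected levels):

```r
## gray.colors(n) should match gray() applied to the gamma-corrected levels
n <- 5; start <- 0.3; end <- 0.9; gamma <- 2.2
levs <- seq(start^gamma, end^gamma, length.out = n)^(1/gamma)
identical(gray.colors(n), gray(levs))
```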
### Value
A vector of `n` gray colors.
### See Also
`<gray>`, `[rainbow](palettes)`, `<palette>`.
### Examples
```
require(graphics)
pie(rep(1, 12), col = gray.colors(12))
barplot(1:12, col = gray.colors(12))
```
r None
`plotmath` Mathematical Annotation in R
----------------------------------------
### Description
If the `text` argument to one of the text-drawing functions (`[text](../../graphics/html/text)`, `[mtext](../../graphics/html/mtext)`, `[axis](../../graphics/html/axis)`, `[legend](../../graphics/html/legend)`) in **R** is an expression, the argument is interpreted as a mathematical expression and the output will be formatted according to TeX-like rules. Expressions can also be used for titles, subtitles and x- and y-axis labels (but not for axis labels on `persp` plots).
In most cases other language objects (names and calls, including formulas) are coerced to expressions and so can also be used.
### Details
A mathematical expression must obey the normal rules of syntax for any **R** expression, but it is interpreted according to very different rules than for normal **R** expressions.
It is possible to produce many different mathematical symbols, generate sub- or superscripts, produce fractions, etc.
The output from `demo(plotmath)` includes several tables which show the available features. In these tables, the columns of grey text show sample **R** expressions, and the columns of black text show the resulting output.
The available features are also described in the tables below:
| | |
| --- | --- |
| **Syntax** | **Meaning** |
| `x + y` | x plus y |
| `x - y` | x minus y |
| `x*y` | juxtapose x and y |
| `x/y` | x forwardslash y |
| `x %+-% y` | x plus or minus y |
| `x %/% y` | x divided by y |
| `x %*% y` | x times y |
| `x %.% y` | x cdot y |
| `x[i]` | x subscript i |
| `x^2` | x superscript 2 |
| `paste(x, y, z)` | juxtapose x, y, and z |
| `sqrt(x)` | square root of x |
| `sqrt(x, y)` | yth root of x |
| `x == y` | x equals y |
| `x != y` | x is not equal to y |
| `x < y` | x is less than y |
| `x <= y` | x is less than or equal to y |
| `x > y` | x is greater than y |
| `x >= y` | x is greater than or equal to y |
| `!x` | not x |
| `x %~~% y` | x is approximately equal to y |
| `x %=~% y` | x and y are congruent |
| `x %==% y` | x is defined as y |
| `x %prop% y` | x is proportional to y |
| `x %~% y` | x is distributed as y |
| `plain(x)` | draw x in normal font |
| `bold(x)` | draw x in bold font |
| `italic(x)` | draw x in italic font |
| `bolditalic(x)` | draw x in bolditalic font |
| `symbol(x)` | draw x in symbol font |
| `list(x, y, z)` | comma-separated list |
| `...` | ellipsis (height varies) |
| `cdots` | ellipsis (vertically centred) |
| `ldots` | ellipsis (at baseline) |
| `x %subset% y` | x is a proper subset of y |
| `x %subseteq% y` | x is a subset of y |
| `x %notsubset% y` | x is not a subset of y |
| `x %supset% y` | x is a proper superset of y |
| `x %supseteq% y` | x is a superset of y |
| `x %in% y` | x is an element of y |
| `x %notin% y` | x is not an element of y |
| `hat(x)` | x with a circumflex |
| `tilde(x)` | x with a tilde |
| `dot(x)` | x with a dot |
| `ring(x)` | x with a ring |
| `bar(xy)` | xy with bar |
| `widehat(xy)` | xy with a wide circumflex |
| `widetilde(xy)` | xy with a wide tilde |
| `x %<->% y` | x double-arrow y |
| `x %->% y` | x right-arrow y |
| `x %<-% y` | x left-arrow y |
| `x %up% y` | x up-arrow y |
| `x %down% y` | x down-arrow y |
| `x %<=>% y` | x is equivalent to y |
| `x %=>% y` | x implies y |
| `x %<=% y` | y implies x |
| `x %dblup% y` | x double-up-arrow y |
| `x %dbldown% y` | x double-down-arrow y |
| `alpha` -- `omega` | Greek symbols |
| `Alpha` -- `Omega` | uppercase Greek symbols |
| `theta1, phi1, sigma1, omega1` | cursive Greek symbols |
| `Upsilon1` | capital upsilon with hook |
| `aleph` | first letter of Hebrew alphabet |
| `infinity` | infinity symbol |
| `partialdiff` | partial differential symbol |
| `nabla` | nabla, gradient symbol |
| `32*degree` | 32 degrees |
| `60*minute` | 60 minutes of angle |
| `30*second` | 30 seconds of angle |
| `displaystyle(x)` | draw x in normal size (extra spacing) |
| `textstyle(x)` | draw x in normal size |
| `scriptstyle(x)` | draw x in small size |
| `scriptscriptstyle(x)` | draw x in very small size |
| `underline(x)` | draw x underlined |
| `x ~~ y` | put extra space between x and y |
| `x + phantom(0) + y` | leave gap for "0", but don't draw it |
| `x + over(1, phantom(0))` | leave vertical gap for "0" (don't draw) |
| `frac(x, y)` | x over y |
| `over(x, y)` | x over y |
| `atop(x, y)` | x over y (no horizontal bar) |
| `sum(x[i], i==1, n)` | sum x[i] for i equals 1 to n |
| `prod(plain(P)(X==x), x)` | product of P(X=x) for all values of x |
| `integral(f(x)*dx, a, b)` | definite integral of f(x) wrt x |
| `union(A[i], i==1, n)` | union of A[i] for i equals 1 to n |
| `intersect(A[i], i==1, n)` | intersection of A[i] |
| `lim(f(x), x %->% 0)` | limit of f(x) as x tends to 0 |
| `min(g(x), x > 0)` | minimum of g(x) for x greater than 0 |
| `inf(S)` | infimum of S |
| `sup(S)` | supremum of S |
| `x^y + z` | normal operator precedence |
| `x^(y + z)` | visible grouping of operands |
| `x^{y + z}` | invisible grouping of operands |
| `group("(",list(a, b),"]")` | specify left and right delimiters |
| `bgroup("(",atop(x,y),")")` | use scalable delimiters |
| `group(lceil, x, rceil)` | special delimiters |
| `group(lfloor, x, rfloor)` | special delimiters |
| `group(langle, list(x, y), rangle)` | special delimiters |
| |
The supported ‘scalable delimiters’ are `| ( [ {` and their right-hand versions. `"."` is equivalent to `""`: the corresponding delimiter will be omitted. Delimiter `||` is supported but has the same effect as `|`. The special delimiters `lceil`, `lfloor`, `langle` (and their right-hand versions) are not scalable.
The symbol font uses Adobe Symbol encoding so, for example, a lower case mu can be obtained either by the special symbol `mu` or by `symbol("m")`. This provides access to symbols that have no special symbol name, for example, the universal, or forall, symbol is `symbol("\042")`. To see what symbols are available in this way use `TestChars(font=5)` as given in the examples for `[points](../../graphics/html/points)`: some are only available on some devices.
Note to TeX users: TeX's \Upsilon is `Upsilon1`, TeX's \varepsilon is close to `epsilon`, and there is no equivalent of TeX's \epsilon. TeX's \varpi is close to `omega1`. `vartheta`, `varphi` and `varsigma` are allowed as synonyms for `theta1`, `phi1` and `sigma1`.
`sigma1` is also known as `stigma`, its Unicode name.
Control characters (e.g., \n) are not interpreted in character strings in plotmath, unlike normal plotting.
The fonts used are taken from the current font family, and so can be set by `[par](../../graphics/html/par)(family=)` in base graphics, and `[gpar](../../grid/html/gpar)(fontfamily=)` in package grid.
Note that `bold`, `italic` and `bolditalic` do not apply to symbols, and hence not to the Greek *symbols* such as `mu` which are displayed in the symbol font. They also do not apply to numeric constants.
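A small sketch of the font-face rules above (requires an open device; positions are arbitrary):

```r
## sketch: bold() affects Latin letters but not Greek symbols or numbers
plot.new()
text(0.5, 0.7, expression(bold(x + y)))  # x and y drawn in bold
text(0.5, 0.5, expression(bold(mu)))     # mu stays in the (plain) symbol font
text(0.5, 0.3, expression(bold(42)))     # numeric constants are not emboldened
```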
### Other symbols
On many OSes and some graphics devices many other symbols are available as part of the standard text font, and all of the symbols in the Adobe Symbol encoding are in principle available *via* changing the font face or (see ‘Details’) plotmath: see the examples section of `[points](../../graphics/html/points)` for a function to display them. (‘In principle’ because some of the glyphs are missing from some implementations of the symbol font.) Unfortunately, `<postscript>` and `<pdf>` have support for little more than European (not Greek) and CJK characters and the Adobe Symbol encoding (and in a few fonts, also Cyrillic characters).
On Unix-alikes:
In a UTF-8 locale any Unicode character can be entered, perhaps as a \uxxxx or \Uxxxxxxxx escape sequence, but the issue is whether the graphics device is able to display the character. The widest range of characters is likely to be available in the `[X11](x11)` device using cairo: see its help page for how installing additional fonts can help. This can often be used to display Greek *letters* in bold or italic.
In non-UTF-8 locales there is normally no support for symbols not in the languages for which the current encoding was intended.
On Windows:
Any Unicode character can be entered into a text string *via* a \uxxxx escape, or used by number in a call to `[points](../../graphics/html/points)`. The `<windows>` family of devices can display such characters if they are available in the font in use. This can often be used to display Greek *letters* in bold or italic.
A good way to both find out which characters are available in a font and to determine the Unicode number is to use the ‘Character Map’ accessory (usually on the ‘Start’ menu under ‘Accessories->System Tools’). You can also copy-and-paste characters from the ‘Character Map’ window to the `Rgui` console (but not to `Rterm`).
### References
Murrell, P. and Ihaka, R. (2000). An approach to providing mathematical annotation in plots. *Journal of Computational and Graphical Statistics*, **9**, 582–599. doi: [10.2307/1390947](https://doi.org/10.2307/1390947).
The symbol codes can be found in octal in the Adobe reference manuals, e.g. for Postscript <https://www.adobe.com/content/dam/acom/en/devnet/actionscript/articles/PLRM.pdf> or PDF <https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/pdf_reference_1-7.pdf> and in decimal, octal and hex at <https://www.stat.auckland.ac.nz/~paul/R/CM/AdobeSym.html>.
### See Also
`demo(plotmath)`, `[axis](../../graphics/html/axis)`, `[mtext](../../graphics/html/mtext)`, `[text](../../graphics/html/text)`, `[title](../../graphics/html/title)`, `[substitute](../../base/html/substitute)` `[quote](../../base/html/substitute)`, `[bquote](../../base/html/bquote)`
### Examples
```
require(graphics)
x <- seq(-4, 4, length.out = 101)
y <- cbind(sin(x), cos(x))
matplot(x, y, type = "l", xaxt = "n",
main = expression(paste(plain(sin) * phi, " and ",
plain(cos) * phi)),
ylab = expression("sin" * phi, "cos" * phi), # only 1st is taken
xlab = expression(paste("Phase Angle ", phi)),
col.main = "blue")
axis(1, at = c(-pi, -pi/2, 0, pi/2, pi),
labels = expression(-pi, -pi/2, 0, pi/2, pi))
## How to combine "math" and numeric variables :
plot(1:10, type="n", xlab="", ylab="", main = "plot math & numbers")
theta <- 1.23 ; mtext(bquote(hat(theta) == .(theta)), line= .25)
for(i in 2:9)
text(i, i+1, substitute(list(xi, eta) == group("(",list(x,y),")"),
list(x = i, y = i+1)))
## note that both of these use calls rather than expressions.
##
text(1, 10, "Derivatives:", adj = 0)
text(1, 9.6, expression(
" first: {f * minute}(x) " == {f * minute}(x)), adj = 0)
text(1, 9.0, expression(
" second: {f * second}(x) " == {f * second}(x)), adj = 0)
plot(1:10, 1:10)
text(4, 9, expression(hat(beta) == (X^t * X)^{-1} * X^t * y))
text(4, 8.4, "expression(hat(beta) == (X^t * X)^{-1} * X^t * y)",
cex = .8)
text(4, 7, expression(bar(x) == sum(frac(x[i], n), i==1, n)))
text(4, 6.4, "expression(bar(x) == sum(frac(x[i], n), i==1, n))",
cex = .8)
text(8, 5, expression(paste(frac(1, sigma*sqrt(2*pi)), " ",
plain(e)^{frac(-(x-mu)^2, 2*sigma^2)})),
cex = 1.2)
## some other useful symbols
plot.new(); plot.window(c(0,4), c(15,1))
text(1, 1, "universal", adj = 0); text(2.5, 1, "\\042")
text(3, 1, expression(symbol("\042")))
text(1, 2, "existential", adj = 0); text(2.5, 2, "\\044")
text(3, 2, expression(symbol("\044")))
text(1, 3, "suchthat", adj = 0); text(2.5, 3, "\\047")
text(3, 3, expression(symbol("\047")))
text(1, 4, "therefore", adj = 0); text(2.5, 4, "\\134")
text(3, 4, expression(symbol("\134")))
text(1, 5, "perpendicular", adj = 0); text(2.5, 5, "\\136")
text(3, 5, expression(symbol("\136")))
text(1, 6, "circlemultiply", adj = 0); text(2.5, 6, "\\304")
text(3, 6, expression(symbol("\304")))
text(1, 7, "circleplus", adj = 0); text(2.5, 7, "\\305")
text(3, 7, expression(symbol("\305")))
text(1, 8, "emptyset", adj = 0); text(2.5, 8, "\\306")
text(3, 8, expression(symbol("\306")))
text(1, 9, "angle", adj = 0); text(2.5, 9, "\\320")
text(3, 9, expression(symbol("\320")))
text(1, 10, "leftangle", adj = 0); text(2.5, 10, "\\341")
text(3, 10, expression(symbol("\341")))
text(1, 11, "rightangle", adj = 0); text(2.5, 11, "\\361")
text(3, 11, expression(symbol("\361")))
```
r None
`postscript` PostScript Graphics
---------------------------------
### Description
`postscript` starts the graphics device driver for producing PostScript graphics.
### Usage
```
postscript(file = if(onefile) "Rplots.ps" else "Rplot%03d.ps",
onefile, family, title, fonts, encoding, bg, fg,
width, height, horizontal, pointsize,
paper, pagecentre, print.it, command,
colormodel, useKerning, fillOddEven)
```
### Arguments
| | |
| --- | --- |
| `file` | a character string giving the file path. If it is `""`, the output is piped to the command given by the argument `command`. If it is of the form `"|cmd"`, the output is piped to the command given by `cmd`. For use with `onefile = FALSE` give a `printf` format such as `"Rplot%03d.ps"` (the default in that case). The string should not otherwise contain a `%`: if it is really necessary, use `%%` in the string for `%` in the file name. A single integer format matching the [regular expression](../../base/html/regex) `"%[#0 +=-]*[0-9.]*[diouxX]"` is allowed. Tilde expansion (see `[path.expand](../../base/html/path.expand)`) is done. An input with a marked encoding is converted to the native encoding or an error is given. |
| `onefile` | logical: if true (the default) allow multiple figures in one file. If false, generate a file name containing the page number for each page and use an EPSF header and no `DocumentMedia` comment. Defaults to `TRUE`. |
| `family` | the initial font family to be used, normally as a character string. See the section ‘Families’. Defaults to `"Helvetica"`. |
| `title` | title string to embed as the `Title` comment in the file. Defaults to `"R Graphics Output"`. |
| `fonts` | a character vector specifying additional **R** graphics font family names for font families whose declarations will be included in the PostScript file and are available for use with the device. See ‘Families’ below. Defaults to `NULL`. |
| `encoding` | the name of an encoding file. Defaults to `"default"`. On Unix-alikes, the latter is interpreted as ‘"ISOLatin1.enc"’ unless the locale is recognized as corresponding to a language using ISO 8859-{2,5,7,13,15} or KOI8-{R,U}. On Windows, it is interpreted as ‘"CP1250.enc"’ (Central European), `"CP1251.enc"` (Cyrillic), `"CP1253.enc"` (Greek) or `"CP1257.enc"` (Baltic) if one of those codepages is in use, otherwise ‘"WinAnsi.enc"’ (codepage 1252). The file is looked for in the ‘enc’ directory of package grDevices if the path does not contain a path separator. An extension `".enc"` can be omitted. |
| `bg` | the initial background color to be used. If `"transparent"` (or any other non-opaque colour), no background is painted. Defaults to `"transparent"`. |
| `fg` | the initial foreground color to be used. Defaults to `"black"`. |
| `width, height` | the width and height of the graphics region in inches. Default to `0`. If `paper != "special"` and `width` or `height` is less than `0.1` or too large to give a total margin of 0.5 inch, the graphics region is reset to the corresponding paper dimension minus 0.5. |
| `horizontal` | the orientation of the printed image, a logical. Defaults to true, that is landscape orientation on paper sizes with width less than height. |
| `pointsize` | the default point size to be used. Strictly speaking, in bp, that is 1/72 of an inch, but approximately in points. Defaults to `12`. |
| `paper` | the size of paper in the printer. The choices are `"a4"`, `"letter"` (or `"us"`), `"legal"` and `"executive"` (and these can be capitalized). Also, `"special"` can be used, when arguments `width` and `height` specify the paper size. A further choice is `"default"` (the default): If this is selected, the papersize is taken from the option `"papersize"` if that is set and to `"a4"` if it is unset or empty. |
| `pagecentre` | logical: should the device region be centred on the page? Defaults to true. |
| `print.it` | logical: should the file be printed when the device is closed? (This only applies if `file` is a real file name.) Defaults to false. |
| `command` | the command to be used for ‘printing’. Defaults to `"default"`, the value of option `"printcmd"`. The length limit is `2*PATH_MAX`, typically 8096 bytes on Unix-alikes and 520 bytes on Windows. |
| `colormodel` | a character string describing the color model: currently allowed values are `"srgb"`, `"srgb+gray"`, `"rgb"`, `"rgb-nogray"`, `"gray"` (or `"grey"`) and `"cmyk"`. Defaults to `"srgb"`. See section ‘Color models’. |
| `useKerning` | logical. Should kerning corrections be included in setting text and calculating string widths? Defaults to `TRUE`. |
| `fillOddEven` | logical controlling the polygon fill mode: see `[polygon](../../graphics/html/polygon)` for details. Default `FALSE`. |
### Details
All arguments except `file` default to values given by `<ps.options>()`. The ultimate defaults are quoted in the arguments section.
`postscript` opens the file `file` and the PostScript commands needed to plot any graphics requested are written to that file. This file can then be printed on a suitable device to obtain hard copy.
The `file` argument is interpreted as a C integer format as used by `[sprintf](../../base/html/sprintf)`, with integer argument the page number. The default gives files ‘Rplot001.ps’, ..., ‘Rplot999.ps’, ‘Rplot1000.ps’, ....
The postscript produced for a single **R** plot is EPS (*Encapsulated PostScript*) compatible, and can be included into other documents, e.g., into LaTeX, using `\includegraphics{<filename>}`. For use in this way you will probably want to use `[setEPS](ps.options)()` to set the defaults as `horizontal = FALSE, onefile = FALSE, paper = "special"`. Note that the bounding box is for the *device* region: if you find the white space around the plot region excessive, reduce the margins of the figure region via `[par](../../graphics/html/par)(mar = )`.
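A minimal sketch of the EPS workflow described above (the file name, dimensions and plot are illustrative):

```r
## sketch: a single-figure EPS file suitable for \includegraphics in LaTeX
setEPS()                   # horizontal = FALSE, onefile = FALSE, paper = "special"
postscript("fig1.eps", width = 4, height = 3)
par(mar = c(4, 4, 1, 1))   # trim margins to reduce surrounding white space
plot(1:10, type = "b")
dev.off()
```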
Most of the PostScript prologue used is taken from the **R** character vector `.ps.prolog`. This is marked in the output, and can be changed by changing that vector. (This is only advisable for PostScript experts: the standard version is in `namespace:grDevices`.)
A PostScript device has a default family, which can be set by the user via `family`. If other font families are to be used when drawing to the PostScript device, these must be declared when the device is created via `fonts`; the font family names for this argument are **R** graphics font family names (see the documentation for `[postscriptFonts](postscriptfonts)`).
Line widths as controlled by `par(lwd = )` are in multiples of 1/96 inch: multiples less than 1 are allowed. `pch = "."` with `cex = 1` corresponds to a square of side 1/72 inch, which is also the ‘pixel’ size assumed for graphics parameters such as `"cra"`.
When the background colour is fully transparent (as is the initial default value), the PostScript produced does not paint the background. Almost all PostScript viewers will use a white canvas so the visual effect is if the background were white. This will not be the case when printing onto coloured paper, though.
### Families
Font families are collections of fonts covering the five font faces, (conventionally plain, bold, italic, bold-italic and symbol) selected by the graphics parameter `[par](../../graphics/html/par)(font = )` or the grid parameter `[gpar](../../grid/html/gpar)(fontface = )`. Font families can be specified either as an an initial/default font family for the device via the `family` argument or after the device is opened by the graphics parameter `[par](../../graphics/html/par)(family = )` or the grid parameter `[gpar](../../grid/html/gpar)(fontfamily = )`. Families which will be used in addition to the initial family must be specified in the `fonts` argument when the device is opened.
Font families are declared via a call to `[postscriptFonts](postscriptfonts)`.
The argument `family` specifies the initial/default font family to be used. In normal use it is one of `"AvantGarde"`, `"Bookman"`, `"Courier"`, `"Helvetica"`, `"Helvetica-Narrow"`, `"NewCenturySchoolbook"`, `"Palatino"` or `"Times"`, and refers to the standard Adobe PostScript fonts families of those names which are included (or cloned) in all common PostScript devices.
Many PostScript emulators (including those based on `ghostscript`) use the URW equivalents of these fonts, which are `"URWGothic"`, `"URWBookman"`, `"NimbusMon"`, `"NimbusSan"`, `"NimbusSanCond"`, `"CenturySch"`, `"URWPalladio"` and `"NimbusRom"` respectively. If your PostScript device is using URW fonts, you will obtain access to more characters and more appropriate metrics by using these names. To make these easier to remember, `"URWHelvetica" == "NimbusSan"` and `"URWTimes" == "NimbusRom"` are also supported.
Another type of family makes use of CID-keyed fonts for East Asian languages – see `[postscriptFonts](postscriptfonts)`.
The `family` argument is normally a character string naming a font family, but family objects generated by `[Type1Font](type1font)` and `[CIDFont](type1font)` are also accepted. For compatibility with earlier versions of **R**, the initial family can also be specified as a vector of four or five afm files.
Note that **R** does not embed the font(s) used in the PostScript output: see `[embedFonts](embedfonts)` for a utility to help do so.
Viewers and embedding applications frequently substitute fonts for those specified in the family, and the substitute will often have slightly different font metrics. `useKerning = TRUE` spaces the letters in the string using kerning corrections for the intended family: this may look uglier than `useKerning = FALSE`.
### Encodings
Encodings describe which glyphs are used to display the character codes (in the range 0–255). Most commonly **R** uses ISOLatin1 encoding, and the examples for `[text](../../graphics/html/text)` are in that encoding. However, the encoding used on machines running **R** may well be different, and by using the `encoding` argument the glyphs can be matched to the encoding in use. This suffices for European and Cyrillic languages, but not for East Asian languages. For the latter, composite CID fonts are used. These fonts can be useful for other languages too: for example, they may contain Greek glyphs. (The rest of this section applies only when CID fonts are not used.)
None of this will matter if only ASCII characters (codes 32–126) are used as all the encodings (except `"TeXtext"`) agree over that range. Some encodings are supersets of ISOLatin1, too. However, if accented and special characters do not come out as you expect, you may need to change the encoding. Some other encodings are supplied with **R**: `"WinAnsi.enc"` and `"MacRoman.enc"` correspond to the encodings normally used on Windows and Classic Mac OS (at least by Adobe), and `"PDFDoc.enc"` is the first 256 characters of the Unicode encoding, the standard for PDF. There are also encodings `"ISOLatin2.enc"`, `"CP1250.enc"`, `"ISOLatin7.enc"` (ISO 8859-13), `"CP1257.enc"`, and `"ISOLatin9.enc"` (ISO 8859-15), `"Cyrillic.enc"` (ISO 8859-5), `"KOI8-R.enc"`, `"KOI8-U.enc"`, `"CP1251.enc"`, `"Greek.enc"` (ISO 8859-7) and `"CP1253.enc"`. Note that many glyphs in these encodings are not in the fonts corresponding to the standard families. (The Adobe ones for all but Courier, Helvetica and Times cover little more than Latin-1, whereas the URW ones also cover Latin-2, Latin-7, Latin-9 and Cyrillic but no Greek. The Adobe exceptions cover the Latin character sets, but not the Euro.)
If you specify the encoding, it is your responsibility to ensure that the PostScript font contains the glyphs used. One issue here is the Euro symbol which is in the WinAnsi and MacRoman encodings but may well not be in the PostScript fonts. (It is in the URW variants; it is not in the supplied Adobe Font Metric files.)
There is an exception. Character 45 (`"-"`) is always set as minus (its value in Adobe ISOLatin1) even though it is hyphen in the other encodings. Hyphen is available as character 173 (octal 0255) in all the Latin encodings, Cyrillic and Greek. (This can be entered as `"\uad"` in a UTF-8 locale.) There are some discrepancies in accounts of glyphs 39 and 96: the supplied encodings (except CP1250 and CP1251) treat these as ‘quoteright’ and ‘quoteleft’ (rather than ‘quotesingle’/‘acute’ and ‘grave’ respectively), as they are in the Adobe documentation.
### TeX fonts
TeX has traditionally made use of fonts such as Computer Modern which are encoded rather differently, in a 7-bit encoding. This encoding can be specified by `encoding = "TeXtext.enc"`, taking care that the ASCII characters `< > \ _ { }` are not available in those fonts.
There are supplied families `"ComputerModern"` and `"ComputerModernItalic"` which use this encoding, and which are only supported for `postscript` (and not `pdf`). They are intended to be used with the Type 1 versions of the TeX CM fonts. It will normally be possible to include such output in TeX or LaTeX provided it is processed with `dvips -Ppfb -j0` or the equivalent on your system. (`-j0` turns off font subsetting.) When `family = "ComputerModern"` is used, the italic/bold-italic fonts used are slanted fonts (`cmsl10` and `cmbxsl10`). To use text italic fonts instead, set `family = "ComputerModernItalic"`.
These families use the TeX math italic and symbol fonts for a comprehensive but incomplete coverage of the glyphs covered by the Adobe symbol font in other families. This is achieved by special-casing the postscript code generated from the supplied ‘CM\_symbol\_10.afm’.
### Color models
The default color model (`"srgb"`) is sRGB.
The alternative `"srgb+gray"` uses sRGB for colors, but with pure gray colors (including black and white) expressed as greyscales (which results in smaller files and can be advantageous with some printer drivers). However, its files can be rendered much more slowly by some viewers, and there can be a noticeable discontinuity in color gradients involving gray or white.
Other possibilities are `"gray"` (or `"grey"`), which uses only greyscales (and converts other colours to a luminance), and `"cmyk"`. The simplest possible conversion from sRGB to CMYK is used (<https://en.wikipedia.org/wiki/CMYK_color_model#Mapping_RGB_to_CMYK>), and raster images are output in RGB.
Color models provided for backwards compatibility are `"rgb"` (which is RGB+gray) and `"rgb-nogray"`, which both use uncalibrated RGB (as used in **R** prior to 2.13.0). These result in slightly smaller files which may render faster, but do rely on the viewer being properly calibrated.
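As an illustrative sketch (the file names are temporary), the `colormodel` argument of `postscript` selects among these models:

```r
## Sketch: the same plot written under two color models; with
## colormodel = "gray" the red points are converted to a luminance
f_srgb <- tempfile(fileext = ".ps")
f_gray <- tempfile(fileext = ".ps")
postscript(f_srgb, colormodel = "srgb")
plot(1:10, col = "red", pch = 16)
dev.off()
postscript(f_gray, colormodel = "gray")
plot(1:10, col = "red", pch = 16)
dev.off()
```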
### Printing
A postscript plot can be printed via `postscript` in two ways.
1. Setting `print.it = TRUE` causes the command given in argument `command` to be called with argument `"file"` when the device is closed. Note that the plot file is not deleted unless `command` arranges to delete it.
2. `file = ""` or `file = "|cmd"` can be used to print using a pipe. Failure to open the command will probably be reported to the terminal but not to **R**, in which case close the device by `dev.off` immediately.
On Windows the default `"printcmd"` is empty and will give an error if `print.it = TRUE` is used. Suitable commands to spool a PostScript file to a printer can be found in the ‘RedMon’ suite available from <http://pages.cs.wisc.edu/~ghost/index.html>. The command will be run in a minimized window. GSView 4.x provides ‘gsprint.exe’, which may be more convenient (it requires Ghostscript version 6.50 or later).
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is 7 inches square.
* Font sizes are in big points.
* The default font family is Helvetica.
* Line widths are as a multiple of 1/96 inch, with a minimum of 0.01 enforced.
* Circles of any radius are allowed.
* Colours are by default specified as sRGB.
At very small line widths, the line type may be forced to solid.
Raster images are currently limited to opaque colours.
### Note
If you see problems with postscript output, do remember that the problem is much more likely to be in your viewer than in **R**. Try another viewer if possible. Symptoms for which the viewer has been at fault are apparent grids on image plots (turn off graphics anti-aliasing in your viewer if you can) and missing or incorrect glyphs in text (viewers silently doing font substitution).
Unfortunately the default viewers on most Linux and macOS systems have these problems, and no obvious way to turn off graphics anti-aliasing.
### Author(s)
Support for Computer Modern fonts is based on a contribution by Brian D'Urso [[email protected]](mailto:[email protected]).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[postscriptFonts](postscriptfonts)`, `[Devices](devices)`, and `<check.options>` which is called from both `<ps.options>` and `postscript`.
`[cairo\_ps](cairo)` for another device that can produce PostScript.
More details of font families and encodings and especially handling text in a non-Latin-1 encoding and embedding fonts can be found in
Paul Murrell and Brian Ripley (2006). “Non-standard fonts in PostScript and PDF graphics.” *R News*, **6**(2), 41–47. <https://www.r-project.org/doc/Rnews/Rnews_2006-2.pdf>.
### Examples
```
require(graphics)
## Not run:
# open the file "foo.ps" for graphics output
postscript("foo.ps")
# produce the desired graph(s)
dev.off() # turn off the postscript device
## On Unix-alikes only:
postscript("|lp -dlw")
# produce the desired graph(s)
dev.off() # plot will appear on printer
## On Windows:
options(printcmd = 'redpr -P"\\\\printhost\\lw"')
postscript(file = tempfile("Rps."), print.it = TRUE)
# produce the desired graph(s)
dev.off() # send plot file to the printer
## alternative using GSView 4.x :
options(printcmd = '/GhostGum/gsview/gsprint -query')
# for URW PostScript devices
postscript("foo.ps", family = "NimbusSan")
## for inclusion in Computer Modern TeX documents, perhaps
postscript("cm_test.eps", width = 4.0, height = 3.0,
horizontal = FALSE, onefile = FALSE, paper = "special",
family = "ComputerModern", encoding = "TeXtext.enc")
## The resultant postscript file can be used by dvips -Ppfb -j0.
## To test out encodings, you can use
TestChars <- function(encoding = "ISOLatin1", family = "URWHelvetica")
{
postscript(encoding = encoding, family = family)
par(pty = "s")
plot(c(-1,16), c(-1,16), type = "n", xlab = "", ylab = "",
xaxs = "i", yaxs = "i")
title(paste("Centred chars in encoding", encoding))
grid(17, 17, lty = 1)
for(i in c(32:255)) {
x <- i %% 16
y <- i %/% 16
points(x, y, pch = i)
}
dev.off()
}
## there will be many warnings. We use URW to get a complete enough
## set of font metrics.
TestChars()
TestChars("ISOLatin2")
TestChars("WinAnsi")
## End(Not run)
```
`dev.size` Find Size of Device Surface
---------------------------------------
### Description
Find the dimensions of the device surface of the current device.
### Usage
```
dev.size(units = c("in", "cm", "px"))
```
### Arguments
| | |
| --- | --- |
| `units` | the units in which to return the value – inches, cm, or pixels (device units). |
### Value
A two-element numeric vector giving width and height of the current device; a new device is opened if there is none, similarly to `[dev.new](dev)()`.
### See Also
The size information in inches can be obtained by `[par](../../graphics/html/par)("din")`, but this provides a way to access it independent of the graphics sub-system in use. Note that `par("din")` is only updated when a new plot is started, whereas `dev.size` tracks the size as an on-screen device is resized.
### Examples
```
dev.size("cm")
```
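A further sketch, using the off-screen `pdf` device (which defaults to a 7 x 7 inch surface), shows the relationship between the unit systems:

```r
## Sketch: dev.size() also works on off-screen devices;
## pdf() defaults to a 7 x 7 inch surface
pdf(tempfile(fileext = ".pdf"))
sz_in <- dev.size("in")  # 7 7
sz_cm <- dev.size("cm")  # 17.78 17.78
dev.off()
```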
`Japanese` Japanese characters in R
------------------------------------
### Description
The implementation of Hershey vector fonts provides a large number of Japanese characters (Hiragana, Katakana, and Kanji).
### Details
Without keyboard support for typing Japanese characters, the only way to produce these characters is to use special escape sequences: see `[Hershey](hershey)`.
For example, the Hiragana character for the sound "ka" is produced by \\#J242b and the Katakana character for this sound is produced by \\#J252b. The Kanji ideograph for "one" is produced by \\#J306c or \\#N0001.
The output from `[demo](../../utils/html/demo)(Japanese)` shows tables of the escape sequences for the available Japanese characters.
### References
<https://www.gnu.org/software/plotutils/plotutils.html>
### See Also
`[demo](../../utils/html/demo)(Japanese)`, `[Hershey](hershey)`, `[text](../../graphics/html/text)`
### Examples
```
require(graphics)
plot(1:9, type = "n", axes = FALSE, frame.plot = TRUE, ylab = "",
main = "example(Japanese)", xlab = "using Hershey fonts")
par(cex = 3)
Vf <- c("serif", "plain")
text(4, 2, "\\#J244b\\#J245b\\#J2473", vfont = Vf)
text(4, 4, "\\#J2538\\#J2563\\#J2551\\#J2573", vfont = Vf)
text(4, 6, "\\#J467c\\#J4b5c", vfont = Vf)
text(4, 8, "Japan", vfont = Vf)
par(cex = 1)
text(8, 2, "Hiragana")
text(8, 4, "Katakana")
text(8, 6, "Kanji")
text(8, 8, "English")
```
`dev.flush` Hold or Flush Output on an On-Screen Graphics Device.
------------------------------------------------------------------
### Description
This gives a way to hold/flush output on certain on-screen devices, and is ignored by other devices.
### Usage
```
dev.hold(level = 1L)
dev.flush(level = 1L)
```
### Arguments
| | |
| --- | --- |
| `level` | Integer >= 0. The amount by which to change the hold level. Negative values will be silently replaced by zero. |
### Details
Devices which implement this maintain a stack of hold levels: calling `dev.hold` increases the level and `dev.flush` decreases it. Calling `dev.hold` when the hold level is zero increases the hold level and inhibits graphics display. When a call to `dev.flush` clears all pending holds, the screen display is refreshed and normal operation resumes.
This is implemented for the cairo-based `X11` types with buffering. When the hold level is positive the ‘watch’ cursor is set on the device's window.
It is available on the `quartz` device on macOS.
This is implemented for the `windows` device with buffering selected (the default). When the hold level is positive the ‘busy’ cursor is set on the device's window.
### Value
The current level after the change, invisibly. This is `0` on devices where hold levels are not supported.
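For example (a sketch using the off-screen `pdf` device, which does not support hold levels):

```r
## Sketch: on a device without hold support (e.g. pdf) both
## functions return 0 and the calls are effectively ignored
pdf(tempfile(fileext = ".pdf"))
h <- dev.hold()   # 0: holds are not supported here
f <- dev.flush()  # 0
dev.off()
```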
`grDevices-package` The R Graphics Devices and Support for Colours and Fonts
-----------------------------------------------------------------------------
### Description
Graphics devices and support for base and grid graphics
### Details
This package contains functions which support both [base](../../graphics/html/graphics-package) and [grid](../../grid/html/grid-package) graphics.
For a complete list of functions, use `library(help = "grDevices")`.
### Author(s)
R Core Team and contributors worldwide
Maintainer: R Core Team [[email protected]](mailto:[email protected])
`rgb` RGB Color Specification
------------------------------
### Description
This function creates colors corresponding to the given intensities (between 0 and `max`) of the red, green and blue primaries. The colour specification refers to the standard sRGB colorspace (IEC standard 61966).
An alpha transparency value can also be specified (as an opacity, so `0` means fully transparent and `max` means opaque). If `alpha` is not specified, an opaque colour is generated.
The `names` argument may be used to provide names for the colors.
The values returned by these functions can be used with a `col=` specification in graphics functions or in `[par](../../graphics/html/par)`.
### Usage
```
rgb(red, green, blue, alpha, names = NULL, maxColorValue = 1)
```
### Arguments
| | |
| --- | --- |
| `red, green, blue, alpha` | numeric vectors with values in *[0, M]* where *M* is `maxColorValue`. When this is `255`, the `red`, `green`, `blue`, and `alpha` values are coerced to integers in `0:255` and the result is computed most efficiently. |
| `names` | character vector. The names for the resulting vector. |
| `maxColorValue` | number giving the maximum of the color values range, see above. |
### Details
The colors may be specified by passing a matrix or data frame as argument `red`, and leaving `blue` and `green` missing. In this case the first three columns of `red` are taken to be the `red`, `green` and `blue` values.
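A brief sketch of the matrix form:

```r
## Sketch: the first three columns of a matrix passed as 'red'
## are taken as the red, green and blue channels
m <- cbind(red = c(255, 0), green = c(0, 128), blue = c(0, 255))
rgb(m, maxColorValue = 255)  # "#FF0000" "#0080FF"
## equivalent to spelling the channels out:
rgb(c(255, 0), c(0, 128), c(0, 255), maxColorValue = 255)
```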
Semi-transparent colors (`0 < alpha < 1`) are supported only on some devices: at the time of writing on the `<pdf>`, `windows`, `quartz` and `X11(type = "cairo")` devices and associated bitmap devices (`jpeg`, `png`, `bmp`, `tiff` and `bitmap`). They are supported by several third-party devices such as those in packages [Cairo](https://CRAN.R-project.org/package=Cairo), [cairoDevice](https://CRAN.R-project.org/package=cairoDevice) and [JavaGD](https://CRAN.R-project.org/package=JavaGD). Only some of these devices support semi-transparent backgrounds.
Most other graphics devices plot semi-transparent colors as fully transparent, usually with a warning when first encountered.
`NA` values are not allowed for any of `red`, `blue`, `green` or `alpha`.
### Value
A character vector with elements of 7 or 9 characters, `"#"` followed by the red, green, blue and optionally alpha values in hexadecimal (after rescaling to `0 ... 255`). The optional alpha values range from `0` (fully transparent) to `255` (opaque).
**R** does **not** use ‘premultiplied alpha’.
### See Also
`<col2rgb>` for translating **R** colors to RGB vectors; `[rainbow](palettes)`, `<hsv>`, `<hcl>`, `<gray>`.
### Examples
```
rgb(0, 1, 0)
rgb((0:15)/15, green = 0, blue = 0, names = paste("red", 0:15, sep = "."))
rgb(0, 0:12, 0, maxColorValue = 255) # integer input
ramp <- colorRamp(c("red", "white"))
rgb( ramp(seq(0, 1, length.out = 5)), maxColorValue = 255)
```
`png` BMP, JPEG, PNG and TIFF graphics devices
-----------------------------------------------
### Description
Graphics devices for BMP, JPEG, PNG and TIFF format bitmap files.
### Usage
```
bmp(filename = "Rplot%03d.bmp",
width = 480, height = 480, units = "px", pointsize = 12,
bg = "white", res = NA, ...,
type = c("cairo", "Xlib", "quartz"), antialias)
jpeg(filename = "Rplot%03d.jpeg",
width = 480, height = 480, units = "px", pointsize = 12,
quality = 75,
bg = "white", res = NA, ...,
type = c("cairo", "Xlib", "quartz"), antialias)
png(filename = "Rplot%03d.png",
width = 480, height = 480, units = "px", pointsize = 12,
bg = "white", res = NA, ...,
type = c("cairo", "cairo-png", "Xlib", "quartz"), antialias)
tiff(filename = "Rplot%03d.tiff",
width = 480, height = 480, units = "px", pointsize = 12,
compression = c("none", "rle", "lzw", "jpeg", "zip", "lzw+p", "zip+p"),
bg = "white", res = NA, ...,
type = c("cairo", "Xlib", "quartz"), antialias)
```
### Arguments
| | |
| --- | --- |
| `filename` | the output file path. The page number is substituted if a C integer format is included in the character string, as in the default. (The result must be less than `PATH_MAX` characters long, and may be truncated if not. See `<postscript>` for further details.) Tilde expansion is performed where supported by the platform. An input with a marked encoding is converted to the native encoding or an error is given. |
| `width` | the width of the device. |
| `height` | the height of the device. |
| `units` | The units in which `height` and `width` are given. Can be `px` (pixels, the default), `in` (inches), `cm` or `mm`. |
| `pointsize` | the default pointsize of plotted text, interpreted as big points (1/72 inch) at `res` ppi. |
| `bg` | the initial background colour: can be overridden by setting `[par](../../graphics/html/par)("bg")`. |
| `quality` | the ‘quality’ of the JPEG image, as a percentage. Smaller values will give more compression but also more degradation of the image. |
| `compression` | the type of compression to be used. Ignored for `type = "quartz"`. |
| `res` | The nominal resolution in ppi which will be recorded in the bitmap file, if a positive integer. Also used for `units` other than the default, and to convert points to pixels. |
| `...` | for `type = "Xlib"` only, additional arguments to the underlying `[X11](x11)` device such as `fonts` or `family`. For types `"cairo"` and `"quartz"`, the `family` argument can be supplied. See the ‘Cairo fonts’ section in the help for `[X11](x11)`. For type `"cairo"`, the `symbolfamily` argument can be supplied. See `[X11.options](x11)`. |
| `type` | character string, one of `"Xlib"` or `"quartz"` (some macOS builds) or `"cairo"`. The latter will only be available if the system was compiled with support for cairo – otherwise `"Xlib"` will be used. The default is set by `[getOption](../../base/html/options)("bitmapType")` – the ‘out of the box’ default is `"quartz"` or `"cairo"` where available, otherwise `"Xlib"`. |
| `antialias` | for `type = "cairo"`, giving the type of anti-aliasing (if any) to be used for fonts and lines (but not fills). See `[X11](x11)`. The default is set by `[X11.options](x11)`. Also for `type = "quartz"`, where antialiasing is used unless `antialias = "none"`. |
### Details
Plots in PNG and JPEG format can easily be converted to many other bitmap formats, and both can be displayed in modern web browsers. The PNG format is lossless and is best for line diagrams and blocks of colour. The JPEG format is lossy, but may be useful for image plots, for example. BMP is a standard format on Windows. TIFF is a meta-format: the default format written by `tiff` is lossless and stores RGB (and alpha where appropriate) values uncompressed—such files are widely accepted, which is their main virtue over PNG.
`png` supports transparent backgrounds: use `bg = "transparent"`. (Not all PNG viewers render files with transparency correctly.) When transparency is in use in the `type = "Xlib"` variant a very light grey is used as the background and so appears as transparent if used in the plot. This allows opaque white to be used, as in the example. The `type = "cairo"`, `type = "cairo-png"` and `type = "quartz"` variants allow semi-transparent colours, including on a transparent or semi-transparent background.
`tiff` with types `"cairo"` and `"quartz"` supports semi-transparent colours, including on a transparent or semi-transparent background. Compression type `"zip"` is ‘deflate (Adobe-style)’. Compression types `"lzw+p"` and `"zip+p"` use horizontal differencing (‘differencing predictor’, section 14 of the TIFF specification) in combination with the compression method, which is effective for continuous-tone images, especially colour ones.
**R** can be compiled without support for some or all of the types for each of these devices: this will be reported if you attempt to use them on a system where they are not supported. For `type = "Xlib"` they may not be usable unless the X11 display is available to the owner of the **R** process. `type = "cairo"` requires cairo 1.2 or later. `type = "quartz"` uses the `<quartz>` device and so is only available where that is (on some macOS builds: see `[capabilities](../../base/html/capabilities)("aqua")`).
By default no resolution is recorded in the file, except for BMP. Viewers will often assume a nominal resolution of 72 ppi when none is recorded. As resolutions in PNG files are recorded in pixels/metre, the reported ppi value will be changed slightly.
For graphics parameters that make use of dimensions in inches (including font sizes in points) the resolution used is `res` (or 72 ppi if unset).
`png` will normally use a palette if there are fewer than 256 colours on the page, and record a 24-bit RGB file otherwise (or a 32-bit ARGB file if `type = "cairo"` and non-opaque colours are used). However, `type = "cairo-png"` uses cairographics' PNG backend which will never use a palette and normally creates a larger 32-bit ARGB file; this may work better for specialist uses with semi-transparent colours.
Quartz-produced PNG and TIFF plots with a transparent background are recorded with a dark grey matte which will show up in some viewers, including `Preview` on macOS.
Unknown resolutions in BMP files are recorded as 72 ppi.
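As a sketch (guarded, since bitmap support and a usable graphics backend are optional), a file sized in inches at a recorded resolution can be written like this:

```r
## Sketch: a PNG sized in inches with a 300 ppi resolution recorded;
## wrapped in tryCatch since a suitable backend may be unavailable
ok <- FALSE
if (capabilities("png")) {
  f <- tempfile(fileext = ".png")
  ok <- tryCatch({
    png(f, width = 4, height = 3, units = "in", res = 300)
    plot(1:10)  # fonts are sized in points, interpreted at 300 ppi
    dev.off()
    TRUE
  }, error = function(e) FALSE)  # e.g. no X11 display and no cairo
}
```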
### Value
A plot device is opened: nothing is returned to the **R** interpreter.
### Warnings
Note that by default the `width` and `height` values are in pixels not inches. A warning will be issued if both are less than 20.
If you plot more than one page on one of these devices and do not include something like `%d` for the sequence number in `filename`, the file will contain only the last page plotted.
### Differences between OSes
These functions are interfaces to three or more different underlying devices.
* On Windows, devices based on plotting to a hidden screen using Windows' GDI calls.
* On platforms with support for X11, plotting to a hidden X11 display.
* On macOS when working at the console and when **R** is compiled with suitable support, using Apple's Quartz plotting system.
* Where support has been compiled in for cairographics, plotting on cairo surfaces. This may use the native platform support for fonts, or it may use `fontconfig` to support a wide range of font formats.
Inevitably there will be differences between the options supported and output produced. Perhaps the most important are support for antialiased fonts and semi-transparent colours: the best results are likely to be obtained with the cairo- or Quartz-based devices where available.
The default extensions are ‘.jpg’ and ‘.tif’ on Windows, and ‘.jpeg’ and ‘.tiff’ elsewhere.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is in pixels.
* Font sizes are in big points interpreted at `res` ppi.
* The default font family is Helvetica.
* Line widths in 1/96 inch (interpreted at `res` ppi), minimum one pixel for `type = "Xlib"`, 0.01 for `type = "cairo"`.
* For `type = "Xlib"` circle radii are in pixels with minimum one.
* Colours are interpreted by the viewing application.
For `type = "quartz"` see the help for `<quartz>`.
### Note
For `type = "Xlib"` these devices are based on the `[X11](x11)` device. The colour model used will be that set up by `X11.options` at the time the first Xlib-based device was opened (or the first after all such devices have been closed).
### Author(s)
Guido Masarotto and Brian Ripley
### References
The PNG specification, <https://www.w3.org/TR/PNG/>.
The TIFF specification, <https://www.iso.org/standard/34342.html>. See also <https://en.wikipedia.org/wiki/TIFF>.
### See Also
`[Devices](devices)`, `[dev.print](dev2)`
`[capabilities](../../base/html/capabilities)` to see if these devices are supported by this build of **R**, and if `type = "cairo"` is supported.
`[bitmap](dev2bitmap)` provides an alternative way to generate plots in many bitmap formats that does not depend on accessing the X11 display but does depend on having GhostScript installed.
### Examples
```
## these examples will work only if the devices are available
## and cairo or an X11 display or a macOS display is available.
## copy current plot to a (large) PNG file
## Not run: dev.print(png, file = "myplot.png", width = 1024, height = 768)
png(file = "myplot.png", bg = "transparent")
plot(1:10)
rect(1, 5, 3, 7, col = "white")
dev.off()
## will make myplot1.jpeg and myplot2.jpeg
jpeg(file = "myplot%d.jpeg")
example(rect)
dev.off()
```
`pdf.options` Auxiliary Function to Set/View Defaults for Arguments of pdf
---------------------------------------------------------------------------
### Description
The auxiliary function `pdf.options` can be used to set or view (if called without arguments) the default values for some of the arguments to `<pdf>`.
`pdf.options` needs to be called before calling `pdf`, and the default values it sets can be overridden by supplying arguments to `pdf`.
### Usage
```
pdf.options(..., reset = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | arguments `width`, `height`, `onefile`, `family`, `title`, `fonts`, `paper`, `encoding`, `pointsize`, `bg`, `fg`, `pagecentre`, `useDingbats`, `colormodel`, `fillOddEven` and `compress` can be supplied. |
| `reset` | logical: should the defaults be reset to their ‘factory-fresh’ values? |
### Details
If both `reset = TRUE` and `...` are supplied the defaults are first reset to the ‘factory-fresh’ values and then the new values are applied.
### Value
A named list of all the defaults. If any arguments are supplied the return values are the old values and the result has the visibility flag turned off.
### See Also
`<pdf>`, `<ps.options>`.
### Examples
```
pdf.options(bg = "pink")
utils::str(pdf.options())
pdf.options(reset = TRUE) # back to factory-fresh
```
`xyTable` Multiplicities of (x,y) Points, e.g., for a Sunflower Plot
---------------------------------------------------------------------
### Description
Given (x,y) points, determine their multiplicity – checking for equality only up to some (crude kind of) noise. Note that this is a special kind of 2D binning.
### Usage
```
xyTable(x, y = NULL, digits)
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric vectors of the same length; alternatively other (x, y) argument combinations as allowed by `<xy.coords>(x, y)`. |
| `digits` | integer specifying the significant digits to be used for determining equality of coordinates. These are compared after rounding them via `[signif](../../base/html/round)(*, digits)`. |
### Value
A list with three components of same length,
| | |
| --- | --- |
| `x` | x coordinates, rounded and sorted. |
| `y` | y coordinates, rounded (and sorted within `x`). |
| `number` | multiplicities (positive integers); i.e., `number[i]` is the multiplicity of `(x[i], y[i])`. |
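For instance, duplicated points collapse to a single entry with a multiplicity:

```r
## Sketch: (1, 5) occurs twice, (2, 7) once
tab <- xyTable(c(1, 1, 2), c(5, 5, 7), digits = 6)
tab$number  # 2 1
```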
### See Also
`[sunflowerplot](../../graphics/html/sunflowerplot)` which typically uses `xyTable()`; `[signif](../../base/html/round)`.
### Examples
```
xyTable(iris[, 3:4], digits = 6)
## Discretized uncorrelated Gaussian:
require(stats)
xy <- data.frame(x = round(sort(rnorm(100))), y = rnorm(100))
xyTable(xy, digits = 1)
```
`dev.interactive` Is the Current Graphics Device Interactive?
--------------------------------------------------------------
### Description
Test if the current graphics device (or that which would be opened) is interactive.
### Usage
```
dev.interactive(orNone = FALSE)
deviceIsInteractive(name = NULL)
```
### Arguments
| | |
| --- | --- |
| `orNone` | logical; if `TRUE`, the function also returns `TRUE` when `[.Device](../../base/html/dev) == "null device"` and `[getOption](../../base/html/options)("device")` is among the known interactive devices. |
| `name` | one or more device names as a character vector, or `NULL` to give the existing list. |
### Details
The `X11` (Unix), `windows` (Windows) and `quartz` (macOS, on-screen types only) are regarded as interactive, together with `JavaGD` (from the package of the same name) and `CairoWin` and `CairoX11` (from package [Cairo](https://CRAN.R-project.org/package=Cairo)). Packages can add their devices to the list by calling `deviceIsInteractive`.
### Value
`dev.interactive()` returns a logical, `TRUE` if and only if an interactive (screen) device is in use.
`deviceIsInteractive` returns the updated list of known interactive devices, invisibly unless `name = NULL`.
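For example (a sketch using the off-screen `pdf` device):

```r
## Sketch: a file device such as pdf is not interactive
pdf(tempfile(fileext = ".pdf"))
it <- dev.interactive()  # FALSE
dev.off()
```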
### See Also
`[Devices](devices)` for the available devices on your platform.
### Examples
```
dev.interactive()
print(deviceIsInteractive(NULL))
```
`convertColor` Convert between Colour Spaces
---------------------------------------------
### Description
Convert colours between their representations in standard colour spaces.
### Usage
```
convertColor(color, from, to, from.ref.white, to.ref.white,
scale.in = 1, scale.out = 1, clip = TRUE)
```
### Arguments
| | |
| --- | --- |
| `color` | A matrix whose rows specify colors. The function will also accept a data frame, but will silently convert to a matrix internally. |
| `from, to` | Input and output color spaces. See ‘Details’ below. |
| `from.ref.white, to.ref.white` | Reference whites or `NULL` if these are built in to the definition, as for RGB spaces. `D65` is the default, see ‘Details’ for others. |
| `scale.in, scale.out` | Input is divided by `scale.in`, output is multiplied by `scale.out`. Use `NULL` to suppress scaling when input or output is not numeric. |
| `clip` | If `TRUE`, truncate RGB output to [0, 1]; if `FALSE`, return out-of-range RGB values; if `NA`, set out-of-range colors to `NaN`. |
### Details
Color spaces are specified by objects of class `colorConverter`, created by `[colorConverter](make.rgb)` or `<make.rgb>`. Built-in color spaces may be referenced by strings: `"XYZ"`, `"sRGB"`, `"Apple RGB"`, `"CIE RGB"`, `"Lab"`, `"Luv"`. The converters for these colour spaces are in the object `colorspaces`.
The `"sRGB"` color space is that used by standard PC monitors. `"Apple RGB"` is used by Apple monitors. `"Lab"` and `"Luv"` are approximately perceptually uniform spaces standardized by the Commission Internationale d'Eclairage. `XYZ` is a 1931 CIE standard capable of representing all visible colors (and then some), but not in a perceptually uniform way.
The `Lab` and `Luv` spaces describe colors of objects, and so require the specification of a reference ‘white light’ color. Illuminant `D65` is a standard indirect daylight, Illuminant `D50` is close to direct sunlight, and Illuminant `A` is the light from a standard incandescent bulb. Other standard CIE illuminants supported are `B`, `C`, `E` and `D55`. RGB colour spaces are defined relative to a particular reference white, and can be only approximately translated to other reference whites. The von Kries chromatic adaptation algorithm is used for this. Prior to R 3.6, color conversions involving color spaces created with `<make.rgb>` were carried out assuming a `D65` illuminant, irrespective of the actual illuminant used in the creation of the color space. This affected the built-in `"CIE RGB"` color space.
The RGB color spaces are specific to a particular class of display. An RGB space cannot represent all colors, and the `clip` option controls what is done to out-of-range colors.
For the named color spaces, `color` must be a matrix of coordinate values in the `from` color space; in particular, the colors must be opaque.
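As an illustrative sketch of the `clip` settings, a highly saturated Lab color (the value below is chosen so that it likely falls outside the sRGB gamut) can be converted three ways:

```r
## A saturated Lab color, likely outside the sRGB gamut:
lab <- matrix(c(50, 100, -100), nrow = 1)
convertColor(lab, from = "Lab", to = "sRGB", clip = TRUE)  # values truncated to [0,1]
convertColor(lab, from = "Lab", to = "sRGB", clip = FALSE) # raw out-of-range values
convertColor(lab, from = "Lab", to = "sRGB", clip = NA)    # out-of-range set to NaN
```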
### Value
A 3-column matrix whose rows specify the colors.
### References
For all the conversion equations <http://www.brucelindbloom.com/>.
For the white points <https://web.archive.org/web/20190613001950/http://efg2.com/Lab/Graphics/Colors/Chromaticity.htm>.
### See Also
`<col2rgb>` and `<colors>` for ways to specify colors in graphics.
`<make.rgb>` for specifying other colour spaces.
### Examples
```
## The displayable colors from four planes of Lab space
ab <- expand.grid(a = (-10:15)*10,
b = (-15:10)*10)
require(graphics); require(stats) # for na.omit
par(mfrow = c(2, 2), mar = .1+c(3, 3, 3, .5), mgp = c(2, .8, 0))
for (L in c(20, 40, 60, 80)) {
  Lab <- cbind(L = L, ab)
  srgb <- convertColor(Lab, from = "Lab", to = "sRGB", clip = NA)
  clipped <- attr(na.omit(srgb), "na.action")
  srgb[clipped, ] <- 0
  cols <- rgb(srgb[, 1], srgb[, 2], srgb[, 3])
  image((-10:15)*10, (-15:10)*10, matrix(1:(26*26), ncol = 26), col = cols,
        xlab = "a", ylab = "b", main = paste0("Lab: L=", L))
}
cols <- t(col2rgb(palette())); rownames(cols) <- palette(); cols
zapsmall(lab <- convertColor(cols, from = "sRGB", to = "Lab", scale.in = 255))
stopifnot(all.equal(cols, # converting back.. getting the original:
round(convertColor(lab, from = "Lab", to = "sRGB", scale.out = 255)),
check.attributes = FALSE))
```
`cairoSymbolFont` Specify a Symbol Font
----------------------------------------
### Description
Specify a symbol font for a Cairo-based graphics device. This function provides the opportunity to specify whether the font supports Private Use Area code points.
### Usage
```
cairoSymbolFont(family, usePUA = TRUE)
```
### Arguments
| | |
| --- | --- |
| `family` | A character vector giving the symbol font family name. |
| `usePUA` | Does the font support Private Use Area code points? |
### Details
On Cairo-based graphics devices, when drawing with a symbol font (e.g., <plotmath>), Adobe Symbol Encoding characters are converted to UTF-8 code points. This conversion can use Private Use Area code points or not. It is useful to be able to specify this option because some fonts (e.g., the OpenSymbol font that is included in LibreOffice) have glyphs mapped to the Private Use Area and some fonts (e.g., Nimbus Sans L, the URW Fonts equivalent of Helvetica) do not.
### Value
An object of class `"CairoSymbolFont"`.
### See Also
`[cairo\_pdf](cairo)`.
### Examples
```
## Not run:
## If a font uses PUA, we can just specify the font name ...
cairo_pdf(symbolfamily="OpenSymbol")
dev.off()
## ... or equivalently ...
cairo_pdf(symbolfamily=cairoSymbolFont("OpenSymbol"))
dev.off()
## If a font does not use PUA, we must indicate that ...
cairo_pdf(symbolfamily=cairoSymbolFont("Nimbus Sans", usePUA=FALSE))
dev.off()
## End(Not run)
```
`chull` Compute Convex Hull of a Set of Points
-----------------------------------------------
### Description
Computes the subset of points which lie on the convex hull of the set of points specified.
### Usage
```
chull(x, y = NULL)
```
### Arguments
| | |
| --- | --- |
| `x, y` | coordinate vectors of points. This can be specified as two vectors `x` and `y`, a 2-column matrix `x`, a list `x` with two components, etc, see `<xy.coords>`. |
### Details
`<xy.coords>` is used to interpret the specification of the points. Infinite, missing and `NaN` values are not allowed.
The algorithm is that given by Eddy (1977).
### Value
An integer vector giving the indices of the unique points lying on the convex hull, in clockwise order. (The first will be returned for duplicate points.)
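A minimal sketch illustrates the clockwise ordering and that interior points are excluded (the fifth point below is an assumption for illustration, placed inside the square):

```r
x <- c(0, 1, 1, 0, 0.5)  # four corners of the unit square, plus
y <- c(0, 0, 1, 1, 0.5)  # a fifth point that lies strictly inside it
chull(x, y)              # indices of the four corners only, in clockwise order;
                         # index 5 (the interior point) does not appear
```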
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole.
Eddy, W. F. (1977). A new convex hull algorithm for planar sets. *ACM Transactions on Mathematical Software*, **3**, 398–403. doi: [10.1145/355759.355766](https://doi.org/10.1145/355759.355766).
Eddy, W. F. (1977). Algorithm 523: CONVEX, A new convex hull algorithm for planar sets [Z]. *ACM Transactions on Mathematical Software*, **3**, 411–412. doi: [10.1145/355759.355768](https://doi.org/10.1145/355759.355768).
### See Also
`<xy.coords>`, `[polygon](../../graphics/html/polygon)`
### Examples
```
X <- matrix(stats::rnorm(2000), ncol = 2)
chull(X)
## Not run:
# Example usage from graphics package
plot(X, cex = 0.5)
hpts <- chull(X)
hpts <- c(hpts, hpts[1])
lines(X[hpts, ])
## End(Not run)
```
`dev2bitmap` Graphics Device for Bitmap Files via Ghostscript
--------------------------------------------------------------
### Description
`bitmap` generates a graphics file. `dev2bitmap` copies the current graphics device to a file in a graphics format.
### Usage
```
bitmap(file, type = "png16m", height = 7, width = 7, res = 72,
units = "in", pointsize, taa = NA, gaa = NA, ...)
dev2bitmap(file, type = "png16m", height = 7, width = 7, res = 72,
units = "in", pointsize, ...,
method = c("postscript", "pdf"), taa = NA, gaa = NA)
```
### Arguments
| | |
| --- | --- |
| `file` | The output file name, with an appropriate extension. |
| `type` | The type of bitmap. |
| `width, height` | Dimensions of the display region. |
| `res` | Resolution, in dots per inch. |
| `units` | The units in which `height` and `width` are given. Can be `in` (inches), `px` (pixels), `cm` or `mm`. |
| `pointsize` | The pointsize to be used for text: defaults to something reasonable given the width and height. |
| `...` | Other parameters passed to `<postscript>` or `<pdf>`. |
| `method` | Should the plot be done by `<postscript>` or `<pdf>`? |
| `taa, gaa` | Number of bits of antialiasing for text and for graphics respectively. Usually 4 (for best effect) or 2. Not supported on all types. |
### Details
`dev2bitmap` works by copying the current device to a `<postscript>` or `<pdf>` device, and post-processing the output file using `ghostscript`. `bitmap` works in the same way using a `postscript` device and post-processing the output as ‘printing’.
You will need `ghostscript`: the full path to the executable can be set by the environment variable R\_GSCMD. If this is unset, a Ghostscript executable will be looked for by name on your path: on a Unix-alike `"gs"` is used, and on Windows the setting of the environment variable GSC is used, otherwise commands `"gswin64c.exe"` then `"gswin32c.exe"` are tried.
The types available will depend on the version of `ghostscript`, but are likely to include `"jpeg"`, `"jpegcmyk"`, `"jpeggray"`, `"tiffcrle"`, `"tiffg3"`, `"tiffg32d"`, `"tiffg4"`, `"tiffgray"`, `"tifflzw"`, `"tiffpack"`, `"tiff12nc"`, `"tiff24nc"`, `"tiff32nc"`, `"png16"`, `"png16m"`, `"png256"`, `"png48"`, `"pngmono"`, `"pnggray"`, `"pngalpha"`, `"bmp16"`, `"bmp16m"`, `"bmp256"`, `"bmp32b"`, `"bmpgray"`, `"bmpmono"`.
The default type, `"png16m"`, supports 24-bit colour and anti-aliasing. Type `"png256"` uses a palette of 256 colours and could give a more compact representation. Monochrome graphs can use `"pngmono"`, or `"pnggray"` if anti-aliasing is desired. Plots with a transparent background and varying degrees of transparency should use `"pngalpha"`.
Note that for a colour TIFF image you probably want `"tiff24nc"`, which is 8-bit per channel RGB (the most common TIFF format). None of the listed TIFF types support transparency. `"tiff32nc"` uses 8-bit per channel CMYK, which printers might require.
For formats which contain a single image, a file specification like `Rplots%03d.png` can be used: this is interpreted by Ghostscript.
For `dev2bitmap` if just one of `width` and `height` is specified, the other is chosen to preserve the aspect ratio of the device being copied. The main reason to prefer `method = "pdf"` over the default would be to allow semi-transparent colours to be used.
For graphics parameters such as `"cra"` that need to work in pixels, the default resolution of 72 dpi is always used.
On Windows only, paths for `file` and R\_GSCMD which contain spaces are mapped to short names *via* `[shortPathName](../../utils/html/shortpathname)`.
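A minimal sketch of typical use for both functions (assumes a working Ghostscript installation, located as described above; file names are illustrative):

```r
## Render a plot directly to a PNG via Ghostscript:
bitmap("myplot.png", type = "png16m", width = 5, height = 5, res = 150)
plot(1:10)
dev.off()

## Or copy whatever is on the current screen device:
plot(1:10)
dev2bitmap("copy.png", type = "png16m", res = 150)
```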
### Value
None.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual. These devices follow the underlying device, so when viewed at the stated `res`:
* The default device size is 7 inches square.
* Font sizes are in big points.
* The default font family is (for the standard Ghostscript setup) URW Nimbus Sans.
* Line widths are as a multiple of 1/96 inch, with no minimum.
* Circles of any radius are allowed.
* Colours are interpreted by the viewing/printing application.
### Note
On Windows, use of `bitmap` will leave a temporary file (with file name starting `Rbit`).
Although using `type = "pdfwrite"` will work for simple plots, it is not recommended. Either use `<pdf>` to produce PDF directly, or call `ps2pdf -dAutoRotatePages=/None` on the output of `<postscript>`: that command is optimized to do the conversion to PDF in ways that these functions are not.
### See Also
`[savePlot](saveplot)`, which for `windows` and `X11(type = "cairo")` provides a simple way to record a PNG record of the current plot.
`<postscript>`, `<pdf>`, `<png>`, `[jpeg](png)`, `[tiff](png)` and `bmp`.
To display an array of data, see `[image](../../graphics/html/image)`.
`grSoftVersion` Report Versions of Graphics Software
-----------------------------------------------------
### Description
Report versions of third-party graphics software available on the current platform for **R**'s graphics.
### Usage
```
grSoftVersion()
```
### Value
A named character vector containing at least the elements
| | |
| --- | --- |
| `cairo` | the version of cairographics in use, or `""` if cairographics is not available. |
| `cairoFT` | the FreeType/FontConfig versions if cairographics is using those libraries directly (not *via* pango); otherwise, `""`. Earlier versions of **R** returned `"yes"` rather than the versions. The FontConfig version is determined when **R** is built. |
| `pango` | the version of pango in use, or `""` if pango is not available. |
It may also contain the versions of third-party software used by the standard (on Windows), or X11-based (on Unix-alikes) bitmap devices:
| | |
| --- | --- |
| `libpng` | the version of `libpng` in use, or `""` if not available. |
| `jpeg` | the version of the JPEG headers used for compilation, or `""` if JPEG support was not compiled in. |
| `libtiff` | the version of `libtiff` in use, or `""` if not available. |
It is conceivable but unlikely that the cairo-based bitmap devices will use different versions linked *via* cairographics, especially `png(type = "cairo-png")`.
On macOS, if available, the Quartz-based devices will use the system versions of these libraries rather than those reported here.
Unless otherwise stated the reported version is that of the (possibly dynamically-linked) library in use at runtime.
Note that `libjpeg-turbo` as used on some Linux distributions reports its version as `"6.2"`, the IJG version from which it forked.
### See Also
`[extSoftVersion](../../base/html/extsoftversion)` for versions of non-graphics software.
### Examples
```
grSoftVersion()
```
`ps.options` Auxiliary Function to Set/View Defaults for Arguments of postscript
---------------------------------------------------------------------------------
### Description
The auxiliary function `ps.options` can be used to set or view (if called without arguments) the default values for some of the arguments to `<postscript>`.
`ps.options` needs to be called before calling `postscript`, and the default values it sets can be overridden by supplying arguments to `postscript`.
### Usage
```
ps.options(..., reset = FALSE, override.check = FALSE)
setEPS(...)
setPS(...)
```
### Arguments
| | |
| --- | --- |
| `...` | arguments `onefile`, `family`, `title`, `fonts`, `encoding`, `bg`, `fg`, `width`, `height`, `horizontal`, `pointsize`, `paper`, `pagecentre`, `print.it`, `command`, `colormodel` and `fillOddEven` can be supplied. `onefile`, `horizontal` and `paper` are *ignored* for `setEPS` and `setPS`. |
| `reset` | logical: should the defaults be reset to their ‘factory-fresh’ values? |
| `override.check` | logical argument passed to `<check.options>`. See the Examples. |
### Details
If both `reset = TRUE` and `...` are supplied the defaults are first reset to the ‘factory-fresh’ values and then the new values are applied.
For backwards compatibility argument `append` is accepted but ignored with a warning.
`setEPS` and `setPS` are wrappers to set defaults appropriate for figures for inclusion in documents (the default size is 7 inches square unless `width` or `height` is supplied) and for spooling to a PostScript printer respectively. For historical reasons the latter is the ultimate default.
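For example, a sketch of producing an EPS figure for inclusion in a document with the defaults set by `setEPS` (the file name is illustrative):

```r
setEPS()                    # set defaults suitable for a document figure
postscript("figure.eps")    # 7 x 7 inches unless width/height were supplied
plot(1:10)
dev.off()
```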
### Value
A named list of all the previous defaults. If `...` or `reset = TRUE` is supplied the result has the visibility flag turned off.
### See Also
`<postscript>`, `<pdf.options>`
### Examples
```
ps.options(bg = "pink")
utils::str(ps.options())
### ---- error checking of arguments: ----
try(ps.options(width = 0:12, onefile = 0, bg = pi))
# override the check for 'width', but not 'bg':
ps.options(width = 0:12, bg = pi, override.check = c(TRUE,FALSE))
utils::str(ps.options())
ps.options(reset = TRUE) # back to factory-fresh
```
`pictex` A PicTeX Graphics Driver
----------------------------------
### Description
This function produces simple graphics suitable for inclusion in TeX and LaTeX documents. It dates from the very early days of **R** and is for historical interest only.
### Usage
```
pictex(file = "Rplots.tex", width = 5, height = 4, debug = FALSE,
bg = "white", fg = "black")
```
### Arguments
| | |
| --- | --- |
| `file` | the file path where output will appear. Tilde expansion (see `[path.expand](../../base/html/path.expand)`) is done. An input with a marked encoding is converted to the native encoding or an error is given. |
| `width` | The width of the plot in inches. |
| `height` | the height of the plot in inches. |
| `debug` | should debugging information be printed. |
| `bg` | the background color for the plot. Ignored. |
| `fg` | the foreground color for the plot. Ignored. |
### Details
This driver is much more basic than the other graphics drivers included in **R**. It does not have any font metric information, so the use of `<plotmath>` is not supported.
Line widths are ignored except when setting the spacing of line textures. `pch = "."` corresponds to a square of side 1pt.
This device does not support colour (nor does the PicTeX package), and all colour settings are ignored.
Note that text is recorded in the file as-is, so annotations involving TeX special characters (such as ampersand and underscore) need to be quoted as they would be when entering TeX.
Multiple plots will be placed as separate environments in the output file.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is 5 inches by 4 inches.
* There is no `pointsize` argument: the default size is interpreted as 10 point.
* The only font family is `cmss10`.
* Line widths are only used when setting the spacing on line textures.
* Circles of any radius are allowed.
* Colour is not supported.
### Author(s)
This driver was provided around 1996–7 by Valerio Aimale of the Department of Internal Medicine, University of Genoa, Italy.
### References
Knuth, D. E. (1984) *The TeXbook.* Reading, MA: Addison-Wesley.
Lamport, L. (1994) *LATEX: A Document Preparation System.* Reading, MA: Addison-Wesley.
Goossens, M., Mittelbach, F. and Samarin, A. (1994) *The LATEX Companion.* Reading, MA: Addison-Wesley.
### See Also
`<postscript>`, `<pdf>`, `[Devices](devices)`.
The `tikzDevice` in the CRAN package of that name for more modern TeX-based graphics (<http://pgf.sourceforge.net/>, although including PDF figures *via* `pdftex` is most common in (La)TeX documents).
### Examples
```
require(graphics)
pictex()
plot(1:11, (-5:5)^2, type = "b", main = "Simple Example Plot")
dev.off()
##--------------------
## Not run:
%% LaTeX Example
\documentclass{article}
\usepackage{pictex}
\usepackage{graphics} % for \rotatebox
\begin{document}
%...
\begin{figure}[h]
\centerline{\input{Rplots.tex}}
\caption{}
\end{figure}
%...
\end{document}
## End(Not run)
##--------------------
unlink("Rplots.tex")
```
`densCols` Colors for Smooth Density Plots
-------------------------------------------
### Description
`densCols` produces a vector containing colors which encode the local densities at each point in a scatterplot.
### Usage
```
densCols(x, y = NULL, nbin = 128, bandwidth,
colramp = colorRampPalette(blues9[-(1:3)]))
blues9
```
### Arguments
| | |
| --- | --- |
| `x, y` | the `x` and `y` arguments provide the x and y coordinates of the points. Any reasonable way of defining the coordinates is acceptable. See the function `<xy.coords>` for details. If supplied separately, they must be of the same length. |
| `nbin` | numeric vector of length one (for both directions) or two (for x and y separately) specifying the number of equally spaced grid points for the density estimation; directly used as `gridsize` in `[bkde2D](../../kernsmooth/html/bkde2d)()`. |
| `bandwidth` | numeric vector (length 1 or 2) of smoothing bandwidth(s). If missing, a more or less useful default is used. `bandwidth` is subsequently passed to function `[bkde2D](../../kernsmooth/html/bkde2d)`. |
| `colramp` | function accepting an integer `n` as an argument and returning `n` colors. |
### Details
`densCols` computes and returns the set of colors that will be used in plotting, calling `[bkde2D](../../kernsmooth/html/bkde2d)(*, bandwidth, gridsize = nbin, ..)` from package [KernSmooth](https://CRAN.R-project.org/package=KernSmooth).
`blues9` is a set of 9 color shades of blue used as the default in plotting.
### Value
`densCols` returns a vector of length `nrow(x)` that contains colors to be used in a subsequent scatterplot. Each color represents the local density around the corresponding point.
### Author(s)
Florian Hahne at FHCRC, originally
### See Also
`[bkde2D](../../kernsmooth/html/bkde2d)` from package [KernSmooth](https://CRAN.R-project.org/package=KernSmooth); further, `[smoothScatter](../../graphics/html/smoothscatter)()` (package graphics) which builds on the same computations as `densCols`.
### Examples
```
x1 <- matrix(rnorm(1e3), ncol = 2)
x2 <- matrix(rnorm(1e3, mean = 3, sd = 1.5), ncol = 2)
x <- rbind(x1, x2)
dcols <- densCols(x)
graphics::plot(x, col = dcols, pch = 20, main = "n = 1000")
```
`xyz.coords` Extracting Plotting Structures
--------------------------------------------
### Description
Utility for obtaining consistent x, y and z coordinates and labels for three dimensional (3D) plots.
### Usage
```
xyz.coords(x, y = NULL, z = NULL,
xlab = NULL, ylab = NULL, zlab = NULL,
log = NULL, recycle = FALSE, setLab = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x, y, z` | the x, y and z coordinates of a set of points. Both `y` and `z` can be left at `NULL`. In this case, an attempt is made to interpret `x` in a way suitable for plotting. If the argument is a formula `zvar ~ xvar + yvar`, `xvar`, `yvar` and `zvar` are used as x, y and z variables; if the argument is a list containing components `x`, `y` and `z`, these are assumed to define plotting coordinates; if the argument is a matrix or `[data.frame](../../base/html/data.frame)` with three or more columns, the first is assumed to contain the x values, the 2nd the y ones, and the 3rd the z ones – independently of any column names that `x` may have. Alternatively two arguments `x` and `y` can be provided (leaving `z = NULL`). One may be real, the other complex; in any other case, the arguments are coerced to vectors and the values plotted against their indices. |
| `xlab, ylab, zlab` | names for the x, y and z variables to be extracted. |
| `log` | character, `"x"`, `"y"`, `"z"` or combinations. Sets negative values to `[NA](../../base/html/na)` and gives a warning. |
| `recycle` | logical; if `TRUE`, recycle (`[rep](../../base/html/rep)`) the shorter ones of `x`, `y` or `z` if their lengths differ. |
| `setLab` | logical indicating if the resulting `xlab` and `ylab` should be constructed from the “kind” of `(x,y)`; otherwise, the arguments `xlab` and `ylab` are used. |
### Value
A list with the components
| | |
| --- | --- |
| `x` | numeric (i.e., `[double](../../base/html/double)`) vector of abscissa values. |
| `y` | numeric vector of the same length as `x`. |
| `z` | numeric vector of the same length as `x`. |
| `xlab` | `character(1)` or `NULL`, the axis label of `x`. |
| `ylab` | `character(1)` or `NULL`, the axis label of `y`. |
| `zlab` | `character(1)` or `NULL`, the axis label of `z`. |
### Author(s)
Uwe Ligges and Martin Maechler
### See Also
`<xy.coords>` for 2D.
### Examples
```
xyz.coords(data.frame(10*1:9, -4), y = NULL, z = NULL)
xyz.coords(1:5, stats::fft(1:5), z = NULL, xlab = "X", ylab = "Y")
y <- 2 * (x2 <- 10 + (x1 <- 1:10))
xyz.coords(y ~ x1 + x2, y = NULL, z = NULL)
xyz.coords(data.frame(x = -1:9, y = 2:12, z = 3:13), y = NULL, z = NULL,
log = "xy")
##> Warning message: 2 x values <= 0 omitted ...
```
`recordplot` Record and Replay Plots
-------------------------------------
### Description
Functions to save the current plot in an **R** variable, and to replay it.
### Usage
```
recordPlot(load=NULL, attach=NULL)
replayPlot(x, reloadPkgs=FALSE)
```
### Arguments
| | |
| --- | --- |
| `load` | If not `NULL`, a character vector of package names, which are saved as part of the recorded plot and reloaded, *via* `[loadNamespace](../../base/html/ns-load)`, when the plot is replayed. |
| `attach` | If not `NULL`, a character vector of package names, which are saved as part of the recorded plot and reattached, *via* `[library](../../base/html/library)`, when the plot is replayed. |
| `x` | A saved plot. |
| `reloadPkgs` | A logical indicating whether to reload and/or reattach any packages that were saved as part of the recorded plot. |
### Details
These functions record and replay the displaylist of the current graphics device. The returned object is of class `"recordedplot"`, and `replayPlot` acts as a `print` method for that class.
The returned object is stored as a pairlist, but the usual methods for examining **R** objects such as `[deparse](../../base/html/deparse)` and `[str](../../utils/html/str)` are liable to mislead.
### Value
`recordPlot` returns an object of class `"recordedplot"`.
`replayPlot` has no return value.
### Warning
The format of recorded plots may change between **R** versions, so recorded plots should **not** be used as a permanent storage format for **R** plots.
As of **R** 3.3.0, it is possible (again) to replay a plot from another **R** session using, for example, `[saveRDS](../../base/html/readrds)` and `[readRDS](../../base/html/readrds)`. It is even possible to replay a plot from another **R** version, however, this will produce warnings, may produce errors, or something worse.
### Note
Replay of a recorded plot may not produce the correct result (or may just fail) if the display list contains a call to `[recordGraphics](recordgraphics)` which in turn contains an expression that calls code from a non-base package other than graphics or grid. The most well-known example of this is a plot drawn with the package [ggplot2](https://CRAN.R-project.org/package=ggplot2). One solution is to load the relevant package(s) before replaying the recorded plot. The `load` and `attach` arguments to `recordPlot` can be used to automate this - any packages named in `load` will be reloaded, via `[loadNamespace](../../base/html/ns-load)`, and any packages named in `attach` will be reattached, via `[library](../../base/html/library)`, as long as `reloadPkgs` is `TRUE` in the call to `replayPlot`. This is only relevant when attempting to replay in one R session a plot that was recorded in a different R session.
### See Also
The displaylist can be turned on and off using `[dev.control](dev2)`. Initially recording is on for screen devices, and off for print devices.
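A minimal sketch of recording and replaying within a single session (on a screen device, where recording is on by default):

```r
require(graphics)
plot(1:10, main = "original")  # draw on a screen device with the displaylist enabled
p <- recordPlot()              # snapshot the current displaylist
plot.new()                     # start a fresh (blank) plot
replayPlot(p)                  # redraw the recorded plot
```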
`palettes` Color Palettes
--------------------------
### Description
Create a vector of `n` contiguous colors.
### Usage
```
hcl.colors(n, palette = "viridis", alpha = NULL, rev = FALSE, fixup = TRUE)
hcl.pals(type = NULL)
rainbow(n, s = 1, v = 1, start = 0, end = max(1, n - 1)/n,
alpha, rev = FALSE)
heat.colors(n, alpha, rev = FALSE)
terrain.colors(n, alpha, rev = FALSE)
topo.colors(n, alpha, rev = FALSE)
cm.colors(n, alpha, rev = FALSE)
```
### Arguments
| | |
| --- | --- |
| `n` | the number of colors (*≥ 1*) to be in the palette. |
| `palette` | a valid palette name (one of `hcl.pals()`). The name is matched to the list of available palettes, ignoring upper vs. lower case, spaces, dashes, etc. in the matching. |
| `alpha` | an alpha-transparency level in the range [0,1] (0 means transparent and 1 means opaque), see argument `alpha` in `<hsv>` and `<hcl>`, respectively. Since **R** 4.0.0, a `[missing](../../base/html/missing)`, i.e., not explicitly specified `alpha` is equivalent to `alpha = NULL`, which does *not* add opacity codes (`"FF"`) to the individual color hex codes. |
| `rev` | logical indicating whether the ordering of the colors should be reversed. |
| `fixup` | logical indicating whether the resulting color should be corrected to RGB coordinates in [0,1], see `<hcl>`. |
| `type` | the type of palettes to list: `"qualitative"`, `"sequential"`, `"diverging"`, or `"divergingx"`. `NULL` lists all palettes. |
| `s, v` | the ‘saturation’ and ‘value’ to be used to complete the HSV color descriptions. |
| `start` | the (corrected) hue in [0,1] at which the rainbow begins. |
| `end` | the (corrected) hue in [0,1] at which the rainbow ends. |
### Details
All of these functions (except the helper function `hcl.pals`) create a vector of `n` contiguous colors, either based on the HSV color space (rainbow, heat, terrain, topography, and cyan-magenta colors) or the perceptually-based HCL color space.
HSV (hue-saturation-value) is a simple transformation of the RGB (red-green-blue) space which was therefore a convenient choice for color palettes in many software systems (see also `<hsv>`). However, HSV colors capture the perceptual properties hue, colorfulness/saturation/chroma, and lightness/brightness/luminance/value only poorly and consequently the corresponding palettes are typically not a good choice for statistical graphics and data visualization.
In contrast, HCL (hue-chroma-luminance) colors are much more suitable for capturing human color perception (see also `<hcl>`) and better color palettes can be derived based on HCL coordinates. Conceptually, three types of palettes are often distinguished:
* Qualitative: For coding categorical information, i.e., where no particular ordering of categories is available and every color should receive the same perceptual weight.
* Sequential: For coding ordered/numeric information, i.e., where colors go from high to low (or vice versa).
* Diverging: Designed for coding numeric information around a central neutral value, i.e., where colors diverge from neutral to two extremes.
The `hcl.colors` function provides a basic and lean implementation of the pre-specified palettes in the colorspace package. In addition to the types above, the functions distinguish “diverging” palettes where the two arms are restricted to be rather balanced as opposed to flexible “divergingx” palettes that combine two sequential palettes without any restrictions. The latter group also includes the cividis palette as it is based on two different hues (blue and yellow) but it is actually a sequential palette (going from dark to light).
The names of all available HCL palettes can be queried with the `hcl.pals` function and they are also visualized by color swatches in the examples. Many of the palettes closely approximate palettes of the same name from various other packages (including RColorBrewer, rcartocolor, viridis, scico, among others).
The default HCL palette is the widely used viridis palette which is a sequential palette with relatively high chroma throughout so that it also works reasonably well as a qualitative palette. However, while viridis is a rather robust default palette, more suitable HCL palettes are available for most visualizations.
For example, `"Dark 3"` works well for shading points or lines in up to five groups, `"YlGnBu"` is a sequential palette similar to `"viridis"` but with aligned chroma/luminance, and `"Green-Brown"` or `"Blue-Red 3"` are colorblind-safe diverging palettes.
Further qualitative palettes are provided in the `[palette.colors](palette)` function. While the qualitative palettes in `hcl.colors` are always based on the same combination of chroma and luminance, the `palette.colors` vary in chroma and luminance up to a certain degree. The advantage of fixing chroma/luminance is that the perceptual weight of the resulting colors is more balanced. The advantage of allowing variation is that more distinguishable colors can be obtained, especially for viewers with color vision deficiencies.
Note that the `rainbow` function implements the (in-)famous rainbow (or jet) color palette that was used very frequently in many software packages but has been widely criticized for its many perceptual problems. It is specified by a `start` and `end` hue with red = 0, yellow = *1/6*, green = *2/6*, cyan = *3/6*, blue = *4/6*, and magenta = *5/6*. However, these are very flashy and unbalanced with respect to both chroma and luminance which can lead to various optical illusions. Also, the hues that are equispaced in RGB space tend to cluster at the red, green, and blue primaries. Therefore, it is recommended to use a suitable palette from `hcl.colors` instead of `rainbow`.
### Value
A character vector `cv` containing either palette names (for `hcl.pals`) or `n` hex color codes (for all other functions). The latter can be used either to create a user-defined color palette for subsequent graphics by `<palette>(cv)`, a `col =` specification in graphics functions or in `par`.
### References
Wikipedia (2019). HCL color space – Wikipedia, The Free Encyclopedia. <https://en.wikipedia.org/w/index.php?title=HCL_color_space&oldid=883465135>. Accessed March 26, 2019.
Zeileis, A., Fisher, J. C., Hornik, K., Ihaka, R., McWhite, C. D., Murrell, P., Stauffer, R. and Wilke, C. O. (2019). “colorspace: A toolbox for manipulating and assessing colors and palettes.” arXiv:1903.06490, arXiv.org E-Print Archive. <https://arxiv.org/abs/1903.06490>.
Ihaka, R. (2003). “Colour for presentation graphics.” Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), March 20-22, 2003, Technische Universität Wien, Vienna, Austria. <http://www.ci.tuwien.ac.at/Conferences/DSC-2003/>.
Zeileis, A., Hornik, K. and Murrell, P. (2009). Escaping RGBland: Selecting colors for statistical graphics. *Computational Statistics & Data Analysis*, **53**, 3259–3270. doi: [10.1016/j.csda.2008.11.033](https://doi.org/10.1016/j.csda.2008.11.033).
### See Also
`<colors>`, `<palette>`, `<gray.colors>`, `<hsv>`, `<hcl>`, `<rgb>`, `<gray>` and `<col2rgb>` for translating to RGB numbers.
### Examples
```
require("graphics")
# color wheels in RGB/HSV and HCL space
par(mfrow = c(2, 2))
pie(rep(1, 12), col = rainbow(12), main = "RGB/HSV")
pie(rep(1, 12), col = hcl.colors(12, "Set 2"), main = "HCL")
par(mfrow = c(1, 1))
## color swatches for RGB/HSV palettes
demo.pal <-
function(n, border = if (n < 32) "light gray" else NA,
main = paste("color palettes; n=", n),
ch.col = c("rainbow(n, start=.7, end=.1)", "heat.colors(n)",
"terrain.colors(n)", "topo.colors(n)",
"cm.colors(n)"))
{
nt <- length(ch.col)
i <- 1:n; j <- n / nt; d <- j/6; dy <- 2*d
plot(i, i+d, type = "n", yaxt = "n", ylab = "", main = main)
for (k in 1:nt) {
rect(i-.5, (k-1)*j+ dy, i+.4, k*j,
col = eval(str2lang(ch.col[k])), border = border)
text(2*j, k * j + dy/4, ch.col[k])
}
}
demo.pal(16)
## color swatches for HCL palettes
hcl.swatch <- function(type = NULL, n = 5, nrow = 11,
border = if (n < 15) "black" else NA) {
palette <- hcl.pals(type)
cols <- sapply(palette, hcl.colors, n = n)
ncol <- ncol(cols)
nswatch <- min(ncol, nrow)
par(mar = rep(0.1, 4),
mfrow = c(1, min(5, ceiling(ncol/nrow))),
pin = c(1, 0.5 * nswatch),
cex = 0.7)
while (length(palette)) {
subset <- 1:min(nrow, ncol(cols))
plot.new()
plot.window(c(0, n), c(0, nrow + 1))
text(0, rev(subset) + 0.1, palette[subset], adj = c(0, 0))
y <- rep(subset, each = n)
rect(rep(0:(n-1), n), rev(y), rep(1:n, n), rev(y) - 0.5,
col = cols[, subset], border = border)
palette <- palette[-subset]
cols <- cols[, -subset, drop = FALSE]
}
par(mfrow = c(1, 1), mar = c(5.1, 4.1, 4.1, 2.1), cex = 1)
}
hcl.swatch()
hcl.swatch("qualitative")
hcl.swatch("sequential")
hcl.swatch("diverging")
hcl.swatch("divergingx")
## heat maps with sequential HCL palette (purple)
image(volcano, col = hcl.colors(11, "purples", rev = TRUE))
filled.contour(volcano, nlevels = 10,
color.palette = function(n, ...)
hcl.colors(n, "purples", rev = TRUE, ...))
## list available HCL color palettes
hcl.pals("qualitative")
hcl.pals("sequential")
hcl.pals("diverging")
hcl.pals("divergingx")
```
`col2rgb` Color to RGB Conversion
----------------------------------
### Description
**R** color to RGB (red/green/blue) conversion.
### Usage
```
col2rgb(col, alpha = FALSE)
```
### Arguments
| | |
| --- | --- |
| `col` | vector of any of the three kinds of **R** color specifications, i.e., either a color name (as listed by `<colors>()`), a hexadecimal string of the form `"#rrggbb"` or `"#rrggbbaa"` (see `<rgb>`), or a positive integer `i` meaning `<palette>()[i]`. |
| `alpha` | logical value indicating whether the alpha channel (opacity) values should be returned. |
### Details
`[NA](../../base/html/na)` (as integer or character) and `"NA"` mean transparent, which can also be specified as `"transparent"`.
Values of `col` not of one of these types are coerced: real vectors are coerced to integer and other types to character. (Factors are coerced to character; in all other cases the class is ignored when doing the coercion.)
Zero and negative values of `col` are an error.
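A minimal sketch of these rules — the transparent forms, the coercion of factors and reals, and the error for non-positive numbers:

```r
## NA and "transparent" give a fully transparent colour (alpha 0)
col2rgb(c(NA, "transparent"), alpha = TRUE)

## factors are coerced to character, reals to integer palette indices
col2rgb(factor("red"))   # same as col2rgb("red")
col2rgb(1.9)             # coerced to integer 1, i.e. palette()[1]

## zero and negative values are an error
try(col2rgb(0))
```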
### Value
An integer matrix with three or four (for `alpha = TRUE`) rows and number of columns the length of `col`. If `col` has names these are used as the column names of the return value.
### Author(s)
Martin Maechler and the R core team.
### See Also
`<rgb>`, `<colors>`, `<palette>`, etc.
The newer, more flexible interface, `[convertColor](convertcolor)()`.
### Examples
```
col2rgb("peachpuff")
col2rgb(c(blu = "royalblue", reddish = "tomato")) # note: colnames
col2rgb(1:8) # the ones from the palette() (if the default)
col2rgb(paste0("gold", 1:4))
col2rgb("#08a0ff")
## all three kinds of color specifications:
col2rgb(c(red = "red", hex = "#abcdef"))
col2rgb(c(palette = 1:3))
##-- NON-INTRODUCTORY examples --
grC <- col2rgb(paste0("gray", 0:100))
table(print(diff(grC["red",]))) # '2' or '3': almost equidistant
## The 'named' grays are in between {"slate gray" is not gray, strictly}
col2rgb(c(g66 = "gray66", darkg = "dark gray", g67 = "gray67",
g74 = "gray74", gray = "gray", g75 = "gray75",
g82 = "gray82", light = "light gray", g83 = "gray83"))
crgb <- col2rgb(cc <- colors())
colnames(crgb) <- cc
t(crgb) # The whole table
ccodes <- c(256^(2:0) %*% crgb) # = internal codes
## How many names are 'aliases' of each other:
table(tcc <- table(ccodes))
length(uc <- unique(sort(ccodes))) # 502
## All the multiply named colors:
mult <- uc[tcc >= 2]
cl <- lapply(mult, function(m) cc[ccodes == m])
names(cl) <- apply(col2rgb(sapply(cl, function(x)x[1])),
2, function(n)paste(n, collapse = ","))
utils::str(cl)
## Not run:
if(require(xgobi)) { ## Look at the color cube dynamically :
tc <- t(crgb[, !duplicated(ccodes)])
table(is.gray <- tc[,1] == tc[,2] & tc[,2] == tc[,3]) # (397, 105)
xgobi(tc, color = c("gold", "gray")[1 + is.gray])
}
## End(Not run)
```
`xfig` XFig Graphics Device
----------------------------
### Description
`xfig` starts the graphics device driver for producing XFig (version 3.2) graphics.
The auxiliary function `ps.options` can be used to set and view (if called without arguments) default values for the arguments to `xfig` and `postscript`.
### Usage
```
xfig(file = if(onefile) "Rplots.fig" else "Rplot%03d.fig",
onefile = FALSE, encoding = "none",
paper = "default", horizontal = TRUE,
width = 0, height = 0, family = "Helvetica",
pointsize = 12, bg = "transparent", fg = "black",
pagecentre = TRUE, defaultfont = FALSE, textspecial = FALSE)
```
### Arguments
| | |
| --- | --- |
| `file` | a character string giving the file path. For use with `onefile = FALSE` give a C integer format such as `"Rplot%03d.fig"` (the default in that case). (See `<postscript>` for further details.) |
| `onefile` | logical: if true allow multiple figures in one file. If false, assume only one page per file and generate a file number containing the page number. |
| `encoding` | The encoding in which to write text strings. The default is not to re-encode. This can be any encoding recognized by `[iconv](../../base/html/iconv)`: in a Western UTF-8 locale you probably want to select an 8-bit encoding such as `latin1`, and in an East Asian locale an `EUC` encoding. If re-encoding fails, the text strings will be written in the current encoding with a warning. |
| `paper` | the size of paper region. The choices are `"A4"`, `"Letter"` and `"Legal"` (and these can be lowercase). A further choice is `"default"`, which is the default. If this is selected, the papersize is taken from the option `"papersize"` if that is set to a non-empty value, otherwise `"A4"`. |
| `horizontal` | the orientation of the printed image, a logical. Defaults to true, that is landscape orientation. |
| `width, height` | the width and height of the graphics region in inches. The default is to use the entire page less a 0.5 inch overall margin in each direction. (See `<postscript>` for further details.) |
| `family` | the font family to be used. This must be one of `"AvantGarde"`, `"Bookman"`, `"Courier"`, `"Helvetica"` (the default), `"Helvetica-Narrow"`, `"NewCenturySchoolbook"`, `"Palatino"` or `"Times"`. Any other value is replaced by `"Helvetica"`, with a warning. |
| `pointsize` | the default point size to be used. |
| `bg` | the initial background color to be used. |
| `fg` | the initial foreground color to be used. |
| `pagecentre` | logical: should the device region be centred on the page? |
| `defaultfont` | logical: should the device use xfig's default font? |
| `textspecial` | logical: should the device set the textspecial flag for all text elements. This is useful when generating pstex from xfig figures. |
### Details
Although `xfig` can produce multiple plots in one file, the XFig format does not say how to separate or view them. So `onefile = FALSE` is the default.
The `file` argument is interpreted as a C integer format as used by `[sprintf](../../base/html/sprintf)`, with integer argument the page number. The default gives files ‘Rplot001.fig’, ..., ‘Rplot999.fig’, ‘Rplot1000.fig’, ....
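The expansion can be previewed with `sprintf()` directly, since the page number is supplied as the integer argument:

```r
## how the default pattern expands for successive pages
sprintf("Rplot%03d.fig", c(1, 42, 1000))
# "Rplot001.fig" "Rplot042.fig" "Rplot1000.fig"
```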
Line widths as controlled by `par(lwd =)` are in multiples of 5/6\*1/72 inch. Multiples less than 1 are allowed. `pch = "."` with `cex = 1` corresponds to a square of side 1/72 inch.
Windows users can make use of WinFIG (<http://www.schmidt-web-berlin.de/WinFIG.htm>, shareware), or XFig under Cygwin.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is the paper size with a 0.25 inch border on all sides.
* Font sizes are in big points.
* The default font family is Helvetica.
* Line widths are integers, multiples of 5/432 inch.
* Circle radii are multiples of 1/1200 inch.
* Colours are interpreted by the viewing/printing application.
### Note
Only some line textures (`0 <= lty < 4`) are used. Eventually this may be partially remedied, but the XFig file format does not allow as general line textures as the **R** model. Unimplemented line textures are displayed as *dash-double-dotted*.
There is a limit of 512 colours (plus white and black) per file.
### Author(s)
Brian Ripley. Support for `defaultFont` and `textSpecial` contributed by Sebastian Fischmeister.
### See Also
`[Devices](devices)`, `<postscript>`, `<ps.options>`.
`dev.capabilities` Query Capabilities of the Current Graphics Device
---------------------------------------------------------------------
### Description
Query the capabilities of the current graphics device.
### Usage
```
dev.capabilities(what = NULL)
```
### Arguments
| | |
| --- | --- |
| `what` | a character vector partially matching the names of the components listed in section ‘Value’, or `NULL` which lists all available capabilities. |
### Details
The capabilities have to be specified by the author of the graphics device, unless they can be deduced from missing hooks. Thus they will often be returned as `NA`, and may reflect the maximal capabilities of the underlying device where several output formats are supported by one device.
Most recent devices support semi-transparent colours provided the graphics format does (which PostScript does not). On the other hand, relatively few graphics formats support (fully or semi-) transparent backgrounds: generally the latter is found only in PDF and PNG plots.
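For example, the capabilities of a `pdf()` device can be queried headlessly (a sketch; the values reported depend on the device and the R build):

```r
## open a PDF device on a temporary file and query two capabilities
f <- tempfile(fileext = ".pdf")
pdf(f)
dev.capabilities(c("semiTransparency", "locator"))
dev.off()
unlink(f)
```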
### Value
A named list with some or all of the following components, any of which may take value `NA`:
| | |
| --- | --- |
| `semiTransparency` | logical: Does the device support semi-transparent colours? |
| `transparentBackground` | character: Does the device support (semi)-transparent backgrounds? Possible values are `"no"`, `"fully"` (only full transparency) and `"semi"` (semi-transparent background colours are supported). |
| `rasterImage` | character: To what extent does the device support raster images as used by `[rasterImage](../../graphics/html/rasterimage)` and `[grid.raster](../../grid/html/grid.raster)`? Possible values `"no"`, `"yes"` and `"non-missing"` (support only for arrays without any missing values). |
| `capture` | logical: Does the current device support raster capture as used by `[grid.cap](../../grid/html/grid.cap)`? |
| `locator` | logical: Does the current device support `[locator](../../graphics/html/locator)` and `[identify](../../graphics/html/identify)`? |
| `events` | character: Which events can be generated on this device? Currently this will be a subset of `c("MouseDown", "MouseMove", "MouseUp", "Keybd")`, but other events may be supported in the future. |
### See Also
See `[getGraphicsEvent](getgraphicsevent)` for details on interactive events.
### Examples
```
dev.capabilities()
```
`contourLines` Calculate Contour Lines
---------------------------------------
### Description
Calculate contour lines for a given set of data.
### Usage
```
contourLines(x = seq(0, 1, length.out = nrow(z)),
y = seq(0, 1, length.out = ncol(z)),
z, nlevels = 10,
levels = pretty(range(z, na.rm = TRUE), nlevels))
```
### Arguments
| | |
| --- | --- |
| `x, y` | locations of grid lines at which the values in `z` are measured. These must be in ascending order. By default, equally spaced values from 0 to 1 are used. If `x` is a `list`, its components `x$x` and `x$y` are used for `x` and `y`, respectively. If the list has component `z` this is used for `z`. |
| `z` | a matrix containing the values to be plotted (`NA`s are allowed). Note that `x` can be used instead of `z` for convenience. |
| `nlevels` | number of contour levels desired **iff** `levels` is not supplied. |
| `levels` | numeric vector of levels at which to draw contour lines. |
### Details
`contourLines` draws nothing, but returns a set of contour lines.
There is currently no documentation about the algorithm. The source code is in ‘[R\_HOME](../../base/html/rhome)/src/main/plot3d.c’.
### Value
A `[list](../../base/html/list)` of contours, each itself a `list` with elements:
| | |
| --- | --- |
| `level` | The contour level. |
| `x` | The x-coordinates of the contour. |
| `y` | The y-coordinates of the contour. |
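Since every contour shares this structure, the result is easily flattened, for instance into a single data frame for further processing (a small sketch):

```r
cl <- contourLines(volcano, levels = c(120, 140, 160))
## one row per contour vertex, tagged with its level
df <- do.call(rbind, lapply(cl, function(l)
    data.frame(level = l$level, x = l$x, y = l$y)))
table(df$level)  # number of vertices per level
```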
### See Also
`[options](../../base/html/options)("max.contour.segments")` for the maximal complexity of a single contour line.
`[contour](../../graphics/html/contour)`: Its ‘Examples’ demonstrate how `contourLines()` can be drawn and are the same (as those from `contour()`).
### Examples
```
x <- 10*1:nrow(volcano)
y <- 10*1:ncol(volcano)
cl <- contourLines(x, y, volcano)
## summarize the sizes of each the contour lines :
cbind(lev = vapply(cl, `[[`, .5, "level"),
n = vapply(cl, function(l) length(l$x), 1))
z <- outer(-9:25, -9:25)
pretty(range(z), 10) # -300 -200 ... 600 700
utils::str(c2 <- contourLines(z))
# no segments for {-300, 700};
# 2 segments for {-200, -100, 0}
# 1 segment for 100:600
```
`x11` X Window System Graphics (X11)
-------------------------------------
### Description
On Windows,
the `X11()` and `x11()` functions are simple wrappers to `<windows>()` for compatibility convenience: calling `x11()` or `X11()` will work in most cases to open an interactive graphics device.
In **R** versions before 3.6.0, the Windows version had a shorter list of formal arguments. Consequently, calls to `X11(*)` with arguments should *name* them for back compatibility.
Almost all information below does *not* apply on Windows.
On Unix-alikes,
`X11` starts a graphics device driver for the X Window System (version 11). This can only be done on machines/accounts that have access to an X server.
`x11` is recognized as a synonym for `X11`.
The **R** function is a wrapper for two devices, one based on Xlib (<https://en.wikipedia.org/wiki/Xlib>) and one using cairographics (<https://www.cairographics.org>).
### Usage
```
X11(display = "", width, height, pointsize, gamma, bg, canvas,
fonts, family, xpos, ypos, title, type, antialias, symbolfamily)
X11.options(..., reset = FALSE)
```
### Arguments
| | |
| --- | --- |
| `display` | the display on which the graphics window will appear. The default is to use the value in the user's environment variable DISPLAY. This is ignored (with a warning) if an X11 device is already open on another display. |
| `width, height` | the width and height of the plotting window, in inches. If `NA`, taken from the resources and if not specified there defaults to `7` inches. See also ‘Resources’. |
| `pointsize` | the default pointsize to be used. Defaults to `12`. |
| `gamma` | gamma correction fudge factor. Colours in R are sRGB; if your monitor does not conform to sRGB, you might be able to improve things by tweaking this parameter to apply additional gamma correction to the RGB channels. By default 1 (no additional gamma correction). |
| `bg` | colour, the initial background colour. Default `"transparent"`. |
| `canvas` | colour. The colour of the canvas, which is visible only when the background colour is transparent. Should be an opaque colour (and any alpha value will be ignored). Default `"white"`. |
| `fonts` | for `type = "Xlib"` only: X11 font description strings into which weight, slant and size will be substituted. There are two, the first for fonts 1 to 4 and the second for font 5, the symbol font. See section ‘Fonts’. |
| `family` | The default family: a length-one character string. This is primarily intended for cairo-based devices, but for `type = "Xlib"`, the `[X11Fonts](x11fonts)()` database is used to map family names to `fonts` (and this argument takes precedence over that one). |
| `xpos, ypos` | integer: initial position of the top left corner of the window, in pixels. Negative values are from the opposite corner, e.g. `xpos = -100` says the top right corner should be 100 pixels from the right edge of the screen. If `NA` (the default), successive devices are cascaded in 20 pixel steps from the top left. See also ‘Resources’. |
| `title` | character string, up to 100 bytes. With the default, `""`, a suitable title is created internally. A C-style format for an integer will be substituted by the device number (see the `file` argument to `<postscript>` for further details). How non-ASCII titles are handled is implementation-dependent. |
| `type` | character string, one of `"Xlib"`, `"cairo"`, `"nbcairo"` or `"dbcairo"`. Only the first will be available if the system was compiled without support for cairographics. The default is `"cairo"` where **R** was built using `pangocairo` (so not usually on macOS), otherwise `"Xlib"`. |
| `antialias` | for cairo types, the type of anti-aliasing (if any) to be used. One of `c("default", "none", "gray", "subpixel")`. |
| `symbolfamily` | for cairo-based devices only: a length-one character string that specifies the font family to be used as the "symbol" font (e.g., for <plotmath> output). The default value is "default", which means that R will choose a default "symbol" font based on the graphics device capabilities. |
| `reset` | logical: should the arguments be reset to their ‘factory-fresh’ defaults? |
| `...` | Any of the arguments to `X11`, plus `colortype` and `maxcubesize` (see section ‘Colour Rendering’). |
### Details
The defaults for all of the arguments of `X11` are set by `X11.options`: the ‘Arguments’ section gives the ‘factory-fresh’ defaults.
The initial size and position are only hints, and may not be acted on by the window manager. Also, some systems (especially laptops) are set up to appear to have a screen of a different size to the physical screen.
Option `type` selects between two separate devices: **R** can be built with support for neither, for `type = "Xlib"` only, or for both. Where both are available, types `"cairo"`, `"nbcairo"` and `"dbcairo"` offer
* antialiasing of text and lines.
* translucent colours.
* scalable text, including to sizes like 4.5 pt.
* full support for UTF-8, so on systems with suitable fonts you can plot in many languages on a single figure (and this will work even in non-UTF-8 locales). The output should be locale-independent.
There are three variants of the cairo-based device. `type = "nbcairo"` has no buffering. `type = "cairo"` has some buffering, and supports `[dev.hold](dev.flush)` and `dev.flush`. `type = "dbcairo"` buffers output and updates the screen about every 100ms (by default). The refresh interval can be set (in units of seconds) by e.g. `[options](../../base/html/options)(X11updates = 0.25)`: the value is consulted when a device is opened. Updates are only looked for every 50ms (at most), and during heavy graphics computations only every 500ms.
Which version will be fastest depends on the X11 connection and the type of plotting. You will probably want to use a buffered type unless backing store is in use on the X server (which for example it always is on the `x86_64` macOS XQuartz server), as otherwise repainting when the window is exposed will be slow. On slow connections `type = "dbcairo"` will probably give the best performance.
Because of known problems with font selection on macOS without Pango (for example, the CRAN distribution), `type = "cairo"` is not the default there. These problems have included mixing up bold and italic (since worked around), selecting incorrect glyphs and ugly or missing symbol glyphs.
All devices which use an X11 server (including the `type = "Xlib"` versions of bitmap devices such as `<png>`) share internal structures, which means that they must use the same `display` and visual. If you want to change display, first close all such devices.
The cursor shown indicates the state of the device. If quiescent the cursor is an arrow: when the locator is in use it is a crosshair cursor, and when plotting computations are in progress (and this can be detected) it is a watch cursor. (The exact cursors displayed will depend on the window manager in use.)
### X11 Fonts
This section applies only to `type = "Xlib"`.
An initial/default font family for the device can be specified via the `fonts` argument, but if a device-independent R graphics font family is specified (e.g., via `par(family =)` in the graphics package), the X11 device makes use of the X11 font database (see `X11Fonts`) to convert the R graphics font family to an X11-specific font family description. If `family` is supplied as an argument, the X11 font database is used to convert that, but otherwise the argument `fonts` (with default given by `X11.options`) is used.
X11 chooses fonts by matching to a pattern, and it is quite possible that it will choose a font in the wrong encoding or which does not contain glyphs for your language (particularly common in `iso10646-1` fonts).
The `fonts` argument is a two-element character vector, and the first element will be crucial in successfully using non-Western-European fonts. Settings that have proved useful include
`"-*-mincho-%s-%s-*-*-%d-*-*-*-*-*-*-*"` for CJK languages and `"-cronyx-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*"` for Russian.
For UTF-8 locales, the `XLC_LOCALE` databases provide mappings between character encodings, and you may need to add an entry for your locale (e.g., Fedora Core 3 lacked one for `ru_RU.utf8`).
### Cairo Fonts
The cairographics-based devices work directly with font family names such as `"Helvetica"` which can be selected initially by the `family` argument and subsequently by `[par](../../graphics/html/par)` or `[gpar](../../grid/html/gpar)`. There are mappings for the three device-independent font families, `"sans"` for a sans-serif font (to `"Helvetica"`), `"serif"` for a serif font (to `"Times"`) and `"mono"` for a monospaced font (to `"Courier"`).
The font selection is handled by `Pango` (usually *via* `fontconfig`) or `fontconfig` (on macOS and perhaps elsewhere). The results depend on the fonts installed on the system running **R** – setting the environment variable FC\_DEBUG to 1 normally allows some tracing of the selection process.
This works best when high-quality scalable fonts are installed, usually in Type 1 or TrueType formats: see the ‘R Installation and Administration’ manual for advice on how to obtain and install such fonts. At present the best rendering (including using kerning) will be achieved with TrueType fonts: see <https://www.freedesktop.org/software/fontconfig/fontconfig-user.html> for ways to set up your system to prefer them. The default family (`"Helvetica"`) is likely not to use kerning: alternatives which should if you have them installed are `"Arial"`, `"DejaVu Sans"` and `"Liberation Sans"` (and perhaps `"FreeSans"`). For those who prefer fonts with serifs, try `"Times New Roman"`, `"DejaVu Serif"` and `"Liberation Serif"`. To match LaTeX text, use something like `"CM Roman"`.
Fedora systems from version 31 on do not like the default `"symbol"` font family for rendering symbols (e.g., <plotmath>). For those systems, users should specify a different font via `symbolfamily`. The default can also be changed via `X11.options`.
Problems with incorrect rendering of symbols (e.g., of `quote(pi)` and `expression(10^degree)`) have been seen on Linux systems which have the Wine symbol font installed – `fontconfig` then prefers this and misinterprets its encoding. Adding the following lines to ‘~/.fonts.conf’ or ‘/etc/fonts/local.conf’ may circumvent this problem by preferring the URW Type 1 symbol font.
```
<fontconfig>
<match target="pattern">
<test name="family"><string>Symbol</string></test>
<edit name="family" mode="prepend" binding="same">
<string>Standard Symbols L</string>
</edit>
</match>
</fontconfig>
```
A test for this is to run at the command line `fc-match Symbol`. If that shows `symbol.ttf` that may be the Wine symbol font – use `locate symbol.ttf` to see if it is found from a directory with wine in the name.
### Resources
The standard X11 resource `geometry` can be used to specify the window position and/or size, but will be overridden by values specified as arguments or non-`NA` defaults set in `X11.options`. The class looked for is `R_x11`. Note that the resource specifies the width and height in pixels and not in inches. See for example man X (or <https://www.x.org/releases/current/>). An example line in ‘~/.Xresources’ might be
```
R_x11*geometry: 900x900-0+0
```
which specifies a 900 x 900 pixel window at the top right of the screen.
### Colour Rendering
X11 supports several ‘visual’ types, and nowadays almost all systems support ‘truecolor’ which `X11` will use by default. This uses a direct specification of any RGB colour up to the depth supported (usually 8 bits per colour). Other visuals make use of a palette to support fewer colours, only grays or even only black/white. The palette is shared between all X11 clients, so it can be necessary to limit the number of colours used by **R**.
The default for `type = "Xlib"` is to use the best possible colour model for the visual of the X11 server: these days this will almost always be ‘truecolor’. This can be overridden by the `colortype` argument of `X11.options`. **Note:** All `X11` and `type = "Xlib"` `[bmp](png)`, `jpeg`, `png` and `tiff` devices share a `colortype` which is set when the first such device is opened. To change the `colortype` you need to close *all* open such devices, and then use `X11.options(colortype =)`.
The colortype types are tried in the order `"true"`, `"pseudo"`, `"gray"` and `"mono"` (black or white only). The values `"pseudo"` and `"pseudo.cube"` provide two colour strategies for a pseudocolor visual. The first strategy provides on-demand colour allocation which produces exact colours until the colour resources of the display are exhausted (when plotting will fail). The second allocates (if possible) a standard colour cube, and requested colours are approximated by the closest value in the cube.
With `colortype` equal to `"pseudo.cube"` or `"gray"` successively smaller palettes are tried until one is completely allocated. If allocation of the smallest attempt fails the device will revert to `"mono"`. For `"gray"` the search starts at 256 grays for a display with depth greater than 8, otherwise with half the available colours. For `"pseudo.cube"` the maximum cube size is set by `X11.options(maxcubesize =)` and defaults to 256. With that setting the largest cube tried is 4 levels each for RGB, using 64 colours in the palette.
The cairographics-based devices most likely only work (or work correctly) with ‘TrueColor’ visuals, although in principle this depends on the cairo installation: a warning is given if any other visual is encountered.
`type = "Xlib"` supports ‘TrueColor’, ‘PseudoColor’, ‘GrayScale’, `StaticGray` and `MonoChrome` visuals: ‘StaticColor’ and ‘DirectColor’ visuals are handled only in black/white.
### Anti-aliasing
Anti-aliasing is only supported for cairographics-based devices, and applies to both graphics and fonts. It is generally preferable for lines and text, but can lead to undesirable effects for fills, e.g. for `[image](../../graphics/html/image)` plots, and so is never used for fills.
`antialias = "default"` is in principle platform-dependent, but seems most often equivalent to `antialias = "gray"`.
### Conventions
This section describes the implementation of the conventions for graphics devices set out in the ‘R Internals’ manual.
* The default device size is 7 inches square.
* Font sizes are in big points.
* The default font family is Helvetica.
* Line widths in 1/96 inch, minimum one pixel for `type = "Xlib"`, 0.01 otherwise.
* For `type = "Xlib"` circle radii are in pixels with minimum one.
* Colours are interpreted by the X11 server, which is *assumed* to conform to sRGB.
### Warning
Support for all the Unix devices is optional, so in packages `X11()` should be used conditionally after checking `[capabilities](../../base/html/capabilities)("X11")`.
### See Also
`[Devices](devices)`, `[X11Fonts](x11fonts)`, `[savePlot](saveplot)`.
### Examples
```
## Not run:
if(.Platform$OS.type == "unix") { # Only on unix-alikes, possibly Mac,
## put something like this is your .Rprofile to customize the defaults
setHook(packageEvent("grDevices", "onLoad"),
function(...) grDevices::X11.options(width = 8, height = 6, xpos = 0,
pointsize = 10))
}
## End(Not run)
```
`make.rgb` Create colour spaces
--------------------------------
### Description
These functions specify colour spaces for use in `[convertColor](convertcolor)`.
### Usage
```
make.rgb(red, green, blue, name = NULL, white = "D65",
gamma = 2.2)
colorConverter(toXYZ, fromXYZ, name, white = NULL, vectorized = FALSE)
```
### Arguments
| | |
| --- | --- |
| `red,green,blue` | Chromaticity (xy or xyY) of RGB primaries |
| `name` | Name for the colour space |
| `white` | Character string specifying the reference white (see ‘Details’.) |
| `gamma` | Display gamma (nonlinearity). A positive number or the string `"sRGB"` |
| `fromXYZ` | Function to convert from XYZ tristimulus coordinates to this space |
| `toXYZ` | Function to convert from this space to XYZ tristimulus coordinates. |
| `vectorized` | Whether `fromXYZ` and `toXYZ` are vectorized internally to handle input color matrices. |
### Details
An RGB colour space is defined by the chromaticities of the red, green and blue primaries. These are given as vectors of length 2 or 3 in xyY coordinates (the Y component is not used and may be omitted). The chromaticities are defined relative to a reference white, which must be one of the CIE standard illuminants: `"A"`, `"B"`, `"C"`, `"D50"`, `"D55"`, `"D65"`, `"E"` (usually `"D65"`).
The display gamma is most commonly 2.2, though 1.8 is used for Apple RGB. The sRGB standard specifies a more complicated function that is close to a gamma of 2.2; `gamma = "sRGB"` uses this function.
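A sketch using sRGB-like primaries (an illustrative choice, not a definition of sRGB) together with `gamma = "sRGB"`; a round trip through XYZ should recover the input up to numerical error:

```r
mysrgb <- make.rgb(red   = c(0.6400, 0.3300),
                   green = c(0.3000, 0.6000),
                   blue  = c(0.1500, 0.0600),
                   gamma = "sRGB", name = "my sRGB")
x <- matrix(c(0.2, 0.4, 0.6), nrow = 1)  # one colour, channels in [0, 1]
xyz  <- convertColor(x, from = mysrgb, to = "XYZ")
back <- convertColor(xyz, from = "XYZ", to = mysrgb)
all.equal(c(back), c(x), tolerance = 1e-6)
```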
Colour spaces other than RGB can be specified directly by giving conversions to and from XYZ tristimulus coordinates. The functions should take two arguments. The first is a vector giving the coordinates for one colour. The second argument is the reference white. If a specific reference white is included in the definition of the colour space (as for the RGB spaces) this second argument should be ignored and may be `...`.
As of R 3.6.0 the built in color converters along with `[convertColor](convertcolor)` were vectorized to process three column color matrices in one call, instead of row by row via `[apply](../../base/html/apply)`. In order to maintain backwards compatibility, `colorConverter` wraps `fromXYZ` and `toXYZ` in a `apply` loop in case they do not also support matrix inputs. If the `fromXYZ` and `toXYZ` functions you are using operate correctly on the whole color matrix at once instead of row by row, you can set `vectorized=TRUE` for a performance improvement.
### Value
An object of class `colorConverter`
### References
Conversion algorithms from <http://www.brucelindbloom.com>.
### See Also
`[convertColor](convertcolor)`
### Examples
```
(pal <- make.rgb(red = c(0.6400, 0.3300),
green = c(0.2900, 0.6000),
blue = c(0.1500, 0.0600),
name = "PAL/SECAM RGB"))
## converter for sRGB in #rrggbb format
hexcolor <- colorConverter(toXYZ = function(hex, ...) {
rgb <- t(col2rgb(hex))/255
colorspaces$sRGB$toXYZ(rgb, ...) },
fromXYZ = function(xyz, ...) {
rgb <- colorspaces$sRGB$fromXYZ(xyz, ...)
rgb <- round(rgb, 5)
if (min(rgb) < 0 || max(rgb) > 1)
as.character(NA)
else rgb(rgb[1], rgb[2], rgb[3])},
white = "D65", name = "#rrggbb")
(cols <- t(col2rgb(palette())))
zapsmall(luv <- convertColor(cols, from = "sRGB", to = "Luv", scale.in = 255))
(hex <- convertColor(luv, from = "Luv", to = hexcolor, scale.out = NULL))
## must make hex a matrix before using it
(cc <- round(convertColor(as.matrix(hex), from = hexcolor, to = "sRGB",
scale.in = NULL, scale.out = 255)))
stopifnot(cc == cols)
## Internally vectorized version of hexcolor, notice the use
## of `vectorized = TRUE`:
hexcolorv <- colorConverter(toXYZ = function(hex, ...) {
rgb <- t(col2rgb(hex))/255
colorspaces$sRGB$toXYZ(rgb, ...) },
fromXYZ = function(xyz, ...) {
rgb <- colorspaces$sRGB$fromXYZ(xyz, ...)
rgb <- round(rgb, 5)
oob <- pmin(rgb[,1], rgb[,2], rgb[,3]) < 0 |
       pmax(rgb[,1], rgb[,2], rgb[,3]) > 1
res <- rep(NA_character_, nrow(rgb))
res[!oob] <- rgb(rgb[!oob, , drop = FALSE])
res },
white = "D65", name = "#rrggbb",
vectorized=TRUE)
(ccv <- round(convertColor(as.matrix(hex), from = hexcolorv, to = "sRGB",
scale.in = NULL, scale.out = 255)))
stopifnot(ccv == cols)
```
r None
`postscriptFonts` PostScript and PDF Font Families
---------------------------------------------------
### Description
These functions handle the translation of an **R** graphics font family name to a PostScript or PDF font description, used by the `<postscript>` or `<pdf>` graphics devices.
### Usage
```
postscriptFonts(...)
pdfFonts(...)
```
### Arguments
| | |
| --- | --- |
| `...` | either character strings naming mappings to display, or named arguments specifying mappings to add or change. |
### Details
If these functions are called with no argument they list all the existing mappings, whereas if they are called with named arguments they add (or change) mappings.
A PostScript or PDF device is created with a default font family (see the documentation for `<postscript>`), but it is also possible to specify a font family when drawing to the device (for example, see the documentation for `"family"` in `[par](../../graphics/html/par)` and for `"fontfamily"` in `[gpar](../../grid/html/gpar)` in the grid package).
The font family sent to the device is a simple string name, which must be mapped to a set of PostScript fonts. Separate lists of mappings for `postscript` and `pdf` devices are maintained for the current **R** session and can be added to by the user.
The `postscriptFonts` and `pdfFonts` functions can be used to list existing mappings and to define new mappings. The `[Type1Font](type1font)` and `[CIDFont](type1font)` functions can be used to create new mappings, when the `xxxFonts` function is used to add them to the database. See the examples.
Default mappings are provided for three device-independent family names: `"sans"` for a sans-serif font (to `"Helvetica"`), `"serif"` for a serif font (to `"Times"`) and `"mono"` for a monospaced font (to `"Courier"`).
Mappings for a number of standard Adobe fonts (and URW equivalents) are also provided: `"AvantGarde"`, `"Bookman"`, `"Courier"`, `"Helvetica"`, `"Helvetica-Narrow"`, `"NewCenturySchoolbook"`, `"Palatino"` and `"Times"`; `"URWGothic"`, `"URWBookman"`, `"NimbusMon"`, `"NimbusSan"` (synonym `"URWHelvetica"`), `"NimbusSanCond"`, `"CenturySch"`, `"URWPalladio"` and `"NimbusRom"` (synonym `"URWTimes"`).
There are also mappings for `"ComputerModern"`, `"ComputerModernItalic"` and `"ArialMT"` (Monotype Arial).
Finally, there are some default mappings for East Asian locales described in a separate section.
The specification of font metrics and encodings is described in the help for the `<postscript>` function.
The fonts are not embedded in the resulting PostScript or PDF file, so software including the PostScript or PDF plot file should either embed the font outlines (usually from ‘.pfb’ or ‘.pfa’ files) or use DSC comments to instruct the print spooler or including application to do so (see also `[embedFonts](embedfonts)`).
A font family has both an **R**-level name, the argument name used when `postscriptFonts` was called, and an internal name, the `family` component. These two names are the same for all the pre-defined font families.
Once a font family is in use it cannot be changed. ‘In use’ means that it has been specified *via* a `family` or `fonts` argument to an invocation of the same graphics device already in the **R** session. (For these purposes `xfig` counts the same as `postscript` but only uses some of the predefined mappings.)
### Value
A list of one or more font mappings.
### East Asian fonts
There are some default mappings for East Asian locales:
`"Japan1"`, `"Japan1HeiMin"`, `"Japan1GothicBBB"`, and `"Japan1Ryumin"` for Japanese; `"Korea1"` and `"Korea1deb"` for Korean; `"GB1"` (Simplified Chinese) for mainland China and Singapore; `"CNS1"` (Traditional Chinese) for Hong Kong and Taiwan.
These refer to the following fonts
| | |
| --- | --- |
| Japan1 (PS) | `HeiseiKakuGo-W5` |
| | Linotype Japanese printer font |
| Japan1 (PDF) | `KozMinPro-Regular-Acro` |
| | from Adobe Reader 7.0 Japanese Font Pack |
| Japan1HeiMin (PS) | `HeiseiMin-W3` |
| | Linotype Japanese printer font |
| Japan1HeiMin (PDF) | `HeiseiMin-W3-Acro` |
| | from Adobe Reader 7.0 Japanese Font Pack |
| Japan1GothicBBB | `GothicBBB-Medium` |
| | Japanese-market PostScript printer font |
| Japan1Ryumin | `Ryumin-Light` |
| | Japanese-market PostScript printer font |
| Korea1 (PS) | `Baekmuk-Batang` |
| | TrueType font found on some Linux systems |
| Korea1 (PDF) | `HYSMyeongJoStd-Medium-Acro` |
| | from Adobe Reader 7.0 Korean Font Pack |
| Korea1deb (PS) | `Batang-Regular` |
| | another name for Baekmuk-Batang |
| Korea1deb (PDF) | `HYGothic-Medium-Acro` |
| | from Adobe Reader 4.0 Korean Font Pack |
| GB1 (PS) | `BousungEG-Light-GB` |
| | TrueType font found on some Linux systems |
| GB1 (PDF) | `STSong-Light-Acro` |
| | from Adobe Reader 7.0 Simplified Chinese Font Pack |
| CNS1 (PS) | `MOESung-Regular` |
| | Ken Lunde's CJKV resources |
| CNS1 (PDF) | `MSungStd-Light-Acro` |
| | from Adobe Reader 7.0 Traditional Chinese Font Pack |
| |
`BousungEG-Light-GB` can be found at <https://ftp.gnu.org/pub/non-gnu/chinese-fonts-truetype/>. These will need to be installed or otherwise made available to the postscript/PDF interpreter such as ghostscript (and not all interpreters can handle TrueType fonts).
You may well find that your postscript/PDF interpreter has been set up to provide aliases for many of these fonts. For example, ghostscript on Windows can optionally be installed to map common East Asian font names to Windows TrueType fonts. (You may want to add the `-Acro` versions as well.)
Adding a mapping for a CID-keyed font is for gurus only.
### Author(s)
Support for Computer Modern fonts is based on a contribution by Brian D'Urso.
### See Also
`<postscript>` and `<pdf>`; `[Type1Font](type1font)` and `[CIDFont](type1font)` for specifying new font mappings.
### Examples
```
postscriptFonts()
## This duplicates "ComputerModernItalic".
CMitalic <- Type1Font("ComputerModern2",
c("CM_regular_10.afm", "CM_boldx_10.afm",
"cmti10.afm", "cmbxti10.afm",
"CM_symbol_10.afm"),
encoding = "TeXtext.enc")
postscriptFonts(CMitalic = CMitalic)
## A CID font for Japanese using a different CMap and
## corresponding cmapEncoding.
`Jp_UCS-2` <- CIDFont("TestUCS2",
c("Adobe-Japan1-UniJIS-UCS2-H.afm",
"Adobe-Japan1-UniJIS-UCS2-H.afm",
"Adobe-Japan1-UniJIS-UCS2-H.afm",
"Adobe-Japan1-UniJIS-UCS2-H.afm"),
"UniJIS-UCS2-H", "UCS-2")
pdfFonts(`Jp_UCS-2` = `Jp_UCS-2`)
names(pdfFonts())
```
r None
`msgWindow` Manipulate a Window
--------------------------------
### Description
`msgWindow` sends a message to manipulate the specified screen device's window. With argument `which = -1` it applies to the GUI console (which only accepts the first three actions).
### Usage
```
msgWindow(type = c("minimize", "restore", "maximize",
"hide", "recordOn", "recordOff"),
which = dev.cur())
```
### Arguments
| | |
| --- | --- |
| `type` | action to be taken. |
| `which` | a device number, or `-1`. |
### See Also
`[bringToTop](bringtotop)`, `<windows>`
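### Examples

A minimal sketch (Windows only, so it is guarded by a platform check and not run elsewhere):

```
if (.Platform$OS.type == "windows") {
    windows()                          # open a screen device
    plot(1:10)
    msgWindow("minimize")              # minimize the plot window
    msgWindow("restore")               # restore it
    msgWindow("minimize", which = -1)  # minimize the GUI console
}
```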
r None
`boxplot.stats` Box Plot Statistics
------------------------------------
### Description
This function is typically called by another function to gather the statistics necessary for producing box plots, but may be invoked separately.
### Usage
```
boxplot.stats(x, coef = 1.5, do.conf = TRUE, do.out = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector for which the boxplot will be constructed (`[NA](../../base/html/na)`s and `[NaN](../../base/html/is.finite)`s are allowed and omitted). |
| `coef` | this determines how far the plot ‘whiskers’ extend out from the box. If `coef` is positive, the whiskers extend to the most extreme data point which is no more than `coef` times the length of the box away from the box. A value of zero causes the whiskers to extend to the data extremes (and no outliers be returned). |
| `do.conf, do.out` | logicals; if `FALSE`, the `conf` or `out` component respectively will be empty in the result. |
### Details
The two ‘hinges’ are versions of the first and third quartile, i.e., close to `[quantile](../../stats/html/quantile)(x, c(1,3)/4)`. The hinges equal the quartiles for odd *n* (where `n <- length(x)`) and differ for even *n*. Whereas the quartiles only equal observations for `n %% 4 == 1` (*n = 1 mod 4*), the hinges do so *additionally* for `n %% 4 == 2` (*n = 2 mod 4*), and are in the middle of two observations otherwise.
The notches (if requested) extend to `+/-1.58 IQR/sqrt(n)`. This seems to be based on the same calculations as the formula with 1.57 in Chambers *et al* (1983, p. 62), given in McGill *et al* (1978, p. 16). They are based on asymptotic normality of the median and roughly equal sample sizes for the two medians being compared, and are said to be rather insensitive to the underlying distributions of the samples. The idea appears to be to give roughly a 95% confidence interval for the difference in two medians.
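The hinge/quartile relationship described above can be seen with a small illustrative sample (here *n = 6*, so `n %% 4 == 2` and the hinges, unlike the quartiles, fall on observations):

```
x <- 1:6                 # n = 6, so n %% 4 == 2
fivenum(x)               # hinges at 2 and 5, both observations
quantile(x, c(1, 3)/4)   # quartiles 2.25 and 4.75, interpolated
boxplot.stats(x)$stats   # uses the hinges: 1 2 3.5 5 6
```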
### Value
List with named components as follows:
| | |
| --- | --- |
| `stats` | a vector of length 5, containing the extreme of the lower whisker, the lower ‘hinge’, the median, the upper ‘hinge’ and the extreme of the upper whisker. |
| `n` | the number of non-`NA` observations in the sample. |
| `conf` | the lower and upper extremes of the ‘notch’ (`if(do.conf)`). See the details. |
| `out` | the values of any data points which lie beyond the extremes of the whiskers (`if(do.out)`). |
Note that `$stats` and `$conf` are sorted in *in*creasing order, unlike S, and that `$n` and `$out` include any `+- Inf` values.
### References
Tukey, J. W. (1977). *Exploratory Data Analysis*. Section 2C.
McGill, R., Tukey, J. W. and Larsen, W. A. (1978). Variations of box plots. *The American Statistician*, **32**, 12–16. doi: [10.2307/2683468](https://doi.org/10.2307/2683468).
Velleman, P. F. and Hoaglin, D. C. (1981). *Applications, Basics and Computing of Exploratory Data Analysis*. Duxbury Press.
Emerson, J. D. and Strenio, J. (1983). Boxplots and batch comparison. Chapter 3 of *Understanding Robust and Exploratory Data Analysis*, eds. D. C. Hoaglin, F. Mosteller and J. W. Tukey. Wiley.
Chambers, J. M., Cleveland, W. S., Kleiner, B. and Tukey, P. A. (1983). *Graphical Methods for Data Analysis*. Wadsworth & Brooks/Cole.
### See Also
`[fivenum](../../stats/html/fivenum)`, `[boxplot](../../graphics/html/boxplot)`, `[bxp](../../graphics/html/bxp)`.
### Examples
```
require(stats)
x <- c(1:100, 1000)
(b1 <- boxplot.stats(x))
(b2 <- boxplot.stats(x, do.conf = FALSE, do.out = FALSE))
stopifnot(b1 $ stats == b2 $ stats) # do.out = FALSE is still robust
boxplot.stats(x, coef = 3, do.conf = FALSE)
## no outlier treatment:
boxplot.stats(x, coef = 0)
boxplot.stats(c(x, NA)) # slight change : n is 101
(r <- boxplot.stats(c(x, -1:1/0)))
stopifnot(r$out == c(1000, -Inf, Inf))
```
r None
`multiedit` Multiedit for k-NN Classifier
------------------------------------------
### Description
Multiedit for k-NN classifier
### Usage
```
multiedit(x, class, k = 1, V = 3, I = 5, trace = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | matrix of training set. |
| `class` | vector of classification of training set. |
| `k` | number of neighbours used in k-NN. |
| `V` | divide training set into V parts. |
| `I` | number of null passes before quitting. |
| `trace` | logical for statistics at each pass. |
### Value
Index vector of cases to be retained.
### References
P. A. Devijver and J. Kittler (1982) *Pattern Recognition. A Statistical Approach.* Prentice-Hall, p. 115.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<condense>`, `<reduce.nn>`
### Examples
```
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep(1,25),rep(2,25), rep(3,25)), labels=c("s", "c", "v"))
table(cl, knn(train, test, cl, 3))
ind1 <- multiedit(train, cl, 3)
length(ind1)
table(cl, knn(train[ind1, , drop=FALSE], test, cl[ind1], 1))
ntrain <- train[ind1,]; ncl <- cl[ind1]
ind2 <- condense(ntrain, ncl)
length(ind2)
table(cl, knn(ntrain[ind2, , drop=FALSE], test, ncl[ind2], 1))
```
r None
`lvq2` Learning Vector Quantization 2.1
----------------------------------------
### Description
Moves examples in a codebook to better represent the training set.
### Usage
```
lvq2(x, cl, codebk, niter = 100 * nrow(codebk$x), alpha = 0.03,
win = 0.3)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame of examples |
| `cl` | a vector or factor of classifications for the examples |
| `codebk` | a codebook |
| `niter` | number of iterations |
| `alpha` | constant for training |
| `win` | a tolerance for the closeness of the two nearest vectors. |
### Details
Selects `niter` examples at random with replacement, and adjusts the nearest two examples in the codebook if one is correct and the other incorrect.
### Value
A codebook, represented as a list with components `x` and `cl` giving the examples and classes.
### References
Kohonen, T. (1990) The self-organizing map. *Proc. IEEE* **78**, 1464–1480.
Kohonen, T. (1995) *Self-Organizing Maps.* Springer, Berlin.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<lvqinit>`, `<lvq1>`, `<olvq1>`, `<lvq3>`, `<lvqtest>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
cd <- lvqinit(train, cl, 10)
lvqtest(cd, train)
cd0 <- olvq1(train, cl, cd)
lvqtest(cd0, train)
cd2 <- lvq2(train, cl, cd0)
lvqtest(cd2, train)
```
r None
`condense` Condense training set for k-NN classifier
-----------------------------------------------------
### Description
Condense training set for k-NN classifier
### Usage
```
condense(train, class, store, trace = TRUE)
```
### Arguments
| | |
| --- | --- |
| `train` | matrix for training set |
| `class` | vector of classifications for training set |
| `store` | initial store set. Default one randomly chosen element of the set. |
| `trace` | logical. Trace iterations? |
### Details
The store set is used to 1-NN classify the rest, and misclassified patterns are added to the store set. The whole set is checked until no additions occur.
### Value
Index vector of cases to be retained (the final store set).
### References
P. A. Devijver and J. Kittler (1982) *Pattern Recognition. A Statistical Approach.* Prentice-Hall, pp. 119–121.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<reduce.nn>`, `<multiedit>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
keep <- condense(train, cl)
knn(train[keep, , drop=FALSE], test, cl[keep])
keep2 <- reduce.nn(train, keep, cl)
knn(train[keep2, , drop=FALSE], test, cl[keep2])
```
r None
`lvq3` Learning Vector Quantization 3
--------------------------------------
### Description
Moves examples in a codebook to better represent the training set.
### Usage
```
lvq3(x, cl, codebk, niter = 100*nrow(codebk$x), alpha = 0.03,
win = 0.3, epsilon = 0.1)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame of examples |
| `cl` | a vector or factor of classifications for the examples |
| `codebk` | a codebook |
| `niter` | number of iterations |
| `alpha` | constant for training |
| `win` | a tolerance for the closeness of the two nearest vectors. |
| `epsilon` | proportion of move for correct vectors |
### Details
Selects `niter` examples at random with replacement, and adjusts the nearest two examples in the codebook for each.
### Value
A codebook, represented as a list with components `x` and `cl` giving the examples and classes.
### References
Kohonen, T. (1990) The self-organizing map. *Proc. IEEE* **78**, 1464–1480.
Kohonen, T. (1995) *Self-Organizing Maps.* Springer, Berlin.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<lvqinit>`, `<lvq1>`, `<olvq1>`, `<lvq2>`, `<lvqtest>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
cd <- lvqinit(train, cl, 10)
lvqtest(cd, train)
cd0 <- olvq1(train, cl, cd)
lvqtest(cd0, train)
cd3 <- lvq3(train, cl, cd0)
lvqtest(cd3, train)
```
r None
`SOM` Self-Organizing Maps: Online Algorithm
---------------------------------------------
### Description
Kohonen's Self-Organizing Maps are a crude form of multidimensional scaling.
### Usage
```
SOM(data, grid = somgrid(), rlen = 10000, alpha, radii, init)
```
### Arguments
| | |
| --- | --- |
| `data` | a matrix or data frame of observations, scaled so that Euclidean distance is appropriate. |
| `grid` | A grid for the representatives: see `<somgrid>`. |
| `rlen` | the number of updates: used only in the defaults for `alpha` and `radii`. |
| `alpha` | the amount of change: one update is done for each element of `alpha`. Default is to decline linearly from 0.05 to 0 over `rlen` updates. |
| `radii` | the radii of the neighbourhood to be used for each update: must be the same length as `alpha`. Default is to decline linearly from 4 to 1 over `rlen` updates. |
| `init` | the initial representatives. If missing, chosen (without replacement) randomly from `data`. |
### Details
`alpha` and `radii` can also be lists, in which case each component is used in turn, allowing two- or more phase training.
### Value
An object of class `"SOM"` with components
| | |
| --- | --- |
| `grid` | the grid, an object of class `"somgrid"`. |
| `codes` | a matrix of representatives. |
### References
Kohonen, T. (1995) *Self-Organizing Maps.* Springer-Verlag
Kohonen, T., Hynninen, J., Kangas, J. and Laaksonen, J. (1996) *SOM PAK: The self-organizing map program package.* Laboratory of Computer and Information Science, Helsinki University of Technology, Technical Report A31.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<somgrid>`, `[batchSOM](batchsom)`
### Examples
```
require(graphics)
data(crabs, package = "MASS")
lcrabs <- log(crabs[, 4:8])
crabs.grp <- factor(c("B", "b", "O", "o")[rep(1:4, rep(50,4))])
gr <- somgrid(topo = "hexagonal")
crabs.som <- SOM(lcrabs, gr)
plot(crabs.som)
## 2-phase training
crabs.som2 <- SOM(lcrabs, gr,
alpha = list(seq(0.05, 0, len = 1e4), seq(0.02, 0, len = 1e5)),
radii = list(seq(8, 1, len = 1e4), seq(4, 1, len = 1e5)))
plot(crabs.som2)
```
r None
`knn.cv` k-Nearest Neighbour Cross-Validatory Classification
-------------------------------------------------------------
### Description
k-nearest neighbour cross-validatory classification from training set.
### Usage
```
knn.cv(train, cl, k = 1, l = 0, prob = FALSE, use.all = TRUE)
```
### Arguments
| | |
| --- | --- |
| `train` | matrix or data frame of training set cases. |
| `cl` | factor of true classifications of training set |
| `k` | number of neighbours considered. |
| `l` | minimum vote for definite decision, otherwise `doubt`. (More precisely, less than `k-l` dissenting votes are allowed, even if `k` is increased by ties.) |
| `prob` | If this is true, the proportion of the votes for the winning class is returned as attribute `prob`. |
| `use.all` | controls handling of ties. If true, all distances equal to the `k`th largest are included. If false, a random selection of distances equal to the `k`th is chosen to use exactly `k` neighbours. |
### Details
This uses leave-one-out cross validation. For each row of the training set `train`, the `k` nearest (in Euclidean distance) other training set vectors are found, and the classification is decided by majority vote, with ties broken at random. If there are ties for the `k`th nearest vector, all candidates are included in the vote.
### Value
Factor of classifications of training set. `doubt` will be returned as `NA`.
### References
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<knn>`
### Examples
```
train <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
cl <- factor(c(rep("s",50), rep("c",50), rep("v",50)))
knn.cv(train, cl, k = 3, prob = TRUE)
attributes(.Last.value)
```
r None
`knn1` 1-Nearest Neighbour Classification
------------------------------------------
### Description
Nearest neighbour classification for test set from training set. For each row of the test set, the nearest (by Euclidean distance) training set vector is found, and its classification used. If there is more than one nearest, a majority vote is used with ties broken at random.
### Usage
```
knn1(train, test, cl)
```
### Arguments
| | |
| --- | --- |
| `train` | matrix or data frame of training set cases. |
| `test` | matrix or data frame of test set cases. A vector will be interpreted as a row vector for a single case. |
| `cl` | factor of true classification of training set. |
### Value
Factor of classifications of test set.
### References
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<knn>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
knn1(train, test, cl)
```
r None
`batchSOM` Self-Organizing Maps: Batch Algorithm
-------------------------------------------------
### Description
Kohonen's Self-Organizing Maps are a crude form of multidimensional scaling.
### Usage
```
batchSOM(data, grid = somgrid(), radii, init)
```
### Arguments
| | |
| --- | --- |
| `data` | a matrix or data frame of observations, scaled so that Euclidean distance is appropriate. |
| `grid` | A grid for the representatives: see `<somgrid>`. |
| `radii` | the radii of the neighbourhood to be used for each pass: one pass is run for each element of `radii`. |
| `init` | the initial representatives. If missing, chosen (without replacement) randomly from `data`. |
### Details
The batch SOM algorithm of Kohonen (1995, section 3.14) is used.
### Value
An object of class `"SOM"` with components
| | |
| --- | --- |
| `grid` | the grid, an object of class `"somgrid"`. |
| `codes` | a matrix of representatives. |
### References
Kohonen, T. (1995) *Self-Organizing Maps.* Springer-Verlag.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<somgrid>`, `[SOM](som)`
### Examples
```
require(graphics)
data(crabs, package = "MASS")
lcrabs <- log(crabs[, 4:8])
crabs.grp <- factor(c("B", "b", "O", "o")[rep(1:4, rep(50,4))])
gr <- somgrid(topo = "hexagonal")
crabs.som <- batchSOM(lcrabs, gr, c(4, 4, 2, 2, 1, 1, 1, 0, 0))
plot(crabs.som)
bins <- as.numeric(knn1(crabs.som$codes, lcrabs, 0:47))
plot(crabs.som$grid, type = "n")
symbols(crabs.som$grid$pts[, 1], crabs.som$grid$pts[, 2],
circles = rep(0.4, 48), inches = FALSE, add = TRUE)
text(crabs.som$grid$pts[bins, ] + rnorm(400, 0, 0.1),
as.character(crabs.grp))
```
r None
`lvqinit` Initialize a LVQ Codebook
------------------------------------
### Description
Construct an initial codebook for LVQ methods.
### Usage
```
lvqinit(x, cl, size, prior, k = 5)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame of training examples, `n` by `p`. |
| `cl` | the classifications for the training examples. A vector or factor of length `n`. |
| `size` | the size of the codebook. Defaults to `min(round(0.4*ng*(ng-1 + p/2),0), n)` where `ng` is the number of classes. |
| `prior` | Probabilities to represent classes in the codebook. Default proportions in the training set. |
| `k` | k used for k-NN test of correct classification. Default is 5. |
### Details
Selects `size` examples from the training set without replacement with proportions proportional to the prior or the original proportions.
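For illustration, the default `size` formula works out as follows (a sketch with hypothetical values: `ng = 3` classes, `p = 4` variables and `n = 75` training rows):

```
ng <- 3; p <- 4; n <- 75
min(round(0.4 * ng * (ng - 1 + p/2), 0), n)   # default codebook size: 5
```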
### Value
A codebook, represented as a list with components `x` and `cl` giving the examples and classes.
### References
Kohonen, T. (1990) The self-organizing map. *Proc. IEEE* **78**, 1464–1480.
Kohonen, T. (1995) *Self-Organizing Maps.* Springer, Berlin.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<lvq1>`, `<lvq2>`, `<lvq3>`, `<olvq1>`, `<lvqtest>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
cd <- lvqinit(train, cl, 10)
lvqtest(cd, train)
cd1 <- olvq1(train, cl, cd)
lvqtest(cd1, train)
```
r None
`somgrid` Plot SOM Fits
------------------------
### Description
Plotting functions for SOM results.
### Usage
```
somgrid(xdim = 8, ydim = 6, topo = c("rectangular", "hexagonal"))
## S3 method for class 'somgrid'
plot(x, type = "p", ...)
## S3 method for class 'SOM'
plot(x, ...)
```
### Arguments
| | |
| --- | --- |
| `xdim, ydim` | dimensions of the grid |
| `topo` | the topology of the grid. |
| `x` | an object inheriting from class `"somgrid"` or `"SOM"`. |
| `type, ...` | graphical parameters. |
### Details
The class `"somgrid"` records the coordinates of the grid to be used for (batch or on-line) SOM: this has a plot method.
The plot method for class `"SOM"` plots a `[stars](../../graphics/html/stars)` plot of the representative at each grid point.
### Value
For `somgrid`, an object of class `"somgrid"`, a list with components
| | |
| --- | --- |
| `pts` | a two-column matrix giving locations for the grid points. |
| `xdim, ydim, topo` | as in the arguments to `somgrid`. |
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[batchSOM](batchsom)`, `[SOM](som)`
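### Examples

This page carries no examples; a minimal sketch of creating a grid and inspecting its components:

```
gr <- somgrid(xdim = 8, ydim = 6, topo = "hexagonal")
dim(gr$pts)        # 48 grid points, 2 coordinates each
gr$topo
plot(gr)           # plot the grid locations
```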
r None
`knn` k-Nearest Neighbour Classification
-----------------------------------------
### Description
k-nearest neighbour classification for test set from training set. For each row of the test set, the `k` nearest (in Euclidean distance) training set vectors are found, and the classification is decided by majority vote, with ties broken at random. If there are ties for the `k`th nearest vector, all candidates are included in the vote.
### Usage
```
knn(train, test, cl, k = 1, l = 0, prob = FALSE, use.all = TRUE)
```
### Arguments
| | |
| --- | --- |
| `train` | matrix or data frame of training set cases. |
| `test` | matrix or data frame of test set cases. A vector will be interpreted as a row vector for a single case. |
| `cl` | factor of true classifications of training set |
| `k` | number of neighbours considered. |
| `l` | minimum vote for definite decision, otherwise `doubt`. (More precisely, less than `k-l` dissenting votes are allowed, even if `k` is increased by ties.) |
| `prob` | If this is true, the proportion of the votes for the winning class is returned as attribute `prob`. |
| `use.all` | controls handling of ties. If true, all distances equal to the `k`th largest are included. If false, a random selection of distances equal to the `k`th is chosen to use exactly `k` neighbours. |
### Value
Factor of classifications of test set. `doubt` will be returned as `NA`.
### References
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<knn1>`, `<knn.cv>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
knn(train, test, cl, k = 3, prob=TRUE)
attributes(.Last.value)
```
r None
`olvq1` Optimized Learning Vector Quantization 1
-------------------------------------------------
### Description
Moves examples in a codebook to better represent the training set.
### Usage
```
olvq1(x, cl, codebk, niter = 40 * nrow(codebk$x), alpha = 0.3)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame of examples |
| `cl` | a vector or factor of classifications for the examples |
| `codebk` | a codebook |
| `niter` | number of iterations |
| `alpha` | constant for training |
### Details
Selects `niter` examples at random with replacement, and adjusts the nearest example in the codebook for each.
### Value
A codebook, represented as a list with components `x` and `cl` giving the examples and classes.
### References
Kohonen, T. (1990) The self-organizing map. *Proc. IEEE* **78**, 1464–1480.
Kohonen, T. (1995) *Self-Organizing Maps.* Springer, Berlin.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<lvqinit>`, `<lvqtest>`, `<lvq1>`, `<lvq2>`, `<lvq3>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
cd <- lvqinit(train, cl, 10)
lvqtest(cd, train)
cd1 <- olvq1(train, cl, cd)
lvqtest(cd1, train)
```
r None
`reduce.nn` Reduce Training Set for a k-NN Classifier
------------------------------------------------------
### Description
Reduce training set for a k-NN classifier. Used after `condense`.
### Usage
```
reduce.nn(train, ind, class)
```
### Arguments
| | |
| --- | --- |
| `train` | matrix for training set |
| `ind` | Initial list of members of the training set (from `condense`). |
| `class` | vector of classifications for training set |
### Details
All the members of the training set are tried in random order. Any which when dropped do not cause any members of the training set to be wrongly classified are dropped.
### Value
Index vector of cases to be retained.
### References
Gates, G.W. (1972) The reduced nearest neighbor rule. *IEEE Trans. Information Theory* **IT-18**, 431–432.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<condense>`, `<multiedit>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
keep <- condense(train, cl)
knn(train[keep,], test, cl[keep])
keep2 <- reduce.nn(train, keep, cl)
knn(train[keep2,], test, cl[keep2])
```
`lvqtest` Classify Test Set from LVQ Codebook
----------------------------------------------
### Description
Classify a test set by 1-NN from a specified LVQ codebook.
### Usage
```
lvqtest(codebk, test)
```
### Arguments
| | |
| --- | --- |
| `codebk` | codebook object returned by other LVQ software |
| `test` | matrix of test examples |
### Details
Uses 1-NN to classify each test example against the codebook.
### Value
Factor of classifications for each row of `test`.
### References
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<lvqinit>`, `<olvq1>`
### Examples
```
# The function is currently defined as
function(codebk, test) knn1(codebk$x, test, codebk$cl)
```
`lvq1` Learning Vector Quantization 1
--------------------------------------
### Description
Moves examples in a codebook to better represent the training set.
### Usage
```
lvq1(x, cl, codebk, niter = 100 * nrow(codebk$x), alpha = 0.03)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or data frame of examples |
| `cl` | a vector or factor of classifications for the examples |
| `codebk` | a codebook |
| `niter` | number of iterations |
| `alpha` | constant for training |
### Details
Selects `niter` examples at random with replacement, and adjusts the nearest example in the codebook for each.
### Value
A codebook, represented as a list with components `x` and `cl` giving the examples and classes.
### References
Kohonen, T. (1990) The self-organizing map. *Proc. IEEE* **78**, 1464–1480.
Kohonen, T. (1995) *Self-Organizing Maps.* Springer, Berlin.
Ripley, B. D. (1996) *Pattern Recognition and Neural Networks.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<lvqinit>`, `<olvq1>`, `<lvq2>`, `<lvq3>`, `<lvqtest>`
### Examples
```
train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
test <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
cd <- lvqinit(train, cl, 10)
lvqtest(cd, train)
cd0 <- olvq1(train, cl, cd)
lvqtest(cd0, train)
cd1 <- lvq1(train, cl, cd0)
lvqtest(cd1, train)
```
`qq.gam` QQ plots for gam model residuals
------------------------------------------
### Description
Takes a fitted `gam` object produced by `gam()` and produces QQ plots of its residuals (conditional on the fitted model coefficients and scale parameter). If the model distributional assumptions are met then usually these plots should be close to a straight line (although discrete data can yield marked random departures from this line).
### Usage
```
qq.gam(object, rep=0, level=.9,s.rep=10,
type=c("deviance","pearson","response"),
pch=".", rl.col=2, rep.col="gray80", ...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted `gam` object as produced by `gam()` (or a `glm` object). |
| `rep` | How many replicate datasets to generate to simulate quantiles of the residual distribution. `0` results in an efficient simulation free method for direct calculation, if this is possible for the object family. |
| `level` | If simulation is used for the quantiles, then reference intervals can be added to the QQ-plot; this specifies their level. 0 or less for no intervals; 1 or more to simply plot the QQ-plot for each replicate generated. |
| `s.rep` | how many times to randomize uniform quantiles to data under direct computation. |
| `type` | what sort of residuals should be plotted? See `<residuals.gam>`. |
| `pch` | plot character to use. 19 is good. |
| `rl.col` | color for the reference line on the plot. |
| `rep.col` | color for reference bands or replicate reference plots. |
| `...` | extra graphics parameters to pass to plotting functions. |
### Details
QQ-plots of the model residuals can be produced in one of two ways. The cheaper method generates reference quantiles by associating a quantile of the uniform distribution with each datum, and feeding these uniform quantiles into the quantile function associated with each datum. The resulting quantiles are then used in place of each datum to generate approximate quantiles of residuals. The residual quantiles are averaged over `s.rep` randomizations of the uniform quantiles to data.
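For a Gaussian model the construction collapses to something familiar; a base-R sketch of the idea (illustration only — `qq.gam` works through the fitted family's quantile function and averages over `s.rep` randomizations rather than using fixed `ppoints`):

```r
## reference quantiles for the residuals of a simple Gaussian fit
set.seed(1)
x <- runif(100); y <- sin(2 * pi * x) + rnorm(100, sd = 0.3)
fit <- lm(y ~ poly(x, 5))
r <- sort(residuals(fit))
u <- ppoints(length(r))                  # one uniform quantile per datum
q <- qnorm(u, sd = summary(fit)$sigma)   # fed through the model's quantile function
plot(q, r, xlab = "theoretical quantiles", ylab = "sorted residuals")
abline(0, 1, col = 2)
```

A well-fitting model gives points close to the reference line, as in the QQ-plots produced by `qq.gam`.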
The second method is direct simulation. For each replicate, data are simulated from the fitted model, and the corresponding residuals computed. This is repeated `rep` times. Quantiles are readily obtained from the empirical distribution of the residuals so obtained. From this method reference bands are also computable.
Even if `rep` is set to zero, the routine will attempt to simulate quantiles if no quantile function is available for the family. If no random deviate generating function is available for the family (e.g. for the quasi families), then a normal QQ-plot is produced. The routine conditions on the fitted model coefficients and the scale parameter estimate.
The plots are very similar to those proposed in Ben and Yohai (2004), but are substantially cheaper to produce (the interpretation of residuals for binary data in Ben and Yohai is not recommended).
Note that plots for raw residuals from fits to binary data contain almost no useful information about model fit. Whether the residual is negative or positive is decided by whether the response is zero or one. The magnitude of the residual, given its sign, is determined entirely by the fitted values. In consequence only the most gross violations of the model are detectable from QQ-plots of residuals for binary data. To really check distributional assumptions from residuals for binary data you have to be able to group the data somehow. Binomial models other than binary are ok.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
N.H. Augustin, E-A Sauleau, S.N. Wood (2012) On quantile quantile plots for generalized linear models. *Computational Statistics & Data Analysis* **56**(8), 2404-2409.
M.G. Ben and V.J. Yohai (2004) *JCGS* **13**(1), 36-47.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<choose.k>`, `<gam>`
### Examples
```
library(mgcv)
## simulate binomial data...
set.seed(0)
n.samp <- 400
dat <- gamSim(1,n=n.samp,dist="binary",scale=.33)
p <- binomial()$linkinv(dat$f) ## binomial p
n <- sample(c(1,3),n.samp,replace=TRUE) ## binomial n
dat$y <- rbinom(n,n,p)
dat$n <- n
lr.fit <- gam(y/n~s(x0)+s(x1)+s(x2)+s(x3)
,family=binomial,data=dat,weights=n,method="REML")
par(mfrow=c(2,2))
## normal QQ-plot of deviance residuals
qqnorm(residuals(lr.fit),pch=19,cex=.3)
## Quick QQ-plot of deviance residuals
qq.gam(lr.fit,pch=19,cex=.3)
## Simulation based QQ-plot with reference bands
qq.gam(lr.fit,rep=100,level=.9)
## Simulation based QQ-plot, Pearson resids, all
## simulated reference plots shown...
qq.gam(lr.fit,rep=100,level=1,type="pearson",pch=19,cex=.2)
## Now fit the wrong model and check....
pif <- gam(y~s(x0)+s(x1)+s(x2)+s(x3)
,family=poisson,data=dat,method="REML")
par(mfrow=c(2,2))
qqnorm(residuals(pif),pch=19,cex=.3)
qq.gam(pif,pch=19,cex=.3)
qq.gam(pif,rep=100,level=.9)
qq.gam(pif,rep=100,level=1,type="pearson",pch=19,cex=.2)
## Example of binary data model violation so gross that you see a problem
## on the QQ plot...
y <- c(rep(1,10),rep(0,20),rep(1,40),rep(0,10),rep(1,40),rep(0,40))
x <- 1:160
b <- glm(y~x,family=binomial)
par(mfrow=c(2,2))
## Note that the next two are not necessarily similar under gross
## model violation...
qq.gam(b)
qq.gam(b,rep=50,level=1)
## and a much better plot for detecting the problem
plot(x,residuals(b),pch=19,cex=.3)
plot(x,y);lines(x,fitted(b))
## alternative model
b <- gam(y~s(x,k=5),family=binomial,method="ML")
qq.gam(b)
qq.gam(b,rep=50,level=1)
plot(x,residuals(b),pch=19,cex=.3)
plot(b,residuals=TRUE,pch=19,cex=.3)
```
`smooth.construct.cr.smooth.spec` Penalized Cubic regression splines in GAMs
-----------------------------------------------------------------------------
### Description
`<gam>` can use univariate penalized cubic regression spline smooths, specified via terms like `s(x,bs="cr")`. `s(x,bs="cs")` specifies a penalized cubic regression spline which has had its penalty modified to shrink towards zero at high enough smoothing parameters (as the smoothing parameter goes to infinity a normal cubic spline tends to a straight line). `s(x,bs="cc")` specifies a cyclic penalized cubic regression spline smooth.
‘Cardinal’ spline bases are used: Wood (2017) sections 5.3.1 and 5.3.2 gives full details. These bases have very low setup costs. For a given basis dimension, `k`, they typically perform a little less well than thin plate regression splines, but a little better than p-splines. See `<te>` to use these bases in tensor product smooths of several variables.
Default `k` is 10.
### Usage
```
## S3 method for class 'cr.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'cs.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'cc.smooth.spec'
smooth.construct(object, data, knots)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(...,bs="cr",...)`, `s(...,bs="cs",...)` or `s(...,bs="cc",...)` |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL`. See details. |
### Details
The constructor is not normally called directly, but is rather used internally by `<gam>`. To use for basis setup it is recommended to use `[smooth.construct2](smooth.construct)`.
If knots are not supplied then they are placed evenly throughout the covariate values to which the term refers: for example, if fitting 101 data with an 11 knot spline of `x` then there would be a knot at every 10th (ordered) `x` value. The parameterization used represents the spline in terms of its values at the knots. The values at neighbouring knots are connected by sections of cubic polynomial constrained to be continuous up to and including second derivative at the knots. The resulting curve is a natural cubic spline through the values at the knots (given two extra conditions specifying that the second derivative of the curve should be zero at the two end knots).
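The default placement just described is easy to reproduce by hand (a sketch of the idea, not mgcv's internal code):

```r
## 101 data, 11 knots: a knot at every 10th ordered covariate value
set.seed(1)
x <- runif(101)
xk <- sort(x)[seq(1, 101, by = 10)]  # ordered x values 1, 11, 21, ..., 101
length(xk)                           # 11 knots, spanning the data range
```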
The shrinkage version of the smooth eigen-decomposes the wiggliness penalty matrix, and sets its 2 zero eigenvalues to small multiples of the smallest strictly positive eigenvalue. The penalty is then set to the matrix with eigenvectors corresponding to those of the original penalty, but eigenvalues set to the perturbed versions. This penalty matrix has full rank and shrinks the curve to zero at high enough smoothing parameters.
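The construction can be sketched on a toy penalty (the second-difference penalty and the perturbation factor of 0.1 are illustrative assumptions, not mgcv's exact choices):

```r
## shrinkage modification of a rank-deficient wiggliness penalty
D <- diff(diag(8), differences = 2)  # second-difference matrix (6 x 8)
S <- crossprod(D)                    # penalty: rank 6, so two zero eigenvalues
es <- eigen(S, symmetric = TRUE)
ev <- es$values
ev[ev < max(ev) * 1e-9] <- 0           # the (numerically) zero eigenvalues
ev[ev == 0] <- 0.1 * min(ev[ev > 0])   # small multiple of smallest positive one
S.shrink <- es$vectors %*% (ev * t(es$vectors))
qr(S.shrink)$rank                      # now full rank: the penalty also shrinks
```

Because the modified penalty is positive definite, letting the smoothing parameter grow shrinks the whole curve towards zero rather than towards a straight line.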
Note that the cyclic smoother will wrap at the smallest and largest covariate values, unless knots are supplied. If only two knots are supplied then they are taken as the end points of the smoother (provided all the data lie between them), and the remaining knots are generated automatically.
The cyclic smooth is not subject to the condition that second derivatives go to zero at the first and last knots.
### Value
An object of class `"cr.smooth"` `"cs.smooth"` or `"cyclic.smooth"`. In addition to the usual elements of a smooth class documented under `<smooth.construct>`, this object will contain:
| | |
| --- | --- |
| `xp` | giving the knot locations used to generate the basis. |
| `F` | For class `"cr.smooth"` and `"cs.smooth"` objects `t(F)` transforms function values at the knots to second derivatives at the knots. |
| `BD` | class `"cyclic.smooth"` objects include matrix `BD` which transforms function values at the knots to second derivatives at the knots. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### Examples
```
## cyclic spline example...
require(mgcv)
set.seed(6)
x <- sort(runif(200)*10)
z <- runif(200)
f <- sin(x*2*pi/10)+.5
y <- rpois(exp(f),exp(f))
## finished simulating data, now fit model...
b <- gam(y ~ s(x,bs="cc",k=12) + s(z),family=poisson,
knots=list(x=seq(0,10,length=12)))
## or more simply
b <- gam(y ~ s(x,bs="cc",k=12) + s(z),family=poisson,
knots=list(x=c(0,10)))
## plot results...
par(mfrow=c(2,2))
plot(x,y);plot(b,select=1,shade=TRUE);lines(x,f-mean(f),col=2)
plot(b,select=2,shade=TRUE);plot(fitted(b),residuals(b))
```
`gam2objective` Objective functions for GAM smoothing parameter estimation
---------------------------------------------------------------------------
### Description
Estimation of GAM smoothing parameters is most stable if optimization of the UBRE/AIC or GCV score is outer to the penalized iteratively re-weighted least squares scheme used to estimate the model given smoothing parameters. These functions evaluate the GCV/UBRE/AIC score of a GAM model, given smoothing parameters, in a manner suitable for use by `[optim](../../stats/html/optim)` or `[nlm](../../stats/html/nlm)`. Not normally called directly, but rather service routines for `<gam.outer>`.
### Usage
```
gam2objective(lsp,args,...)
gam2derivative(lsp,args,...)
```
### Arguments
| | |
| --- | --- |
| `lsp` | The log smoothing parameters. |
| `args` | List of arguments required to call `<gam.fit3>`. |
| `...` | Other arguments for passing to `gam.fit3`. |
### Details
`gam2objective` and `gam2derivative` are functions suitable for calling by `[optim](../../stats/html/optim)`, to evaluate the GCV/UBRE/AIC score and its derivatives w.r.t. log smoothing parameters.
`gam4objective` is an equivalent to `gam2objective`, suitable for optimization by `[nlm](../../stats/html/nlm)` - derivatives of the GCV/UBRE/AIC function are calculated and returned as attributes.
The basic idea of optimizing smoothing parameters ‘outer’ to the P-IRLS loop was first proposed in O'Sullivan et al. (1986).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
O'Sullivan, Yandell & Raynor (1986) Automatic smoothing of regression functions in generalized linear models. J. Amer. Statist. Assoc. 81:96-103.
Wood, S.N. (2008) Fast stable direct fitting and smoothness selection for generalized additive models. J.R.Statist.Soc.B 70(3):495-518
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam.fit3>`, `<gam>`, `<magic>`
`residuals.gam` Generalized Additive Model residuals
-----------------------------------------------------
### Description
Returns residuals for a fitted `gam` model object. Pearson, deviance, working and response residuals are available.
### Usage
```
## S3 method for class 'gam'
residuals(object, type = "deviance",...)
```
### Arguments
| | |
| --- | --- |
| `object` | a `gam` fitted model object. |
| `type` | the type of residuals wanted. Usually one of `"deviance"`, `"pearson"`,`"scaled.pearson"`, `"working"`, or `"response"`. |
| `...` | other arguments. |
### Details
Response residuals are the raw residuals (data minus fitted values). Scaled Pearson residuals are raw residuals divided by the standard deviation of the data according to the model mean variance relationship and estimated scale parameter. Pearson residuals are the same, but multiplied by the square root of the scale parameter (so they are independent of the scale parameter): *(y-m)/V(m)^0.5*, where *y* is the data, *m* is the model fitted value and *V* is the model mean-variance relationship. Both are provided since not all texts agree on the definition of Pearson residuals. Deviance residuals simply return the deviance residuals defined by the model family. Working residuals are the residuals returned from model fitting at convergence.
Families can supply their own residual function, which is used in place of the standard function if present, (e.g. `[cox.ph](coxph)`).
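The definitions are easy to check by hand on a GLM (a base-R sketch; the same definitions apply to `gam` fits):

```r
## check the response and Pearson residual definitions on a Poisson fit
set.seed(3)
x <- runif(200); mu <- exp(1 + x)
y <- rpois(200, mu)
fit <- glm(y ~ x, family = poisson)
m <- fitted(fit)
r.resp <- y - m                                  # response: data minus fitted
r.pear <- (y - m) / sqrt(poisson()$variance(m))  # (y - m)/V(m)^0.5
max(abs(r.pear - residuals(fit, type = "pearson")))  # essentially zero
```

For the Poisson family the scale parameter is 1, so Pearson and scaled Pearson residuals coincide.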
### Value
A vector of residuals.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<gam>`
`XWXd` Internal functions for discretized model matrix handling
----------------------------------------------------------------
### Description
Routines for computing with discretized model matrices as described in Wood et al. (2017) and Li and Wood (2019).
### Usage
```
XWXd(X,w,k,ks,ts,dt,v,qc,nthreads=1,drop=NULL,ar.stop=-1,ar.row=-1,ar.w=-1,
lt=NULL,rt=NULL)
XWyd(X,w,y,k,ks,ts,dt,v,qc,drop=NULL,ar.stop=-1,ar.row=-1,ar.w=-1,lt=NULL)
Xbd(X,beta,k,ks,ts,dt,v,qc,drop=NULL,lt=NULL)
diagXVXd(X,V,k,ks,ts,dt,v,qc,drop=NULL,nthreads=1,lt=NULL,rt=NULL)
```
### Arguments
| | |
| --- | --- |
| `X` | A list of the matrices containing the unique rows of model matrices for terms of a full model matrix, or the model matrices of the term margins. If term subsetting arguments `lt` and `rt` are non-NULL then this requires an `"lpip"` attribute: see details. The elements of `X` may be sparse matrices of class `"dgCMatrix"`, in which case the list requires attributes `"r"` and `"off"` defining reverse indices (see details). |
| `w` | An n-vector of weights |
| `y` | n-vector of data. |
| `beta` | coefficient vector. |
| `k` | A matrix whose columns are index n-vectors each selecting the rows of an X[[i]] required to create the full matrix. |
| `ks` | The ith term has index vectors `ks[i,1]:(ks[i,2]-1)`. The corresponding full model matrices are summed over. |
| `ts` | The element of `X` at which each model term starts. |
| `dt` | How many elements of `X` contribute to each term. |
| `v` | `v[[i]]` is Householder vector for ith term, if `qc[i]>0`. |
| `qc` | if `qc[i]>0` then term has a constraint. |
| `nthreads` | number of threads to use |
| `drop` | list of columns of model matrix/parameters to drop |
| `ar.stop` | Negative to ignore. Otherwise sum rows `(ar.stop[i-1]+1):ar.stop[i]` of the rows selected by `ar.row` and weighted by `ar.w` to get ith row of model matrix to use. |
| `ar.row` | extract these rows... |
| `ar.w` | weight by these weights, and sum up according to `ar.stop`. Used to implement AR models. |
| `lt` | use only columns of X corresponding to these model matrix terms (for left hand `X` in `XWXd`). If `NULL` set to `rt`. |
| `rt` | as `lt` for right hand `X`. If `NULL` set to `lt`. If `lt` and `rt` are `NULL` use all columns. |
| `V` | Coefficient covariance matrix. |
### Details
These functions are really intended to be internal, but are exported so that they can be used in the initialization code of families without problem. They are primarily used by `<bam>` to implement the methods given in the references. `XWXd` produces *X'WX*, `XWyd` produces *X'Wy*, `Xbd` produces *Xb* and `diagXVXd` produces the diagonal of *XVX'*.
The `"lpip"` attribute of `X` is a list of the coefficient indices for each term. Required if subsetting via `lt` and `rt`.
`X` can be a list of sparse matrices of class `"dgCMatrix"`, in which case reverse indices are needed, mapping stored matrix rows to rows in the full matrix (that is the reverse of `k`, which maps full matrix rows to the stored unique matrix rows). `r` is the same dimension as `k` while `off` is a list with as many elements as `k` has columns. `r` and `off` are supplied as attributes to `X`. For simplicity let `r` and `off` denote a single column and element corresponding to each other: then `r[off[j]:(off[j+1]-1)]` contains the rows of the full matrix corresponding to row `j` of the stored matrix. The reverse indices are essential for efficient computation with sparse matrices. See the example code for how to create them efficiently from the forward index matrix, `k`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., Li, Z., Shaddick, G. & Augustin N.H. (2017) Generalized additive models for gigadata: modelling the UK black smoke network daily data. Journal of the American Statistical Association. 112(519):1199-1210 doi: [10.1080/01621459.2016.1195744](https://doi.org/10.1080/01621459.2016.1195744)
Li, Z & S.N. Wood (2019) Faster model matrix crossproducts for large generalized linear models with discretized covariates. Statistics and Computing. doi: [10.1007/s11222-019-09864-2](https://doi.org/10.1007/s11222-019-09864-2)
### Examples
```
library(mgcv);library(Matrix)
## simulate some data creating a marginal matrix sequence...
set.seed(0);n <- 4000
dat <- gamSim(1,n=n,dist="normal",scale=2)
dat$x4 <- runif(n)
dat$y <- dat$y + 3*exp(dat$x4*15-5)/(1+exp(dat$x4*15-5))
dat$fac <- factor(sample(1:20,n,replace=TRUE))
G <- gam(y ~ te(x0,x2,k=5,bs="bs",m=1)+s(x1)+s(x4)+s(x3,fac,bs="fs"),
fit=FALSE,data=dat,discrete=TRUE)
p <- ncol(G$X)
## create a sparse version...
Xs <- list(); r <- G$kd*0; off <- list()
for (i in 1:length(G$Xd)) Xs[[i]] <- as(G$Xd[[i]],"dgCMatrix")
for (j in 1:nrow(G$ks)) { ## create the reverse indices...
nr <- nrow(Xs[[j]]) ## make sure we always tab to final stored row
for (i in G$ks[j,1]:(G$ks[j,2]-1)) {
r[,i] <- (1:length(G$kd[,i]))[order(G$kd[,i])]
off[[i]] <- cumsum(c(1,tabulate(G$kd[,i],nbins=nr)))-1
}
}
attr(Xs,"off") <- off;attr(Xs,"r") <- r
par(mfrow=c(2,3))
beta <- runif(p)
Xb0 <- Xbd(G$Xd,beta,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
Xb1 <- Xbd(Xs,beta,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
range(Xb0-Xb1);plot(Xb0,Xb1,pch=".")
bb <- cbind(beta,beta+runif(p)*.3)
Xb0 <- Xbd(G$Xd,bb,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
Xb1 <- Xbd(Xs,bb,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
range(Xb0-Xb1);plot(Xb0,Xb1,pch=".")
w <- runif(n)
XWy0 <- XWyd(G$Xd,w,y=dat$y,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
XWy1 <- XWyd(Xs,w,y=dat$y,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
range(XWy1-XWy0);plot(XWy1,XWy0,pch=".")
yy <- cbind(dat$y,dat$y+runif(n)-.5)
XWy0 <- XWyd(G$Xd,w,y=yy,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
XWy1 <- XWyd(Xs,w,y=yy,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
range(XWy1-XWy0);plot(XWy1,XWy0,pch=".")
A <- XWXd(G$Xd,w,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
B <- XWXd(Xs,w,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
range(A-B);plot(A,B,pch=".")
V <- crossprod(matrix(runif(p*p),p,p))
ii <- c(20:30,100:200)
jj <- c(50:90,150:160)
V[ii,jj] <- 0;V[jj,ii] <- 0
d1 <- diagXVXd(G$Xd,V,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
Vs <- as(V,"dgCMatrix")
d2 <- diagXVXd(Xs,Vs,G$kd,G$ks,G$ts,G$dt,G$v,G$qc)
range(d1-d2);plot(d1,d2,pch=".")
```
`new.name` Obtain a name for a new variable that is not already in use
-----------------------------------------------------------------------
### Description
`<gamm>` works by transforming a GAMM into something that can be estimated by `[lme](../../nlme/html/lme)`, but this involves creating new variables, the names of which should not clash with the names of other variables on which the model depends. This simple service routine checks a suggested name against a list of those in use, and if necessary modifies it so that there is no clash.
### Usage
```
new.name(proposed,old.names)
```
### Arguments
| | |
| --- | --- |
| `proposed` | a suggested name |
| `old.names` | An array of names that must not be duplicated |
### Value
A name that is not in `old.names`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gamm>`
### Examples
```
require(mgcv)
old <- c("a","tuba","is","tubby")
new.name("tubby",old)
```
`missing.data` Missing data in GAMs
------------------------------------
### Description
If there are missing values in the response or covariates of a GAM then the default is simply to use only the ‘complete cases’. If there are many missing covariates, this can get rather wasteful. One possibility is then to use imputation. Another is to substitute a simple random effects model in which the `by` variable mechanism is used to set `s(x)` to zero for any missing `x`, while a Gaussian random effect is then substituted for the ‘missing’ s(x). See the example for details of how this works, and `<gam.models>` for the necessary background on `by` variables.
### Author(s)
Simon Wood <[email protected]>
### See Also
`<gam.vcomp>`, `<gam.models>`, `<s>`, `<smooth.construct.re.smooth.spec>`,`<gam>`
### Examples
```
## The example takes a couple of minutes to run...
require(mgcv)
par(mfrow=c(4,4),mar=c(4,4,1,1))
for (sim in c(1,7)) { ## cycle over uncorrelated and correlated covariates
n <- 350;set.seed(2)
## simulate data but randomly drop 300 covariate measurements
## leaving only 50 complete cases...
dat <- gamSim(sim,n=n,scale=3) ## 1 or 7
drop <- sample(1:n,300) ## indices of the covariate values to drop
for (i in 2:5) dat[drop[1:75+(i-2)*75],i] <- NA
## process data.frame producing binary indicators of missingness,
## mx0, mx1 etc. For each missing value create a level of a factor
## idx0, idx1, etc. So idx0 has as many levels as x0 has missing
## values. Replace the NA's in each variable by the mean of the
## non missing for that variable...
dname <- names(dat)[2:5]
dat1 <- dat
for (i in 1:4) {
by.name <- paste("m",dname[i],sep="")
dat1[[by.name]] <- is.na(dat1[[dname[i]]])
dat1[[dname[i]]][dat1[[by.name]]] <- mean(dat1[[dname[i]]],na.rm=TRUE)
lev <- rep(1,n);lev[dat1[[by.name]]] <- 1:sum(dat1[[by.name]])
id.name <- paste("id",dname[i],sep="")
dat1[[id.name]] <- factor(lev)
dat1[[by.name]] <- as.numeric(dat1[[by.name]])
}
## Fit a gam, in which any missing value contributes zero
## to the linear predictor from its smooth, but each
## missing has its own random effect, with the random effect
## variances being specific to the variable. e.g.
## for s(x0,by=ordered(!mx0)), declaring the `by' as an ordered
## factor ensures that the smooth is centred, but multiplied
## by zero when mx0 is one (indicating a missing x0). This means
## that any value (within range) can be put in place of the
## NA for x0. s(idx0,bs="re",by=mx0) produces a separate Gaussian
## random effect for each missing value of x0 (in place of s(x0),
## effectively). The `by' variable simply sets the random effect to
## zero when x0 is non-missing, so that we can set idx0 to any
## existing level for these cases.
b <- bam(y~s(x0,by=ordered(!mx0))+s(x1,by=ordered(!mx1))+
s(x2,by=ordered(!mx2))+s(x3,by=ordered(!mx3))+
s(idx0,bs="re",by=mx0)+s(idx1,bs="re",by=mx1)+
s(idx2,bs="re",by=mx2)+s(idx3,bs="re",by=mx3)
,data=dat1,discrete=TRUE)
for (i in 1:4) plot(b,select=i) ## plot the smooth effects from b
## fit the model to the `complete case' data...
b2 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="REML")
plot(b2) ## plot the complete case results
}
```
`predict.bam` Prediction from fitted Big Additive Model model
--------------------------------------------------------------
### Description
Essentially a wrapper for `<predict.gam>` for prediction from a model fitted by `<bam>`. Can compute on a parallel cluster.
Takes a fitted `bam` object produced by `<bam>` and produces predictions given a new set of values for the model covariates or the original values used for the model fit. Predictions can be accompanied by standard errors, based on the posterior distribution of the model coefficients. The routine can optionally return the matrix by which the model coefficients must be pre-multiplied in order to yield the values of the linear predictor at the supplied covariate values: this is useful for obtaining credible regions for quantities derived from the model (e.g. derivatives of smooths), and for lookup table prediction outside `R`.
### Usage
```
## S3 method for class 'bam'
predict(object,newdata,type="link",se.fit=FALSE,terms=NULL,
exclude=NULL,block.size=50000,newdata.guaranteed=FALSE,
na.action=na.pass,cluster=NULL,discrete=TRUE,n.threads=1,...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted `bam` object as produced by `<bam>`. |
| `newdata` | A data frame or list containing the values of the model covariates at which predictions are required. If this is not provided then predictions corresponding to the original data are returned. If `newdata` is provided then it should contain all the variables needed for prediction: a warning is generated if not. |
| `type` | When this has the value `"link"` (default) the linear predictor (possibly with associated standard errors) is returned. When `type="terms"` each component of the linear predictor is returned separately (possibly with standard errors): this includes parametric model components, followed by each smooth component, but excludes any offset and any intercept. `type="iterms"` is the same, except that any standard errors returned for smooth components will include the uncertainty about the intercept/overall mean. When `type="response"` predictions on the scale of the response are returned (possibly with approximate standard errors). When `type="lpmatrix"` then a matrix is returned which yields the values of the linear predictor (minus any offset) when postmultiplied by the parameter vector (in this case `se.fit` is ignored). The latter option is most useful for getting variance estimates for quantities derived from the model: for example integrated quantities, or derivatives of smooths. A linear predictor matrix can also be used to implement approximate prediction outside `R` (see example code, below). |
| `se.fit` | when this is TRUE (not default) standard error estimates are returned for each prediction. |
| `terms` | if `type=="terms"` or `type="iterms"` then only results for the terms (smooth or parametric) named in this array will be returned. Otherwise any smooth terms not named in this array will be set to zero. If `NULL` then all terms are included. |
| `exclude` | if `type=="terms"` or `type="iterms"` then terms (smooth or parametric) named in this array will not be returned. Otherwise any smooth terms named in this array will be set to zero. If `NULL` then no terms are excluded. To avoid supplying covariate values for excluded terms, set `newdata.guaranteed=TRUE`, but note that this skips all checks of `newdata`. |
| `block.size` | maximum number of predictions to process per call to underlying code: larger is quicker, but more memory intensive. |
| `newdata.guaranteed` | Set to `TRUE` to turn off all checking of `newdata` except for sanity of factor levels: this can speed things up for large prediction tasks, but `newdata` must be complete, with no `NA` values for predictors required in the model. |
| `na.action` | what to do about `NA` values in `newdata`. With the default `na.pass`, any row of `newdata` containing `NA` values for required predictors, gives rise to `NA` predictions (even if the term concerned has no `NA` predictors). `na.exclude` or `na.omit` result in the dropping of `newdata` rows, if they contain any `NA` values for required predictors. If `newdata` is missing then `NA` handling is determined from `object$na.action`. |
| `cluster` | `predict.bam` can compute in parallel using [parLapply](../../parallel/html/clusterapply) from the `parallel` package, if it is supplied with a cluster on which to do this (a cluster here can be some cores of a single machine). See details and example code for `<bam>`. |
| `discrete` | if `TRUE` then discrete prediction methods used with model fitted by discrete methods. `FALSE` for regular prediction. See details. |
| `n.threads` | if `se.fit=TRUE` and discrete prediction is used then parallel computation can be used to speed up the standard error calculation. This specifies the number of threads to use. |
| `...` | other arguments. |
### Details
The standard errors produced by `predict.gam` are based on the Bayesian posterior covariance matrix of the parameters `Vp` in the fitted bam object.
To facilitate plotting with `[termplot](../../stats/html/termplot)`, if `object` possesses an attribute `"para.only"` and `type=="terms"` then only parametric terms of order 1 are returned (i.e. those that `termplot` can handle).
Note that, in common with other prediction functions, any offset supplied to `<bam>` as an argument is always ignored when predicting, unlike offsets specified in the bam model formula.
See the examples in `<predict.gam>` for how to use the `lpmatrix` for obtaining credible regions for quantities derived from the model.
When `discrete=TRUE` the prediction data in `newdata` is discretized in the same way as is done when using discrete fitting methods with `bam`. However the discretization grids are not currently identical to those used during fitting. Instead, discretization is done afresh for the prediction data. This means that if you are predicting for a relatively small set of prediction data, or on a regular grid, then the results may in fact be identical to those obtained without discretization. The disadvantage to this approach is that if you make predictions with a large data frame, and then split it into smaller data frames to make the predictions again, the results may differ slightly, because of slightly different discretization errors.
### Value
If `type=="lpmatrix"` then a matrix is returned which will give a vector of linear predictor values (minus any offset) at the supplied covariate values, when applied to the model coefficient vector. Otherwise, if `se.fit` is `TRUE` then a 2 item list is returned with items (both arrays) `fit` and `se.fit` containing predictions and associated standard error estimates, otherwise an array of predictions is returned. The dimensions of the returned arrays depend on whether `type` is `"terms"` or not: if it is then the array is 2 dimensional with each term in the linear predictor separate, otherwise the array is 1 dimensional and contains the linear predictor/predicted values (or corresponding s.e.s). The linear predictor returned termwise will not include the offset or the intercept.
`newdata` can be a data frame, list or model.frame: if it's a model frame then all variables must be supplied.
### WARNING
Predictions are likely to be incorrect if data dependent transformations of the covariates are used within calls to smooths. See examples in `<predict.gam>`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
The design is inspired by the S function of the same name described in Chambers and Hastie (1993) (but is not a clone).
### References
Chambers and Hastie (1993) Statistical Models in S. Chapman & Hall.
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics.
Wood S.N. (2006b) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC Press.
### See Also
`<bam>`, `<predict.gam>`
### Examples
```
## for parallel computing see examples for ?bam
## for general usage follow examples in ?predict.gam
```
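As a concrete illustration, a minimal sketch (not part of the package's own examples; it assumes `mgcv` is available and uses data simulated with `gamSim`) of discrete and termwise prediction:

```r
library(mgcv)
set.seed(3)
dat <- gamSim(1, n = 25000, dist = "normal", scale = 20)
## fit by discrete methods...
b <- bam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat, discrete = TRUE)
pd <- dat[1:100, ] ## prediction data
fv <- predict(b, newdata = pd, se.fit = TRUE) ## regular prediction
fvd <- predict(b, newdata = pd, discrete = TRUE) ## discrete prediction
## termwise prediction, with one smooth excluded...
ft <- predict(b, newdata = pd, type = "terms", exclude = "s(x3)")
```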
r None
`concurvity` GAM concurvity measures
-------------------------------------
### Description
Produces summary measures of concurvity between `<gam>` components.
### Usage
```
concurvity(b,full=TRUE)
```
### Arguments
| | |
| --- | --- |
| `b` | An object inheriting from class `"gam"`. |
| `full` | If `TRUE` then concurvity of each term with the whole of the rest of the model is considered. If `FALSE` then pairwise concurvity measures between each smooth term (as well as the parametric component) are considered. |
### Details
Concurvity occurs when some smooth term in a model could be approximated by one or more of the other smooth terms in the model. This is often the case when a smooth of space is included in a model, along with smooths of other covariates that also vary more or less smoothly in space. Similarly it tends to be an issue in models including a smooth of time, along with smooths of other time varying covariates.
Concurvity can be viewed as a generalization of co-linearity, and causes similar problems of interpretation. It can also make estimates somewhat unstable (so that they become sensitive to apparently innocuous modelling details, for example).
This routine computes three related indices of concurvity, all bounded between 0 and 1, with 0 indicating no problem, and 1 indicating total lack of identifiability. The three indices are all based on the idea that a smooth term, f, in the model can be decomposed into a part, g, that lies entirely in the space of one or more other terms in the model, and a remainder part that is completely within the term's own space. If g makes up a large part of f then there is a concurvity problem. The indices used are all based on the square of ||g||/||f||, that is the ratio of the squared Euclidean norms of the vectors of f and g evaluated at the observed covariate values.
The three measures are as follows
worst
This is the largest value that the square of ||g||/||f|| could take for any coefficient vector. This is a fairly pessimistic measure, as it looks at the worst case irrespective of data. This is the only measure that is symmetric.
observed
This just returns the value of the square of ||g||/||f|| according to the estimated coefficients. This could be a bit over-optimistic about the potential for a problem in some cases.
estimate
This is the squared F-norm of the basis for g divided by the F-norm of the basis for f. It is a measure of the extent to which the f basis can be explained by the g basis. It does not suffer from the pessimism or potential for over-optimism of the previous two measures, but is less easy to understand.
### Value
If `full=TRUE` a matrix with one column for each term and one row for each of the 3 concurvity measures detailed above. If `full=FALSE` a list of 3 matrices, one for each of the three concurvity measures detailed above. Each row of the matrix relates to how the model terms depend on the model term supplying that row's name.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### Examples
```
library(mgcv)
## simulate data with concurvity...
set.seed(8);n<- 200
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
t <- sort(runif(n)) ## first covariate
## make covariate x a smooth function of t + noise...
x <- f2(t) + rnorm(n)*3
## simulate response dependent on t and x...
y <- sin(4*pi*t) + exp(x/20) + rnorm(n)*.3
## fit model...
b <- gam(y ~ s(t,k=15) + s(x,k=15),method="REML")
## assess concurvity between each term and `rest of model'...
concurvity(b)
## ... and now look at pairwise concurvity between terms...
concurvity(b,full=FALSE)
```
r None
`magic.post.proc` Auxiliary information from magic fit
--------------------------------------------------------
### Description
Obtains Bayesian parameter covariance matrix, frequentist parameter estimator covariance matrix, estimated degrees of freedom for each parameter and leading diagonal of influence/hat matrix, for a penalized regression estimated by `magic`.
### Usage
```
magic.post.proc(X,object,w=NULL)
```
### Arguments
| | |
| --- | --- |
| `X` | is the model matrix. |
| `object` | is the list returned by `magic` after fitting the model with model matrix `X`. |
| `w` | is the weight vector used in fitting, or the weight matrix used in fitting (i.e. supplied to `magic`, if one was.). If `w` is a vector then its elements are typically proportional to reciprocal variances (but could even be negative). If `w` is a matrix then `t(w)%*%w` should typically give the inverse of the covariance matrix of the response data supplied to `magic`. |
### Details
`object` contains `rV` (*V*, say), and `scale` (*s*, say) which can be used to obtain the required quantities as follows. The Bayesian covariance matrix of the parameters is *VV's*. The vector of estimated degrees of freedom for each parameter is the leading diagonal of *VV'X'W'WX* where *W* is either the weight matrix `w` or the matrix `diag(w)`. The hat/influence matrix is given by *WXVV'X'W'* .
The frequentist parameter estimator covariance matrix is *VV'X'W'WXVV's*: it is sometimes useful for testing terms for equality to zero.
### Value
A list with four items:
| | |
| --- | --- |
| `Vb` | the Bayesian covariance matrix of the model parameters. |
| `Ve` | the frequentist covariance matrix for the parameter estimators. |
| `hat` | the leading diagonal of the hat (influence) matrix. |
| `edf` | the array giving the estimated degrees of freedom associated with each parameter. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<magic>`
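### Examples

A minimal sketch of typical use (an illustrative assumption, not from the package's own examples: the smooth setup via `smoothCon` and the single penalty starting at column 1 via `off=1`):

```r
library(mgcv)
set.seed(1)
n <- 100
x <- runif(n)
y <- sin(2*pi*x) + rnorm(n)*0.2
## set up a rank 10 smooth of x, extracting model matrix and penalty...
sm <- smoothCon(s(x, k = 10), data = data.frame(x = x))[[1]]
## fit by magic, estimating the smoothing parameter (sp = -1)...
fit <- magic(y, sm$X, sp = -1, S = sm$S, off = 1)
## post-process to obtain covariance matrices, edf and hat values...
mp <- magic.post.proc(sm$X, fit, w = rep(1, n))
sum(mp$edf) ## total effective degrees of freedom
range(mp$hat) ## leverages
```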
r None
`smooth.terms` Smooth terms in GAM
-----------------------------------
### Description
Smooth terms are specified in a `<gam>` formula using `<s>`, `<te>`, `[ti](te)` and `<t2>` terms. Various smooth classes are available, for different modelling tasks, and users can add smooth classes (see `[user.defined.smooth](smooth.construct)`). What defines a smooth class is the basis used to represent the smooth function and quadratic penalty (or multiple penalties) used to penalize the basis coefficients in order to control the degree of smoothness. Smooth classes are invoked directly by `s` terms, or as building blocks for tensor product smoothing via `te`, `ti` or `t2` terms (only smooth classes with single penalties can be used in tensor products). The smooths built into the `mgcv` package are all based one way or another on low rank versions of splines. For the full rank versions see Wahba (1990).
Note that smooths can be used rather flexibly in `gam` models. In particular the linear predictor of the GAM can depend on (a discrete approximation to) any linear functional of a smooth term, using `by` variables and the ‘summation convention’ explained in `<linear.functional.terms>`.
The single penalty built in smooth classes are summarized as follows
Thin plate regression splines
`bs="tp"`. These are low rank isotropic smoothers of any number of covariates. By isotropic is meant that rotation of the covariate co-ordinate system will not change the result of smoothing. By low rank is meant that they have far fewer coefficients than there are data to smooth. They are reduced rank versions of the thin plate splines and use the thin plate spline penalty. They are the default smooth for `s` terms because there is a defined sense in which they are the optimal smoother of any given basis dimension/rank (Wood, 2003). Thin plate regression splines do not have ‘knots’ (at least not in any conventional sense): a truncated eigen-decomposition is used to achieve the rank reduction. See `[tprs](smooth.construct.tp.smooth.spec)` for further details.
`bs="ts"` is as `"tp"` but with a modification to the smoothing penalty, so that the null space is also penalized slightly and the whole term can therefore be shrunk to zero.
Duchon splines
`bs="ds"`. These generalize thin plate splines. In particular, for any given number of covariates they allow lower orders of derivative in the penalty than thin plate splines (and hence a smaller null space). See `[Duchon.spline](smooth.construct.ds.smooth.spec)` for further details.
Cubic regression splines
`bs="cr"`. These have a cubic spline basis defined by a modest sized set of knots spread evenly through the covariate values. They are penalized by the conventional integrated squared second derivative cubic spline penalty. For details see `[cubic.regression.spline](smooth.construct.cr.smooth.spec)` and e.g. Wood (2006a).
`bs="cs"` specifies a shrinkage version of `"cr"`.
`bs="cc"` specifies a cyclic cubic regression spline (see [cyclic.cubic.spline](smooth.construct.cr.smooth.spec)), i.e. a penalized cubic regression spline whose ends match, up to second derivative.
Splines on the sphere
`bs="sos"`. These are two dimensional splines on a sphere. Arguments are latitude and longitude, and they are the analogue of thin plate splines for the sphere. Useful for data sampled over a large portion of the globe, when isotropy is appropriate. See `[Spherical.Spline](smooth.construct.sos.smooth.spec)` for details.
P-splines
`bs="ps"`. These are P-splines as proposed by Eilers and Marx (1996). They combine a B-spline basis, with a discrete penalty on the basis coefficients, and any sane combination of penalty and basis order is allowed. Although this penalty has no exact interpretation in terms of function shape, in the way that the derivative penalties do, P-splines perform almost as well as conventional splines in many standard applications, and can perform better in particular cases where it is advantageous to mix different orders of basis and penalty.
`bs="cp"` gives a cyclic version of a P-spline (see [cyclic.p.spline](smooth.construct.ps.smooth.spec)).
Random effects
`bs="re"`. These are parametric terms penalized by a ridge penalty (i.e. the identity matrix). When such a smooth has multiple arguments then it represents the parametric interaction of these arguments, with the coefficients penalized by a ridge penalty. The ridge penalty is equivalent to an assumption that the coefficients are i.i.d. normal random effects. See `<smooth.construct.re.smooth.spec>`.
Markov Random Fields
`bs="mrf"`. These are popular when space is split up into discrete contiguous geographic units (districts of a town, for example). In this case a simple smoothing penalty is constructed based on the neighbourhood structure of the geographic units. See `[mrf](smooth.construct.mrf.smooth.spec)` for details and an example.
Gaussian process smooths
`bs="gp"`. Gaussian process models with a variety of simple correlation functions can be represented as smooths. See `[gp.smooth](smooth.construct.gp.smooth.spec)` for details.
Soap film smooths
`bs="so"` (actually not single penaltied, but `bs="sw"` and `bs="sf"` allows splitting into single penalty components for use in tensor product smoothing). These are finite area smoothers designed to smooth within complicated geographical boundaries, where the boundary matters (e.g. you do not want to smooth across boundary features). See `[soap](smooth.construct.so.smooth.spec)` for details.
Broadly speaking the default penalized thin plate regression splines tend to give the best MSE performance, but they are slower to set up than the other bases. The knot based penalized cubic regression splines (with derivative based penalties) usually come next in MSE performance, with the P-splines doing just a little worse. However the P-splines are useful in non-standard situations.
All the preceding classes (and any user defined smooths with single penalties) may be used as marginal bases for tensor product smooths specified via `<te>`, `[ti](te)` or `<t2>` terms. Tensor product smooths are smooth functions of several variables where the basis is built up from tensor products of bases for smooths of fewer (usually one) variable(s) (marginal bases). The multiple penalties for these smooths are produced automatically from the penalties of the marginal smooths. Wood (2006b) and Wood, Scheipl and Faraway (2012), give the general recipe for these constructions.
te
`te` smooths have one penalty per marginal basis, each of which is interpretable in a similar way to the marginal penalty from which it is derived. See Wood (2006b).
ti
`ti` smooths exclude the basis functions associated with the ‘main effects’ of the marginal smooths, plus interactions other than the highest order specified. These provide a stable and interpretable way of specifying models with main effects and interactions. For example if we are interested in the linear predictor *f1(x) + f2(z) + f3(x,z)*, we might use model formula `y~s(x)+s(z)+ti(x,z)` or `y~ti(x)+ti(z)+ti(x,z)`. A similar construction involving `te` terms instead will be much less statistically stable.
t2
`t2` uses an alternative tensor product construction that results in more penalties each having a simple non-overlapping structure allowing use with the `gamm4` package. It is a natural generalization of the SS-ANOVA construction, but the penalties are a little harder to interpret. See Wood, Scheipl and Faraway (2012/13).
Tensor product smooths often perform better than isotropic smooths when the covariates of a smooth are not naturally on the same scale, so that their relative scaling is arbitrary. For example, if smoothing with respect to time and distance, an isotropic smoother will give very different results if the units are cm and minutes compared to if the units are metres and seconds: a tensor product smooth will give the same answer in both cases (see `<te>` for an example of this). Note that `te` terms are knot based, and the thin plate splines seem to offer no advantage over cubic or P-splines as marginal bases.
Some further specialist smoothers that are not suitable for use in tensor products are also available.
Adaptive smoothers
`bs="ad"` Univariate and bivariate adaptive smooths are available (see `[adaptive.smooth](smooth.construct.ad.smooth.spec)`). These are appropriate when the degree of smoothing should itself vary with the covariates to be smoothed, and the data contain sufficient information to be able to estimate the appropriate variation. Because this flexibility is achieved by splitting the penalty into several ‘basis penalties’ these terms are not suitable as components of tensor product smooths, and are not supported by `gamm`.
Factor smooth interactions
`bs="fs"` Smooth factor interactions are often produced using `by` variables (see `<gam.models>`), but a special smoother class (see `[factor.smooth.interaction](smooth.construct.fs.smooth.spec)`) is available for the case in which a smooth is required at each of a large number of factor levels (for example a smooth for each patient in a study), and each smooth should have the same smoothing parameter. The `"fs"` smoothers are set up to be efficient when used with `<gamm>`, and have penalties on each null space component (i.e. they are fully ‘random effects’).
### Author(s)
Simon Wood <[email protected]>
### References
Eilers, P.H.C. and B.D. Marx (1996) Flexible Smoothing with B-splines and Penalties. Statistical Science, 11(2):89-121
Wahba (1990) Spline Models of Observational Data. SIAM
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood, S.N. (2006a) *Generalized Additive Models: an introduction with R*, CRC
Wood, S.N. (2006b) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
Wood S.N., F. Scheipl and J.J. Faraway (2013) Straightforward intermediate rank tensor product smoothing in mixed models. Statistics and Computing. 23(3), 341-360. [online 2012]
### See Also
`<s>`, `<te>`, `<t2>` `[tprs](smooth.construct.tp.smooth.spec)`,`[Duchon.spline](smooth.construct.ds.smooth.spec)`, `[cubic.regression.spline](smooth.construct.cr.smooth.spec)`,`[p.spline](smooth.construct.ps.smooth.spec)`, `[mrf](smooth.construct.mrf.smooth.spec)`, `[soap](smooth.construct.so.smooth.spec)`, `[Spherical.Spline](smooth.construct.sos.smooth.spec)`, `[adaptive.smooth](smooth.construct.ad.smooth.spec)`, `[user.defined.smooth](smooth.construct)`, `<smooth.construct.re.smooth.spec>`, `<smooth.construct.gp.smooth.spec>`,`[factor.smooth.interaction](smooth.construct.fs.smooth.spec)`
### Examples
```
## see examples for gam and gamm
```
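As a supplementary sketch (not from the package examples; assumes `mgcv` and data simulated with `gamSim`), smooth classes are selected via the `bs` argument to `s`, and the main-effects-plus-interaction decomposition uses `ti`:

```r
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 400, scale = 2)
## different single penalty bases chosen via bs...
b1 <- gam(y ~ s(x0, bs = "cr") + s(x1, bs = "ps") + s(x2, bs = "ts"),
          data = dat)
## main effects plus interaction, using ti for stability...
b2 <- gam(y ~ s(x0) + s(x1) + ti(x0, x1), data = dat)
```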
r None
`smooth.construct.ds.smooth.spec` Low rank Duchon 1977 splines
---------------------------------------------------------------
### Description
Thin plate spline smoothers are a special case of the isotropic splines discussed in Duchon (1977). A subset of this more general class can be invoked by terms like `s(x,z,bs="ds",m=c(1,.5))` in a `<gam>` model formula. In the notation of Duchon (1977) m is given by `m[1]` (default value 2), while s is given by `m[2]` (default value 0).
Duchon's (1977) construction generalizes the usual thin plate spline penalty as follows. The usual TPS penalty is given by the integral of the squared Euclidean norm of a vector of mixed partial mth order derivatives of the function w.r.t. its arguments. Duchon re-expresses this penalty in the Fourier domain, and then weights the squared norm in the integral by the Euclidean norm of the Fourier frequencies, raised to the power 2s. s is a user selected constant taking integer values divided by 2. If d is the number of arguments of the smooth, then it is required that -d/2 < s < d/2. To obtain continuous functions we further require that m + s > d/2. If s=0 then the usual thin plate spline is recovered.
The construction is amenable to exactly the low rank approximation method given in Wood (2003) to thin plate splines, with similar optimality properties, so this approach to low rank smoothing is used here. For large datasets the same subsampling approach as is used in the `[tprs](smooth.construct.tp.smooth.spec)` case is employed here to reduce computational costs.
These smoothers allow the use of lower orders of derivative in the penalty than conventional thin plate splines, while still yielding continuous functions. For example, we can set m = 1 and s = d/2 - .5 in order to use first derivative penalization for any d (which has the advantage that the dimension of the null space of unpenalized functions is only d+1).
### Usage
```
## S3 method for class 'ds.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'duchon.spline'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(...,bs="ds",...)`. |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL` |
### Details
The default basis dimension for this class is `k=M+k.def` where `M` is the null space dimension (dimension of unpenalized function space) and `k.def` is 10 for dimension 1, 30 for dimension 2 and 100 for higher dimensions. This is essentially arbitrary, and should be checked, but as with all penalized regression smoothers, results are statistically insensitive to the exact choice, provided it is not so small that it forces oversmoothing (the smoother's degrees of freedom are controlled primarily by its smoothing parameter).
The constructor is not normally called directly, but is rather used internally by `<gam>`. To use for basis setup it is recommended to use `[smooth.construct2](smooth.construct)`.
For these classes the specification `object` will contain information on how to handle large datasets in their `xt` field. The default is to randomly subsample 2000 ‘knots’ from which to produce a reduced rank eigen approximation to the full basis, if the number of unique predictor variable combinations is in excess of 2000. The default can be modified via the `xt` argument to `<s>`. This is supplied as a list with elements `max.knots` and `seed` containing a number to use in place of 2000, and the random number seed to use (either can be missing). Note that the random sampling will not affect the state of R's RNG.
For these bases `knots` has two uses. Firstly, as mentioned already, for large datasets the calculation of the `tp` basis can be time-consuming. The user can retain most of the advantages of the approach by supplying a reduced set of covariate values from which to obtain the basis - typically the number of covariate values used will be substantially smaller than the number of data, and substantially larger than the basis dimension, `k`. This approach is the one taken automatically if the number of unique covariate values (combinations) exceeds `max.knots`. The second possibility is to avoid the eigen-decomposition used to find the spline basis altogether and simply use the basis implied by the chosen knots: this will happen if the number of knots supplied matches the basis dimension, `k`. For a given basis dimension the second option is faster, but gives poorer results (and the user must be quite careful in choosing knot locations).
### Value
An object of class `"duchon.spline"`. In addition to the usual elements of a smooth class documented under `<smooth.construct>`, this object will contain:
| | |
| --- | --- |
| `shift` | A record of the shift applied to each covariate in order to center it around zero and avoid any co-linearity problems that might otherwise occur in the penalty null space basis of the term. |
| `Xu` | A matrix of the unique covariate combinations for this smooth (the basis is constructed by first stripping out duplicate locations). |
| `UZ` | The matrix mapping the smoother parameters back to the parameters of a full Duchon spline. |
| `null.space.dimension` | The dimension of the space of functions that have zero wiggliness according to the wiggliness penalty for this term. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. in W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
### See Also
`[Spherical.Spline](smooth.construct.sos.smooth.spec)`
### Examples
```
require(mgcv)
eg <- gamSim(2,n=200,scale=.05)
attach(eg)
op <- par(mfrow=c(2,2),mar=c(4,4,1,1))
b0 <- gam(y~s(x,z,bs="ds",m=c(2,0),k=50),data=data) ## tps
b <- gam(y~s(x,z,bs="ds",m=c(1,.5),k=50),data=data) ## first deriv penalty
b1 <- gam(y~s(x,z,bs="ds",m=c(2,.5),k=50),data=data) ## modified 2nd deriv
persp(truth$x,truth$z,truth$f,theta=30) ## truth
vis.gam(b0,theta=30)
vis.gam(b,theta=30)
vis.gam(b1,theta=30)
detach(eg)
```
r None
`influence.gam` Extract the diagonal of the influence/hat matrix for a GAM
---------------------------------------------------------------------------
### Description
Extracts the leading diagonal of the influence matrix (hat matrix) of a fitted `gam` object.
### Usage
```
## S3 method for class 'gam'
influence(model,...)
```
### Arguments
| | |
| --- | --- |
| `model` | fitted model objects of class `gam` as produced by `gam()`. |
| `...` | un-used in this case |
### Details
Simply extracts `hat` array from fitted model. (More may follow!)
### Value
An array (see above).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<gam>`
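### Examples

A minimal sketch (assuming `mgcv` is available; data simulated with `gamSim`):

```r
library(mgcv)
set.seed(0)
dat <- gamSim(1, n = 200, dist = "normal", scale = 2)
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
h <- influence(b) ## leading diagonal of the hat matrix
sum(h) ## sums to the model's total effective degrees of freedom
```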
r None
`inSide` Are points inside boundary?
-------------------------------------
### Description
Assesses whether points are inside a boundary. The boundary must enclose the domain, but may include islands.
### Usage
```
inSide(bnd,x,y)
```
### Arguments
| | |
| --- | --- |
| `bnd` | This should have two equal length columns with names matching whatever is supplied in `x` and `y`. This may contain several sections of boundary separated by `NA`. Alternatively `bnd` may be a list, each element of which contains 2 columns named as above. See below for details. |
| `x` | x co-ordinates of points to be tested. |
| `y` | y co-ordinates of points to be tested. |
### Details
Segments of boundary are separated by `NA`s, or are in separate list elements. The boundary co-ordinates are taken to define nodes which are joined by straight line segments in order to create the boundary. Each segment is assumed to define a closed loop, and the last point in a segment will be assumed to be joined to the first. Loops must not intersect (no test is made for this).
The method used is to count how many times a line, in the y-direction from a point, crosses a boundary segment. An odd number of crossings defines an interior point. Hence in geographic applications it would be usual to have an outer boundary loop, possibly with some inner ‘islands’ completely enclosed in the outer loop.
The routine calls compiled C code and operates by an exhaustive search for each point in `x, y`.
### Value
The function returns a logical array of the same dimension as `x` and `y`. `TRUE` indicates that the corresponding `x, y` point lies inside the boundary.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### Examples
```
require(mgcv)
m <- 300;n <- 150
xm <- seq(-1,4,length=m);yn<-seq(-1,1,length=n)
x <- rep(xm,n);y<-rep(yn,rep(m,n))
er <- matrix(fs.test(x,y),m,n)
bnd <- fs.boundary()
in.bnd <- inSide(bnd,x,y)
plot(x,y,col=as.numeric(in.bnd)+1,pch=".")
lines(bnd$x,bnd$y,col=3)
points(x,y,col=as.numeric(in.bnd)+1,pch=".")
## check boundary details ...
plot(x,y,col=as.numeric(in.bnd)+1,pch=".",ylim=c(-1,0),xlim=c(3,3.5))
lines(bnd$x,bnd$y,col=3)
points(x,y,col=as.numeric(in.bnd)+1,pch=".")
```
r None
`gamm` Generalized Additive Mixed Models
-----------------------------------------
### Description
Fits the specified generalized additive mixed model (GAMM) to data, by a call to `lme` in the normal errors identity link case, or by a call to `gammPQL` (a modification of `glmmPQL` from the `MASS` library) otherwise. In the latter case estimates are only approximately MLEs. The routine is typically slower than `gam`, and not quite as numerically robust.
To use `lme4` in place of `nlme` as the underlying fitting engine, see `gamm4` from package `gamm4`.
Smooths are specified as in a call to `<gam>` as part of the fixed effects model formula, but the wiggly components of the smooth are treated as random effects. The random effects structures and correlation structures available for `lme` are used to specify other random effects and correlations.
It is assumed that the random effects and correlation structures are employed primarily to model residual correlation in the data and that the prime interest is in inference about the terms in the fixed effects model formula including the smooths. For this reason the routine calculates a posterior covariance matrix for the coefficients of all the terms in the fixed effects formula, including the smooths.
To use this function effectively it helps to be quite familiar with the use of `<gam>` and `[lme](../../nlme/html/lme)`.
### Usage
```
gamm(formula,random=NULL,correlation=NULL,family=gaussian(),
data=list(),weights=NULL,subset=NULL,na.action,knots=NULL,
control=list(niterEM=0,optimMethod="L-BFGS-B",returnObject=TRUE),
niterPQL=20,verbosePQL=TRUE,method="ML",drop.unused.levels=TRUE,
mustart=NULL, etastart=NULL,...)
```
### Arguments
| | |
| --- | --- |
| `formula` | A GAM formula (see also `<formula.gam>` and `<gam.models>`). This is like the formula for a `glm` except that smooth terms (`<s>`, `<te>` etc.) can be added to the right hand side of the formula. Note that `id`s for smooths and fixed smoothing parameters are not supported. Any offset should be specified in the formula. |
| `random` | The (optional) random effects structure as specified in a call to `[lme](../../nlme/html/lme)`: only the `list` form is allowed, to facilitate manipulation of the random effects structure within `gamm` in order to deal with smooth terms. See example below. |
| `correlation` | An optional `corStruct` object (see `[corClasses](../../nlme/html/corclasses)`) as used to define correlation structures in `[lme](../../nlme/html/lme)`. Any grouping factors in the formula for this object are assumed to be nested within any random effect grouping factors, without the need to make this explicit in the formula (this is slightly different to the behaviour of `lme`). This is a GEE approach to correlation in the generalized case. See examples below. |
| `family` | A `family` as used in a call to `[glm](../../stats/html/glm)` or `<gam>`. The default `gaussian` with identity link causes `gamm` to fit by a direct call to `[lme](../../nlme/html/lme)` provided there is no offset term, otherwise `gammPQL` is used. |
| `data` | A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from `environment(formula)`, typically the environment from which `gamm` is called. |
| `weights` | In the generalized case, weights with the same meaning as `[glm](../../stats/html/glm)` weights. An `lme` type weights argument may only be used in the identity link gaussian case, with no offset (see documentation for `lme` for details of how to use such an argument). |
| `subset` | an optional vector specifying a subset of observations to be used in the fitting process. |
| `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is set by the `na.action` setting of `options`, and is `na.fail` if that is unset. The "factory-fresh" default is `na.omit`. |
| `knots` | this is an optional list containing user specified knot values to be used for basis construction. Different terms can use different numbers of knots, unless they share a covariate. |
| `control` | A list of fit control parameters for `[lme](../../nlme/html/lme)` to replace the defaults returned by `[lmeControl](../../nlme/html/lmecontrol)`. Note the setting for the number of EM iterations used by `lme`: smooths are set up using custom `pdMat` classes, which are currently not supported by the EM iteration code. If you supply a list of control values, it is advisable to include `niterEM=0` as well, and to increase it from 0 only if you want to perturb the starting values used in model fitting (usually to worse values!). The `optimMethod` option is only used if your version of R does not have the `nlminb` optimizer function. |
| `niterPQL` | Maximum number of PQL iterations (if any). |
| `verbosePQL` | Should PQL report its progress as it goes along? |
| `method` | Which of `"ML"` or `"REML"` to use in the Gaussian additive mixed model case when `lme` is called directly. Ignored in the generalized case (or if the model has an offset), in which case `gammPQL` is used. |
| `drop.unused.levels` | by default unused levels are dropped from factors before fitting. For some smooths involving factor variables you might want to turn this off. Only do so if you know what you are doing. |
| `mustart` | starting values for mean if PQL used. |
| `etastart` | starting values for linear predictor if PQL used (overrides `mustart` if supplied). |
| `...` | further arguments to be passed on, e.g. to `lme`. |
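The `random` and `correlation` arguments described above can be used together; the following is a minimal, hedged sketch (not part of the original examples, and the simulated data are purely illustrative):

```r
library(mgcv)   ## gamm, gamSim
library(nlme)   ## corAR1

## simulate a smooth signal plus a random intercept for a factor `fac`
set.seed(1)
dat <- gamSim(1, n = 200, scale = 2)
dat$fac <- factor(rep(1:10, each = 20))
dat$y <- dat$y + rnorm(10)[dat$fac]
## `random` must use the list form; AR1 errors are assumed to apply
## within the levels of the random effect grouping factor
b <- gamm(y ~ s(x0) + s(x1), data = dat,
          random = list(fac = ~1),
          correlation = corAR1(form = ~ 1 | fac))
summary(b$gam)  ## gam-style summary of the fixed effects and smooths
summary(b$lme)  ## underlying lme fit, including the estimated AR1 phi
```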
### Details
The Bayesian model of spline smoothing introduced by Wahba (1983) and Silverman (1985) opens up the possibility of estimating the degree of smoothness of terms in a generalized additive model as variances of the wiggly components of the smooth terms treated as random effects. Several authors have recognised this (see Wang 1998; Ruppert, Wand and Carroll, 2003) and in the normal errors, identity link case estimation can be performed using general linear mixed effects modelling software such as `lme`. In the generalized case only approximate inference is so far available, for example using the Penalized Quasi-Likelihood approach of Breslow and Clayton (1993) as implemented in `glmmPQL` by Venables and Ripley (2002). One advantage of this approach is that it allows correlated errors to be dealt with via random effects or the correlation structures available in the `nlme` library (using correlation structures beyond the strictly additive case amounts to using a GEE approach to fitting).
Some details of how GAMs are represented as mixed models and estimated using `lme` or `gammPQL` in `gamm` can be found in Wood (2004, 2006a,b). In addition `gamm` obtains a posterior covariance matrix for the parameters of all the fixed effects and the smooth terms. The approach is similar to that described in Lin & Zhang (1999): the covariance matrix of the data (or pseudodata in the generalized case) implied by the weights, correlation and random effects structure is obtained, based on the estimates of the parameters of these terms, and this is used to obtain the posterior covariance matrix of the fixed and smooth effects.
The bases used to represent smooth terms are the same as those used in `<gam>`, although adaptive smoothing bases are not available. Prediction from the returned `gam` object is straightforward using `<predict.gam>`, but this will set the random effects to zero. If you want to predict with random effects set to their predicted values then you can adapt the prediction code given in the examples below.
In the event of `lme` convergence failures, consider modifying `options(mgcv.vc.logrange)`: reducing it helps to remove indefiniteness in the likelihood, if that is the problem, but too large a reduction can force over or undersmoothing. See `[notExp2](notexp2)` for more information on this option. Failing that, you can try increasing the `niterEM` option in `control`: this will perturb the starting values used in fitting, but usually to values with lower likelihood! Note that this version of `gamm` works best with R 2.2.0 or above and `nlme`, 3.1-62 and above, since these use an improved optimizer.
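A minimal sketch of adjusting this option (the value 15 is purely illustrative; see `?notExp2` for the actual default and guidance):

```r
library(mgcv)
old <- options(mgcv.vc.logrange = 15)  ## narrower parameter range can
                                       ## remove indefiniteness, but too
                                       ## small forces over/undersmoothing
## ... re-try the failing gamm() fit here ...
options(old)                           ## restore the previous setting
```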
### Value
Returns a list with two items:
| | |
| --- | --- |
| `gam` | an object of class `gam`, less information relating to GCV/UBRE model selection. At present this contains enough information to use `predict`, `summary` and `print` methods and `vis.gam`, but not to use e.g. the `anova` method function to compare models. This is based on the working model when using `gammPQL`. |
| `lme` | the fitted model object returned by `lme` or `gammPQL`. Note that the model formulae and grouping structures may appear to be rather bizarre, because of the manner in which the GAMM is split up and the calls to `lme` and `gammPQL` are constructed. |
### WARNINGS
`gamm` has a somewhat different argument list to `<gam>`: `gam` arguments such as `gamma` supplied to `gamm` will simply be ignored.
`gamm` performs poorly with binary data, since it uses PQL. It is better to use `gam` with `s(...,bs="re")` terms, or `gamm4`.
`gamm` assumes that you know what you are doing! For example, unlike `glmmPQL` from `MASS` it will return the complete `lme` object from the working model at convergence of the PQL iteration, including the 'log likelihood', even though this is not the likelihood of the fitted GAMM.
The routine will be very slow and memory intensive if correlation structures are used for very large groups of data. For example, attempting to run the spatial example in the examples section with many thousands of data points is definitely not recommended: often the correlations should only apply within clusters that can be defined by a grouping factor, and provided these clusters do not become too large, fitting is usually possible.
Models must contain at least one random effect: either a smooth with non-zero smoothing parameter, or a random effect specified in argument `random`.
`gamm` is not as numerically stable as `gam`: an `lme` call will occasionally fail. See details section for suggestions, or try the ‘gamm4’ package.
`gamm` is usually much slower than `gam`, and on some platforms you may need to increase the memory available to R in order to use it with large data sets (see `[memory.limit](../../utils/html/memory.size)`).
Note that the weights returned in the fitted GAM object are dummy, and not those used by the PQL iteration: this makes partial residual plots look odd.
Note that the `gam` object part of the returned object is not complete in the sense of having all the elements defined in `[gamObject](gamobject)` and does not inherit from `glm`: hence e.g. multi-model `anova` calls will not work. It is also based on the working model when PQL is used.
The parameterization used for the smoothing parameters in `gamm` bounds them above and below by an effective infinity and an effective zero. See `[notExp2](notexp2)` for details of how to change this.
Linked smoothing parameters and adaptive smoothing are not supported.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Breslow, N. E. and Clayton, D. G. (1993) Approximate inference in generalized linear mixed models. Journal of the American Statistical Association 88, 9-25.
Lin, X. and Zhang, D. (1999) Inference in generalized additive mixed models by using smoothing splines. JRSSB 55(2):381-400
Pinheiro J.C. and Bates, D.M. (2000) Mixed effects Models in S and S-PLUS. Springer
Ruppert, D., Wand, M.P. and Carroll, R.J. (2003) Semiparametric Regression. Cambridge
Silverman, B.W. (1985) Some aspects of the spline smoothing approach to nonparametric regression. JRSSB 47:1-52
Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
Wahba, G. (1983) Bayesian confidence intervals for the cross validated smoothing spline. JRSSB 45:133-150
Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. Journal of the American Statistical Association. 99:673-686
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood, S.N. (2006a) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
Wood S.N. (2006b) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC Press.
Wang, Y. (1998) Mixed effects smoothing spline analysis of variance. J.R. Statist. Soc. B 60, 159-174
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<magic>` for an alternative for correlated data, `<te>`, `<s>`, `<predict.gam>`, `<plot.gam>`, `<summary.gam>`, `<negbin>`, `<vis.gam>`, `[pdTens](pdtens)`, `gamm4` (<https://cran.r-project.org/package=gamm4>)
### Examples
```
library(mgcv)
## simple examples using gamm as alternative to gam
set.seed(0)
dat <- gamSim(1,n=200,scale=2)
b <- gamm(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
plot(b$gam,pages=1)
summary(b$lme) # details of underlying lme fit
summary(b$gam) # gam style summary of fitted model
anova(b$gam)
gam.check(b$gam) # simple checking plots
b <- gamm(y~te(x0,x1)+s(x2)+s(x3),data=dat)
op <- par(mfrow=c(2,2))
plot(b$gam)
par(op)
rm(dat)
## Add a factor to the linear predictor, to be modelled as random
dat <- gamSim(6,n=200,scale=.2,dist="poisson")
b2 <- gamm(y~s(x0)+s(x1)+s(x2),family=poisson,
data=dat,random=list(fac=~1))
plot(b2$gam,pages=1)
fac <- dat$fac
rm(dat)
vis.gam(b2$gam)
## In the generalized case the 'gam' object is based on the working
## model used in the PQL fitting. Residuals for this are not
## that useful on their own as the following illustrates...
gam.check(b2$gam)
## But more useful residuals are easy to produce on a model
## by model basis. For example...
fv <- exp(fitted(b2$lme)) ## predicted values (including re)
rsd <- (b2$gam$y - fv)/sqrt(fv) ## Pearson residuals (Poisson case)
op <- par(mfrow=c(1,2))
qqnorm(rsd);plot(fv^.5,rsd)
par(op)
## now an example with autocorrelated errors....
n <- 200;sig <- 2
x <- 0:(n-1)/(n-1)
f <- 0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
e <- rnorm(n,0,sig)
for (i in 2:n) e[i] <- 0.6*e[i-1] + e[i]
y <- f + e
op <- par(mfrow=c(2,2))
## Fit model with AR1 residuals
b <- gamm(y~s(x,k=20),correlation=corAR1())
plot(b$gam);lines(x,f-mean(f),col=2)
## Raw residuals still show correlation, of course...
acf(residuals(b$gam),main="raw residual ACF")
## But standardized are now fine...
acf(residuals(b$lme,type="normalized"),main="standardized residual ACF")
## compare with model without AR component...
b <- gam(y~s(x,k=20))
plot(b);lines(x,f-mean(f),col=2)
## more complicated autocorrelation example - AR errors
## only within groups defined by `fac'
e <- rnorm(n,0,sig)
for (i in 2:n) e[i] <- 0.6*e[i-1]*(fac[i-1]==fac[i]) + e[i]
y <- f + e
b <- gamm(y~s(x,k=20),correlation=corAR1(form=~1|fac))
plot(b$gam);lines(x,f-mean(f),col=2)
par(op)
## more complex situation with nested random effects and within
## group correlation
set.seed(0)
n.g <- 10
n<-n.g*10*4
## simulate smooth part...
dat <- gamSim(1,n=n,scale=2)
f <- dat$f
## simulate nested random effects....
fa <- as.factor(rep(1:10,rep(4*n.g,10)))
ra <- rep(rnorm(10),rep(4*n.g,10))
fb <- as.factor(rep(rep(1:4,rep(n.g,4)),10))
rb <- rep(rnorm(4),rep(n.g,4))
for (i in 1:9) rb <- c(rb,rep(rnorm(4),rep(n.g,4)))
## simulate auto-correlated errors within groups
e<-array(0,0)
for (i in 1:40) {
eg <- rnorm(n.g, 0, sig)
for (j in 2:n.g) eg[j] <- eg[j-1]*0.6+ eg[j]
e<-c(e,eg)
}
dat$y <- f + ra + rb + e
dat$fa <- fa;dat$fb <- fb
## fit model ....
b <- gamm(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+
s(x3,bs="cr"),data=dat,random=list(fa=~1,fb=~1),
correlation=corAR1())
plot(b$gam,pages=1)
summary(b$gam)
vis.gam(b$gam)
## Prediction from gam object, optionally adding
## in random effects.
## Extract random effects and make names more convenient...
refa <- ranef(b$lme,level=5)
rownames(refa) <- substr(rownames(refa),start=9,stop=20)
refb <- ranef(b$lme,level=6)
rownames(refb) <- substr(rownames(refb),start=9,stop=20)
## make a prediction, with random effects zero...
p0 <- predict(b$gam,data.frame(x0=.3,x1=.6,x2=.98,x3=.77))
## add in effect for fa = "2" and fb="2/4"...
p <- p0 + refa["2",1] + refb["2/4",1]
## and a "spatial" example...
library(nlme);set.seed(1);n <- 100
dat <- gamSim(2,n=n,scale=0) ## standard example
attach(dat)
old.par<-par(mfrow=c(2,2))
contour(truth$x,truth$z,truth$f) ## true function
f <- data$f ## true expected response
## Now simulate correlated errors...
cstr <- corGaus(.1,form = ~x+z)
cstr <- Initialize(cstr,data.frame(x=data$x,z=data$z))
V <- corMatrix(cstr) ## correlation matrix for data
Cv <- chol(V)
e <- t(Cv) %*% rnorm(n)*0.05 # correlated errors
## next add correlated simulated errors to expected values
data$y <- f + e ## ... to produce response
b<- gamm(y~s(x,z,k=50),correlation=corGaus(.1,form=~x+z),
data=data)
plot(b$gam) # gamm fit accounting for correlation
# overfits when correlation ignored.....
b1 <- gamm(y~s(x,z,k=50),data=data);plot(b1$gam)
b2 <- gam(y~s(x,z,k=50),data=data);plot(b2)
par(old.par)
```
`t2` Define alternative tensor product smooths in GAM formulae
---------------------------------------------------------------
### Description
Alternative to `<te>` for defining tensor product smooths in a `<gam>` formula. Results in a construction in which the penalties are non-overlapping multiples of identity matrices (with some rows and columns zeroed). The construction, which is due to Fabian Scheipl (`mgcv` implementation, 2010), is analogous to Smoothing Spline ANOVA (Gu, 2002), but using low rank penalized regression spline marginals. The main advantage of this construction is that it is useable with `gamm4` from package `gamm4`.
### Usage
```
t2(..., k=NA,bs="cr",m=NA,d=NA,by=NA,xt=NULL,
id=NULL,sp=NULL,full=FALSE,ord=NULL,pc=NULL)
```
### Arguments
| | |
| --- | --- |
| `...` | a list of variables that are the covariates that this smooth is a function of. Transformations whose form depends on the values of the data are best avoided here: e.g. `t2(log(x),z)` is fine, but `t2(I(x/sd(x)),z)` is not (see `<predict.gam>`). |
| `k` | the dimension(s) of the bases used to represent the smooth term. If not supplied then set to `5^d`. If supplied as a single number then this basis dimension is used for each basis. If supplied as an array then the elements are the dimensions of the component (marginal) bases of the tensor product. See `<choose.k>` for further information. |
| `bs` | array (or single character string) specifying the type for each marginal basis. `"cr"` for cubic regression spline; `"cs"` for cubic regression spline with shrinkage; `"cc"` for periodic/cyclic cubic regression spline; `"tp"` for thin plate regression spline; `"ts"` for t.p.r.s. with extra shrinkage. See `<smooth.terms>` for details and full list. User defined bases can also be used here (see `<smooth.construct>` for an example). If only one basis code is given then this is used for all bases. |
| `m` | The order of the spline and its penalty (for smooth classes that use this) for each term. If a single number is given then it is used for all terms. A vector can be used to supply a different `m` for each margin. For marginals that take vector `m` (e.g. `[p.spline](smooth.construct.ps.smooth.spec)` and `[Duchon.spline](smooth.construct.ds.smooth.spec)`), then a list can be supplied, with a vector element for each margin. `NA` autoinitializes. `m` is ignored by some bases (e.g. `"cr"`). |
| `d` | array of marginal basis dimensions. For example if you want a smooth for 3 covariates made up of a tensor product of a 2 dimensional t.p.r.s. basis and a 1-dimensional basis, then set `d=c(2,1)`. Incompatibilities between built in basis types and dimension will be resolved by resetting the basis type. |
| `by` | a numeric or factor variable of the same dimension as each covariate. In the numeric vector case the elements multiply the smooth evaluated at the corresponding covariate values (a ‘varying coefficient model’ results). In the factor case causes a replicate of the smooth to be produced for each factor level. See `<gam.models>` for further details. May also be a matrix if covariates are matrices: in this case implements linear functional of a smooth (see `<gam.models>` and `<linear.functional.terms>` for details). |
| `xt` | Either a single object, providing any extra information to be passed to each marginal basis constructor, or a list of such objects, one for each marginal basis. |
| `id` | A label or integer identifying this term in order to link its smoothing parameters to others of the same type. If two or more smooth terms have the same `id` then they will have the same smoothing parameters, and, by default, the same bases (first occurrence defines basis type, but data from all terms used in basis construction). |
| `sp` | any supplied smoothing parameters for this term. Must be an array of the same length as the number of penalties for this smooth. Positive or zero elements are taken as fixed smoothing parameters. Negative elements signal auto-initialization. Overrides values supplied in the `sp` argument to `<gam>`. Ignored by `gamm`. |
| `full` | If `TRUE` then there is a separate penalty for each combination of null space column and range space. This gives strict invariance. If `FALSE` each combination of null space and range space generates one penalty, but the columns of each null space basis are treated as one group. The latter is more parsimonious, but does mean that invariance is only achieved by an arbitrary rescaling of null space basis vectors. |
| `ord` | an array giving the orders of terms to retain. Here order means number of marginal range spaces used in the construction of the component. `NULL` to retain everything. |
| `pc` | If not `NULL`, signals a point constraint: the smooth should pass through zero at the point given here (as a vector or list with names corresponding to the smooth names). Never ignored if supplied. See `<identifiability>`. |
### Details
Smooths of several covariates can be constructed from tensor products of the bases used to represent smooths of one (or sometimes more) of the covariates. To do this ‘marginal’ bases are produced with associated model matrices and penalty matrices. These are reparameterized so that the penalty is zero everywhere, except for some elements on the leading diagonal, which all have the same non-zero value. This reparameterization results in an unpenalized and a penalized subset of parameters, for each marginal basis (see e.g. appendix of Wood, 2004, for details).
The re-parameterized marginal bases are then combined to produce a basis for a single function of all the covariates (dimension given by the product of the dimensions of the marginal bases). In this set up there are multiple penalty matrices — all zero, but for a mixture of a constant and zeros on the leading diagonal. No two penalties have a non-zero entry in the same place.
Essentially the basis for the tensor product can be thought of as being constructed from a set of products of the penalized (range) or unpenalized (null) space bases of the marginal smooths (see Gu, 2002, section 2.4). To construct one of the set, choose either the null space or the range space from each marginal, and from these bases construct a product basis. The result is subject to a ridge penalty (unless it happens to be a product entirely of marginal null spaces). The whole basis for the smooth is constructed from all the different product bases that can be constructed in this way. The separately penalized components of the smooth basis each have an interpretation in terms of the ANOVA decomposition of the term. See `<pen.edf>` for some further information.
Note that there are two ways to construct the product. When `full=FALSE` then the null space bases are treated as a whole in each product, but when `full=TRUE` each null space column is treated as a separate null space. The latter results in more penalties, but is the strict analog of the SS-ANOVA approach.
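As a hedged illustration of the `full` argument (simulated data, basis dimensions chosen arbitrarily): with `full=TRUE` each null space column generates its own penalties, so the model carries strictly more smoothing parameters than the `full=FALSE` default.

```r
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 300, scale = 2)

## full=FALSE (default): null space columns grouped -> fewer penalties
b0 <- gam(y ~ t2(x0, x1, bs = "cr", k = c(6, 6)), data = dat)
## full=TRUE: separate penalty per null-space-column/range-space product
b1 <- gam(y ~ t2(x0, x1, bs = "cr", k = c(6, 6), full = TRUE), data = dat)

length(b0$sp); length(b1$sp)  ## more smoothing parameters with full=TRUE
pen.edf(b0)                   ## effective degrees of freedom per penalty
```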
Tensor product smooths are especially useful for representing functions of covariates measured in different units, although they are typically not quite as nicely behaved as t.p.r.s. smooths for well scaled covariates.
Note also that GAMs constructed from lower rank tensor product smooths are nested within GAMs constructed from higher rank tensor product smooths if the same marginal bases are used in both cases (the marginal smooths themselves are just special cases of tensor product smooths.)
Note that tensor product smooths should not be centred (have identifiability constraints imposed) if any marginals would not need centering. The constructor for tensor product smooths ensures that this happens.
The function does not evaluate the variable arguments.
### Value
A class `t2.smooth.spec` object defining a tensor product smooth to be turned into a basis and penalties by the `smooth.construct.tensor.smooth.spec` function.
The returned object contains the following items:
| | |
| --- | --- |
| `margin` | A list of `smooth.spec` objects of the type returned by `<s>`, defining the basis from which the tensor product smooth is constructed. |
| `term` | An array of text strings giving the names of the covariates that the term is a function of. |
| `by` | is the name of any `by` variable as text (`"NA"` for none). |
| `fx` | logical array with element for each penalty of the term (tensor product smooths have multiple penalties). `TRUE` if the penalty is to be ignored, `FALSE`, otherwise. |
| `label` | A suitable text label for this smooth term. |
| `dim` | The dimension of the smoother - i.e. the number of covariates that it is a function of. |
| `mp` | `TRUE` if multiple penalties are to be used (the default). |
| `np` | `TRUE` to re-parameterize 1-D marginal smooths in terms of function values (the default). |
| `id` | the `id` argument supplied to `te`. |
| `sp` | the `sp` argument supplied to `te`. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) and Fabian Scheipl
### References
Wood S.N., F. Scheipl and J.J. Faraway (2013, online Feb 2012) Straightforward intermediate rank tensor product smoothing in mixed models. Statistical Computing. 23(3):341-360
Gu, C. (2002) Smoothing Spline ANOVA, Springer.
Alternative approaches to functional ANOVA decompositions, \*not\* implemented by t2 terms, are discussed in:
Belitz and Lang (2008) Simultaneous selection of variables and smoothing parameters in structured additive regression models. Computational Statistics & Data Analysis, 53(1):61-81
Lee, D-J and M. Durban (2011) P-spline ANOVA type interaction models for spatio-temporal smoothing. Statistical Modelling, 11:49-69
Wood, S.N. (2006) Low-Rank Scale-Invariant Tensor Product Smooths for Generalized Additive Mixed Models. Biometrics 62(4): 1025-1036.
### See Also
`<te>`, `<s>`, `<gam>`, `<gamm>`
### Examples
```
# following shows how tensor product deals nicely with
# badly scaled covariates (range of x 5% of range of z )
require(mgcv)
test1<-function(x,z,sx=0.3,sz=0.4)
{ x<-x*20
(pi**sx*sz)*(1.2*exp(-(x-0.2)^2/sx^2-(z-0.3)^2/sz^2)+
0.8*exp(-(x-0.7)^2/sx^2-(z-0.8)^2/sz^2))
}
n<-500
old.par<-par(mfrow=c(2,2))
x<-runif(n)/20;z<-runif(n);
xs<-seq(0,1,length=30)/20;zs<-seq(0,1,length=30)
pr<-data.frame(x=rep(xs,30),z=rep(zs,rep(30,30)))
truth<-matrix(test1(pr$x,pr$z),30,30)
f <- test1(x,z)
y <- f + rnorm(n)*0.2
b1<-gam(y~s(x,z))
persp(xs,zs,truth);title("truth")
vis.gam(b1);title("t.p.r.s")
b2<-gam(y~t2(x,z))
vis.gam(b2);title("tensor product")
b3<-gam(y~t2(x,z,bs=c("tp","tp")))
vis.gam(b3);title("tensor product")
par(old.par)
test2<-function(u,v,w,sv=0.3,sw=0.4)
{ ((pi**sv*sw)*(1.2*exp(-(v-0.2)^2/sv^2-(w-0.3)^2/sw^2)+
0.8*exp(-(v-0.7)^2/sv^2-(w-0.8)^2/sw^2)))*(u-0.5)^2*20
}
n <- 500
v <- runif(n);w<-runif(n);u<-runif(n)
f <- test2(u,v,w)
y <- f + rnorm(n)*0.2
## tensor product of 2D Duchon spline and 1D cr spline
m <- list(c(1,.5),0)
b <- gam(y~t2(v,w,u,k=c(30,5),d=c(2,1),bs=c("ds","cr"),m=m))
## look at the edf per penalty. "rr" denotes interaction term
## (range space range space). "rn" is interaction of null space
## for u with range space for v,w...
pen.edf(b)
## plot results...
op <- par(mfrow=c(2,2))
vis.gam(b,cond=list(u=0),color="heat",zlim=c(-0.2,3.5))
vis.gam(b,cond=list(u=.33),color="heat",zlim=c(-0.2,3.5))
vis.gam(b,cond=list(u=.67),color="heat",zlim=c(-0.2,3.5))
vis.gam(b,cond=list(u=1),color="heat",zlim=c(-0.2,3.5))
par(op)
b <- gam(y~t2(v,w,u,k=c(25,5),d=c(2,1),bs=c("tp","cr"),full=TRUE),
method="ML")
## more penalties now. numbers in labels like "r1" indicate which
## basis function of a null space is involved in the term.
pen.edf(b)
```
`smooth.construct.so.smooth.spec` Soap film smoother constructor
-----------------------------------------------------------------
### Description
Sets up basis functions and wiggliness penalties for soap film smoothers (Wood, Bravington and Hedley, 2008). Soap film smoothers are based on the idea of constructing a 2-D smooth as a film of soap connecting a smoothly varying closed boundary. Unless smoothing very heavily, the film is distorted towards the data. The smooths are designed not to smooth across boundary features (peninsulas, for example).
The `so` version sets up the full smooth. The `sf` version sets up just the boundary interpolating soap film, while the `sw` version sets up the wiggly component of a soap film (zero on the boundary). The latter two are useful for forming tensor products with soap films, and can be used with `<gamm>` and `gamm4`. To use these to simply set up a basis, then call via the wrapper `[smooth.construct2](smooth.construct)` or `[smoothCon](smoothcon)`.
### Usage
```
## S3 method for class 'so.smooth.spec'
smooth.construct(object,data,knots)
## S3 method for class 'sf.smooth.spec'
smooth.construct(object,data,knots)
## S3 method for class 'sw.smooth.spec'
smooth.construct(object,data,knots)
```
### Arguments
| | |
| --- | --- |
| `object` | A smooth specification object as produced by a `s(...,bs="so",xt=list(bnd=bnd,...))` term in a `gam` formula. Note that the `xt` argument to `s` \*must\* be supplied, and should be a list, containing at least a boundary specification list (see details). `xt` may also contain various options controlling the boundary smooth (see details), and PDE solution grid. The dimension of the bases for boundary loops is specified via the `k` argument of `s`, either as a single number to be used for each boundary loop, or as a vector of different basis dimensions for the various boundary loops. |
| `data` | A list or data frame containing the arguments of the smooth. |
| `knots` | list or data frame with two named columns specifying the knot locations within the boundary. The column names should match the names of the arguments of the smooth. The number of knots defines the \*interior\* basis dimension (i.e. it is \*not\* supplied via argument `k` of `s`). |
### Details
For soap film smooths the following \*must\* be supplied:
* k the basis dimension for each boundary loop smooth.
* xt$bnd the boundary specification for the smooth.
* knots the locations of the interior knots for the smooth.
When used in a GAM then `k` and `xt` are supplied via `s` while `knots` are supplied in the `knots` argument of `<gam>`.
The `bnd` element of the `xt` list is a list of lists (or data frames), specifying the loops that define the boundary. Each boundary loop list must contain 2 columns giving the co-ordinates of points defining a boundary loop (when joined sequentially by line segments). Loops should not intersect (not checked). A point is deemed to be in the region of interest if it is interior to an odd number of boundary loops. Each boundary loop list may also contain a column `f` giving known boundary conditions on a loop.
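A hedged sketch of constructing a boundary loop and interior knots by hand (a circular domain is assumed here purely for illustration; column names must match the covariates of the smooth):

```r
library(mgcv)
## single boundary loop: a closed circle, with column names matching the
## covariates that will appear in s(v, w, bs = "so", ...)
theta <- seq(0, 2 * pi, length.out = 100)
bnd <- list(data.frame(v = cos(theta), w = sin(theta)))
## interior knots: a regular grid clipped to the inside of the boundary
kn <- expand.grid(v = seq(-0.8, 0.8, by = 0.4),
                  w = seq(-0.8, 0.8, by = 0.4))
v <- kn$v; w <- kn$w   ## inSide matches the variable names to bnd names
kn <- kn[inSide(bnd, x = v, y = w), ]
## usage: gam(y ~ s(v, w, bs = "so", xt = list(bnd = bnd)), knots = kn)
```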
The `bndSpec` element of `xt`, if non-NULL, should contain
* bs the type of cyclic smoothing basis to use: one of `"cc"` and `"cp"`. If not `"cc"` then a cyclic p-spline is used, and argument `m` must be supplied.
* knot.space set to "even" to get even knot spacing with the "cc" basis.
* m 1 or 2 element array specifying order of "cp" basis and penalty.
Currently the code will not deal with more than one level of nesting of loops, or with separate loops without an outer enclosing loop: if there are known boundary conditions (identifiability constraints get awkward).
Note that the function `[locator](../../graphics/html/locator)` provides a simple means for defining boundaries graphically, using something like `bnd <-as.data.frame(locator(type="l"))`, after producing a plot of the domain of interest (right click to stop). If the real boundary is very complicated, it is probably better to use a simpler smooth boundary enclosing the true boundary, which represents the major boundary features that you don't want to smooth across, but doesn't follow every tiny detail.
Model set up, and prediction, involves evaluating basis functions which are defined as the solution to PDEs. The PDEs are solved numerically on a grid using sparse matrix methods, with bilinear interpolation used to obtain values at any location within the smoothing domain. The dimension of the PDE solution grid can be controlled via element `nmax` (default 200) of the list supplied as argument `xt` of `s` in a `gam` formula: it gives the number of cells to use on the longest grid side.
A little theory: the soap film smooth *f(x,y)* is defined as the solution of
*f\_xx+f\_yy = g*
subject to the condition that *f=s*, on the boundary curve, where *s* is a smooth function (usually a cyclic penalized regression spline). The function *g* is defined as the solution of
*g\_xx+g\_yy=0*
where *g=0* on the boundary curve and *g(x\_k,y\_k)=c\_k* at the ‘knots’ of the surface; the *c\_k* are model coefficients.
In the simplest case, estimation of the coefficients of *f* (boundary coefficients plus *c\_k*'s) is by minimization of
*||z-f||^2 + l\_s J\_s(s) + l\_f J\_f(f)*
where *J\_s* is usually some cubic spline type wiggliness penalty on the boundary smooth and *J\_f* is the integral of *(f\_xx+f\_yy)^2* over the interior of the boundary. Both penalties can be expressed as quadratic forms in the model coefficients. The *l*'s are smoothing parameters, selectable by GCV, REML, AIC, etc. *z* represents noisy observations of *f*.
### Value
A list with all the elements of `object` plus
| | |
| --- | --- |
| `sd` | A list defining the PDE solution grid and domain boundary, and including the sparse LU factorization of the PDE coefficient matrix. |
| `X` | The model matrix: this will have an `"offset"` attribute, if there are any known boundary conditions. |
| `S` | List of smoothing penalty matrices (in smallest non-zero submatrix form). |
| `irng` | A vector of scaling factors that have been applied to the model matrix, to ensure nice conditioning. |
In addition there are all the elements usually added by `smooth.construct` methods.
### WARNINGS
Soap film smooths are quite specialized, and require more setup than most smoothers (e.g. you have to supply the boundary and the interior knots, plus the boundary smooth basis dimension(s)). It is worth looking at the reference.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., M.V. Bravington and S.L. Hedley (2008) "Soap film smoothing", J.R.Statist.Soc.B 70(5), 931-955.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`[Predict.matrix.soap.film](predict.matrix.soap.film)`
### Examples
```
require(mgcv)
##########################
## simple test function...
##########################
fsb <- list(fs.boundary())
nmax <- 100
## create some internal knots...
knots <- data.frame(v=rep(seq(-.5,3,by=.5),4),
w=rep(c(-.6,-.3,.3,.6),rep(8,4)))
## Simulate some fitting data, inside boundary...
set.seed(0)
n<-600
v <- runif(n)*5-1;w<-runif(n)*2-1
y <- fs.test(v,w,b=1)
names(fsb[[1]]) <- c("v","w")
ind <- inSide(fsb,x=v,y=w) ## remove outsiders
y <- y + rnorm(n)*.3 ## add noise
y <- y[ind];v <- v[ind]; w <- w[ind]
n <- length(y)
par(mfrow=c(3,2))
## plot boundary with knot and data locations
plot(fsb[[1]]$v,fsb[[1]]$w,type="l");points(knots,pch=20,col=2)
points(v,w,pch=".");
## Now fit the soap film smoother. 'k' is dimension of boundary smooth.
## boundary supplied in 'xt', and knots in 'knots'...
nmax <- 100 ## reduced from default for speed.
b <- gam(y~s(v,w,k=30,bs="so",xt=list(bnd=fsb,nmax=nmax)),knots=knots)
plot(b) ## default plot
plot(b,scheme=1)
plot(b,scheme=2)
plot(b,scheme=3)
vis.gam(b,plot.type="contour")
################################
# Fit same model in two parts...
################################
par(mfrow=c(2,2))
vis.gam(b,plot.type="contour")
b1 <- gam(y~s(v,w,k=30,bs="sf",xt=list(bnd=fsb,nmax=nmax))+
s(v,w,k=30,bs="sw",xt=list(bnd=fsb,nmax=nmax)) ,knots=knots)
vis.gam(b,plot.type="contour")
plot(b1)
##################################################
## Now an example with known boundary condition...
##################################################
## Evaluate known boundary condition at boundary nodes...
fsb[[1]]$f <- fs.test(fsb[[1]]$v,fsb[[1]]$w,b=1,exclude=FALSE)
## Now fit the smooth...
bk <- gam(y~s(v,w,bs="so",xt=list(bnd=fsb,nmax=nmax)),knots=knots)
plot(bk) ## default plot
##########################################
## tensor product example...
##########################################
set.seed(9)
n <- 10000
v <- runif(n)*5-1;w<-runif(n)*2-1
t <- runif(n)
y <- fs.test(v,w,b=1)
y <- y + 4.2
y <- y^(.5+t)
fsb <- list(fs.boundary())
names(fsb[[1]]) <- c("v","w")
ind <- inSide(fsb,x=v,y=w) ## remove outsiders
y <- y[ind];v <- v[ind]; w <- w[ind]; t <- t[ind]
n <- length(y)
y <- y + rnorm(n)*.05 ## add noise
knots <- data.frame(v=rep(seq(-.5,3,by=.5),4),
w=rep(c(-.6,-.3,.3,.6),rep(8,4)))
## notice NULL element in 'xt' list - to indicate no xt object for "cr" basis...
bk <- gam(y~ te(v,w,t,bs=c("sf","cr"),k=c(25,4),d=c(2,1),
xt=list(list(bnd=fsb,nmax=nmax),NULL))+
te(v,w,t,bs=c("sw","cr"),k=c(25,4),d=c(2,1),
xt=list(list(bnd=fsb,nmax=nmax),NULL)),knots=knots)
par(mfrow=c(3,2))
m<-100;n<-50
xm <- seq(-1,3.5,length=m);yn<-seq(-1,1,length=n)
xx <- rep(xm,n);yy<-rep(yn,rep(m,n))
tru <- matrix(fs.test(xx,yy),m,n)+4.2 ## truth
image(xm,yn,tru^.5,col=heat.colors(100),xlab="v",ylab="w",
main="truth")
lines(fsb[[1]]$v,fsb[[1]]$w,lwd=3)
contour(xm,yn,tru^.5,add=TRUE)
vis.gam(bk,view=c("v","w"),cond=list(t=0),plot.type="contour")
image(xm,yn,tru,col=heat.colors(100),xlab="v",ylab="w",
main="truth")
lines(fsb[[1]]$v,fsb[[1]]$w,lwd=3)
contour(xm,yn,tru,add=TRUE)
vis.gam(bk,view=c("v","w"),cond=list(t=.5),plot.type="contour")
image(xm,yn,tru^1.5,col=heat.colors(100),xlab="v",ylab="w",
main="truth")
lines(fsb[[1]]$v,fsb[[1]]$w,lwd=3)
contour(xm,yn,tru^1.5,add=TRUE)
vis.gam(bk,view=c("v","w"),cond=list(t=1),plot.type="contour")
#############################
# nested boundary example...
#############################
bnd <- list(list(x=0,y=0),list(x=0,y=0))
theta <- seq(0,2*pi,length=100)
bnd[[1]]$x <- sin(theta);bnd[[1]]$y <- cos(theta)
bnd[[2]]$x <- .3 + .3*sin(theta);
bnd[[2]]$y <- .3 + .3*cos(theta)
plot(bnd[[1]]$x,bnd[[1]]$y,type="l")
lines(bnd[[2]]$x,bnd[[2]]$y)
## setup knots
k <- 8
xm <- seq(-1,1,length=k);ym <- seq(-1,1,length=k)
x <- rep(xm,k);y <- rep(ym,rep(k,k))
ind <- inSide(bnd,x,y)
knots <- data.frame(x=x[ind],y=y[ind])
points(knots$x,knots$y)
## a test function
f1 <- function(x,y) {
exp(-(x-.3)^2-(y-.3)^2)
}
## plot the test function within the domain
par(mfrow=c(2,3))
m<-100;n<-100
xm <- seq(-1,1,length=m);yn<-seq(-1,1,length=n)
x <- rep(xm,n);y<-rep(yn,rep(m,n))
ff <- f1(x,y)
ind <- inSide(bnd,x,y)
ff[!ind] <- NA
image(xm,yn,matrix(ff,m,n),xlab="x",ylab="y")
contour(xm,yn,matrix(ff,m,n),add=TRUE)
lines(bnd[[1]]$x,bnd[[1]]$y,lwd=2);lines(bnd[[2]]$x,bnd[[2]]$y,lwd=2)
## Simulate data by noisy sampling from test function...
set.seed(1)
x <- runif(300)*2-1;y <- runif(300)*2-1
ind <- inSide(bnd,x,y)
x <- x[ind];y <- y[ind]
n <- length(x)
z <- f1(x,y) + rnorm(n)*.1
## Fit a soap film smooth to the noisy data
nmax <- 60
b <- gam(z~s(x,y,k=c(30,15),bs="so",xt=list(bnd=bnd,nmax=nmax)),
knots=knots,method="REML")
plot(b) ## default plot
vis.gam(b,plot.type="contour") ## prettier version
## trying out separated fits....
ba <- gam(z~s(x,y,k=c(30,15),bs="sf",xt=list(bnd=bnd,nmax=nmax))+
s(x,y,k=c(30,15),bs="sw",xt=list(bnd=bnd,nmax=nmax)),
knots=knots,method="REML")
plot(ba)
vis.gam(ba,plot.type="contour")
```
`Predict.matrix` Prediction methods for smooth terms in a GAM
--------------------------------------------------------------
### Description
Takes `smooth` objects produced by `smooth.construct` methods and obtains the matrix mapping the parameters associated with such a smooth to the predicted values of the smooth at a set of new covariate values.
In practice this method is often called via the wrapper function `[PredictMat](smoothcon)`.
### Usage
```
Predict.matrix(object,data)
Predict.matrix2(object,data)
```
### Arguments
| | |
| --- | --- |
| `object` | is a smooth object produced by a `smooth.construct` method function. The object contains all the information required to specify the basis for a term of its class, and this information is used by the appropriate `Predict.matrix` function to produce a prediction matrix for new covariate values. Further details are given in `<smooth.construct>`. |
| `data` | A data frame containing the values of the (named) covariates at which the smooth term is to be evaluated. Exact requirements are as for `<smooth.construct>` and `smooth.construct2`. |
### Details
Smooth terms in a GAM formula are turned into smooth specification objects of class `xx.smooth.spec` during processing of the formula. Each of these objects is converted to a smooth object using an appropriate `smooth.construct` function. The `Predict.matrix` functions are used to obtain the matrix that will map the parameters associated with a smooth term to the predicted values for the term at new covariate values.
Note that new smooth classes can be added by writing a new `smooth.construct` method function and a corresponding `[Predict.matrix](predict.matrix)` method function: see the example code provided for `<smooth.construct>` for details.
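As a deliberately crude illustration of that method pair (not code from the package: the class name `"po"` and this ridge-penalized polynomial basis are invented here purely to show the mechanics; see `<smooth.construct>` for the package's own worked example):

```r
library(mgcv)
## invented "po" class: polynomial basis with a simple ridge penalty
smooth.construct.po.smooth.spec <- function(object, data, knots) {
  x <- data[[object$term]]
  k <- if (object$bs.dim < 0) 4 else object$bs.dim
  object$bs.dim <- k
  object$X <- outer(x, 1:k, "^")   ## model matrix: x, x^2, ..., x^k
  object$S <- list(diag(k))        ## ridge penalty on the coefficients
  object$rank <- k                 ## penalty rank
  object$null.space.dim <- 0       ## ridge penalty has no null space
  object$df <- k
  class(object) <- "po.smooth"
  object
}
## matching Predict.matrix method: basis evaluated at new covariate values
Predict.matrix.po.smooth <- function(object, data) {
  outer(data[[object$term]], 1:object$bs.dim, "^")
}
## the new class can then be used directly in a gam formula...
set.seed(3); x <- runif(200); y <- sin(2 * pi * x) + rnorm(200) * 0.2
b <- gam(y ~ s(x, bs = "po", k = 4))
```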
### Value
A matrix which will map the parameters associated with the smooth to the vector of values of the smooth evaluated at the covariate values given in `object`. If the smooth class is one which generates offsets the corresponding offset is returned as attribute `"offset"` of the matrix.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### See Also
`<gam>`,`<gamm>`, `<smooth.construct>`, `[PredictMat](smoothcon)`
### Examples
```
# See smooth.construct examples
```
`coxph` Additive Cox Proportional Hazard Model
-----------------------------------------------
### Description
The `cox.ph` family implements the Cox Proportional Hazards model with Peto's correction for ties, optional stratification, and estimation by penalized partial likelihood maximization, for use with `<gam>`. In the model formula, event time is the response. Under stratification the response has two columns: time and a numeric index for stratum. The `weights` vector provides the censoring information (0 for censoring, 1 for event). `cox.ph` deals with the case in which each subject has one event/censoring time and one row of covariate values. When each subject has several time dependent covariates see `[cox.pht](coxpht)`.
See example below for conditional logistic regression.
### Usage
```
cox.ph(link="identity")
```
### Arguments
| | |
| --- | --- |
| `link` | currently (and possibly for ever) only `"identity"` supported. |
### Details
Used with `<gam>` to fit Cox Proportional Hazards models to survival data. The model formula will have event/censoring times on the left hand side and the linear predictor specification on the right hand side. Censoring information is provided by the `weights` argument to `gam`, with 1 indicating an event and 0 indicating censoring.
Stratification is possible, allowing for different baseline hazards in different strata. In that case the response has two columns: the first is event/censoring time and the second is a numeric stratum index. See below for an example.
Prediction from the fitted model object (using the `predict` method) with `type="response"` will predict on the survivor function scale. This requires evaluation times to be provided as well as covariates (see example). Also see example code below for extracting the cumulative baseline hazard/survival directly. The `fitted.values` stored in the model object are survival function estimates for each subject at their event/censoring time.
`deviance`, `martingale`, `score`, or `schoenfeld` residuals can be extracted. See Klein and Moeschberger (2003) for descriptions. The score residuals are returned as a matrix of the same dimension as the model matrix, with a `"terms"` attribute, which is a list indicating which model matrix columns belong to which model terms. The score residuals are scaled. For parametric terms this is by the standard deviation of the associated model coefficient. For smooth terms the sub-matrix of score residuals for the term is post-multiplied by the transposed Cholesky factor of the covariance matrix for the term's coefficients. This is a transformation that makes the coefficients approximately independent, as required to make plots of the score residuals against event time interpretable for checking the proportional hazards assumption (see Klein and Moeschberger, 2003, p376). Penalization causes drift in the score residuals, which is also removed, to allow the residuals to be approximately interpreted as unpenalized score residuals. Schoenfeld and score residuals are computed by strata. See the examples for simple PH assumption checks by plotting score residuals, and Klein and Moeschberger (2003, section 11.4) for details. Note that high correlation between terms can undermine these checks.
Estimation of model coefficients is by maximising the log-partial likelihood penalized by the smoothing penalties. See e.g. Hastie and Tibshirani, 1990, section 8.3 for the partial likelihood used (with Peto's approximation for ties), but note that optimization of the partial likelihood does not follow Hastie and Tibshirani. See Klein and Moeschberger (2003) for estimation of residuals, the cumulative baseline hazard, survival function and associated standard errors (the survival standard error expression has a typo).
The percentage deviance explained reported for Cox PH models is based on the sum of squares of the deviance residuals, as the model deviance, and the sum of squares of the deviance residuals when the covariate effects are set to zero, as the null deviance. The same baseline hazard estimate is used for both.
This family deals efficiently with the case in which each subject has one event/censoring time and one row of covariate values. For studies in which there are multiple time varying covariate measures for each subject then the equivalent Poisson model should be fitted to suitable pseudodata using `bam(...,discrete=TRUE)`. See `[cox.pht](coxpht)`.
### Value
An object inheriting from class `general.family`.
### References
Hastie and Tibshirani (1990) Generalized Additive Models, Chapman and Hall.
Klein, J.P and Moeschberger, M.L. (2003) Survival Analysis: Techniques for Censored and Truncated Data (2nd ed.) Springer.
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`[cox.pht](coxpht)`
### Examples
```
library(mgcv)
library(survival) ## for data
col1 <- colon[colon$etype==1,] ## concentrate on single event
col1$differ <- as.factor(col1$differ)
col1$sex <- as.factor(col1$sex)
b <- gam(time~s(age,by=sex)+sex+s(nodes)+perfor+rx+obstruct+adhere,
family=cox.ph(),data=col1,weights=status)
summary(b)
plot(b,pages=1,all.terms=TRUE) ## plot effects
plot(b$linear.predictors,residuals(b))
## plot survival function for patient j...
np <- 300;j <- 6
newd <- data.frame(time=seq(0,3000,length=np))
dname <- names(col1)
for (n in dname) newd[[n]] <- rep(col1[[n]][j],np)
newd$time <- seq(0,3000,length=np)
fv <- predict(b,newdata=newd,type="response",se=TRUE)
plot(newd$time,fv$fit,type="l",ylim=c(0,1),xlab="time",ylab="survival")
lines(newd$time,fv$fit+2*fv$se.fit,col=2)
lines(newd$time,fv$fit-2*fv$se.fit,col=2)
## crude plot of baseline survival...
plot(b$family$data$tr,exp(-b$family$data$h),type="l",ylim=c(0,1),
xlab="time",ylab="survival")
lines(b$family$data$tr,exp(-b$family$data$h + 2*b$family$data$q^.5),col=2)
lines(b$family$data$tr,exp(-b$family$data$h - 2*b$family$data$q^.5),col=2)
lines(b$family$data$tr,exp(-b$family$data$km),lty=2) ## Kaplan Meier
## Checking the proportional hazards assumption via scaled score plots as
## in Klein and Moeschberger Section 11.4 p374-376...
ph.resid <- function(b,stratum=1) {
## convenience function to plot scaled score residuals against time,
## by term. Reference lines at 5% exceedance prob for Brownian bridge
## (see KS test statistic distribution).
rs <- residuals(b,"score");term <- attr(rs,"term")
if (is.matrix(b$y)) {
ii <- b$y[,2] == stratum;b$y <- b$y[ii,1];rs <- rs[ii,]
}
oy <- order(b$y)
for (i in 1:length(term)) {
ii <- term[[i]]; m <- length(ii)
plot(b$y[oy],rs[oy,ii[1]],ylim=c(-3,3),type="l",ylab="score residuals",
xlab="time",main=names(term)[i])
if (m>1) for (k in 2:m) lines(b$y[oy],rs[oy,ii[k]],col=k);
abline(-1.3581,0,lty=2);abline(1.3581,0,lty=2)
}
}
par(mfrow=c(2,2))
ph.resid(b)
## stratification example, with 2 randomly allocated strata
## so that results should be similar to previous....
col1$strata <- sample(1:2,nrow(col1),replace=TRUE)
bs <- gam(cbind(time,strata)~s(age,by=sex)+sex+s(nodes)+perfor+rx+obstruct
+adhere,family=cox.ph(),data=col1,weights=status)
plot(bs,pages=1,all.terms=TRUE) ## plot effects
## baseline survival plots by strata...
for (i in 1:2) { ## loop over strata
## create index selecting elements of stored hazard info for stratum i...
ind <- which(bs$family$data$tr.strat == i)
if (i==1) plot(bs$family$data$tr[ind],exp(-bs$family$data$h[ind]),type="l",
ylim=c(0,1),xlab="time",ylab="survival",lwd=2,col=i) else
lines(bs$family$data$tr[ind],exp(-bs$family$data$h[ind]),lwd=2,col=i)
lines(bs$family$data$tr[ind],exp(-bs$family$data$h[ind] +
2*bs$family$data$q[ind]^.5),lty=2,col=i) ## upper ci
lines(bs$family$data$tr[ind],exp(-bs$family$data$h[ind] -
2*bs$family$data$q[ind]^.5),lty=2,col=i) ## lower ci
lines(bs$family$data$tr[ind],exp(-bs$family$data$km[ind]),col=i) ## KM
}
## Simple simulated known truth example...
ph.weibull.sim <- function(eta,gamma=1,h0=.01,t1=100) {
lambda <- h0*exp(eta)
n <- length(eta)
U <- runif(n)
t <- (-log(U)/lambda)^(1/gamma)
d <- as.numeric(t <= t1)
t[!d] <- t1
list(t=t,d=d)
}
n <- 500;set.seed(2)
x0 <- runif(n, 0, 1);x1 <- runif(n, 0, 1)
x2 <- runif(n, 0, 1);x3 <- runif(n, 0, 1)
f0 <- function(x) 2 * sin(pi * x)
f1 <- function(x) exp(2 * x)
f2 <- function(x) 0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
f3 <- function(x) 0*x
f <- f0(x0) + f1(x1) + f2(x2)
g <- (f-mean(f))/5
surv <- ph.weibull.sim(g)
surv$x0 <- x0;surv$x1 <- x1;surv$x2 <- x2;surv$x3 <- x3
b <- gam(t~s(x0)+s(x1)+s(x2,k=15)+s(x3),family=cox.ph,weights=d,data=surv)
plot(b,pages=1)
## Another one, including a violation of proportional hazards for
## effect of x2...
set.seed(2)
h <- exp((f0(x0)+f1(x1)+f2(x2)-10)/5)
t <- rexp(n,h);d <- as.numeric(t<20)
## first with no violation of PH in the simulation...
b <- gam(t~s(x0)+s(x1)+s(x2)+s(x3),family=cox.ph,weights=d)
plot(b,pages=1)
ph.resid(b) ## fine
## Now violate PH for x2 in the simulation...
ii <- t>1.5
h1 <- exp((f0(x0)+f1(x1)+3*f2(x2)-10)/5)
t[ii] <- 1.5 + rexp(sum(ii),h1[ii]);d <- as.numeric(t<20)
b <- gam(t~s(x0)+s(x1)+s(x2)+s(x3),family=cox.ph,weights=d)
plot(b,pages=1)
ph.resid(b) ## The checking plot picks up the problem in s(x2)
## conditional logistic regression models are often estimated using the
## cox proportional hazards partial likelihood with a strata for each
## case-control group. A dummy vector of times is created (all equal).
## The following compares to 'clogit' for a simple case. Note that
## the gam log likelihood is not exact if there is more than one case
## per stratum, corresponding to clogit's approximate method.
library(survival);library(mgcv)
infert$dumt <- rep(1,nrow(infert))
mg <- gam(cbind(dumt,stratum) ~ spontaneous + induced, data=infert,
family=cox.ph,weights=case)
ms <- clogit(case ~ spontaneous + induced + strata(stratum), data=infert,
method="approximate")
summary(mg)$p.table[1:2,]; ms
```
`exclude.too.far` Exclude prediction grid points too far from data
-------------------------------------------------------------------
### Description
Takes two arrays defining the nodes of a grid over a 2D covariate space and two arrays defining the location of data in that space, and returns a logical vector with elements `TRUE` if the corresponding node is too far from data and `FALSE` otherwise. Basically a service routine for `vis.gam` and `plot.gam`.
### Usage
```
exclude.too.far(g1,g2,d1,d2,dist)
```
### Arguments
| | |
| --- | --- |
| `g1` | co-ordinates of grid relative to first axis. |
| `g2` | co-ordinates of grid relative to second axis. |
| `d1` | co-ordinates of data relative to first axis. |
| `d2` | co-ordinates of data relative to second axis. |
| `dist` | how far away counts as too far. Grid and data are first scaled so that the grid lies exactly in the unit square, and `dist` is a distance within this unit square. |
### Details
Linear scalings of the axes are first determined so that the grid defined by the nodes in `g1` and `g2` lies exactly in the unit square (i.e. on [0,1] by [0,1]). These scalings are applied to `g1`, `g2`, `d1` and `d2`. The minimum Euclidean distance from each node to a datum is then determined and if it is greater than `dist` the corresponding entry in the returned array is set to `TRUE` (otherwise to `FALSE`). The distance calculations are performed in compiled code for speed without storage overheads.
### Value
A logical array with `TRUE` indicating a node in the grid defined by `g1`, `g2` that is ‘too far’ from any datum.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<vis.gam>`
### Examples
```
library(mgcv)
x<-rnorm(100);y<-rnorm(100) # some "data"
n<-40 # generate a grid....
mx<-seq(min(x),max(x),length=n)
my<-seq(min(y),max(y),length=n)
gx<-rep(mx,n);gy<-rep(my,rep(n,n))
tf<-exclude.too.far(gx,gy,x,y,0.1)
plot(gx[!tf],gy[!tf],pch=".");points(x,y,col=2)
```
`plot.gam` Default GAM plotting
--------------------------------
### Description
Takes a fitted `gam` object produced by `gam()` and plots the component smooth functions that make it up, on the scale of the linear predictor. Optionally produces term plots for parametric model components as well.
### Usage
```
## S3 method for class 'gam'
plot(x,residuals=FALSE,rug=NULL,se=TRUE,pages=0,select=NULL,scale=-1,
n=100,n2=40,n3=3,pers=FALSE,theta=30,phi=30,jit=FALSE,xlab=NULL,
ylab=NULL,main=NULL,ylim=NULL,xlim=NULL,too.far=0.1,
all.terms=FALSE,shade=FALSE,shade.col="gray80",shift=0,
trans=I,seWithMean=FALSE,unconditional=FALSE,by.resids=FALSE,
scheme=0,...)
```
### Arguments
| | |
| --- | --- |
| `x` | a fitted `gam` object as produced by `gam()`. |
| `residuals` | If `TRUE` then partial residuals are added to plots of 1-D smooths. If `FALSE` then no residuals are added. If this is an array of the correct length then it is used as the array of residuals to be used for producing partial residuals. If `TRUE` then the residuals are the working residuals from the IRLS iteration weighted by the (square root) IRLS weights, in order that they have constant variance if the model is correct. Partial residuals for a smooth term are the residuals that would be obtained by dropping the term concerned from the model, while leaving all other estimates fixed (i.e. the estimates for the term plus the residuals). |
| `rug` | When `TRUE` the covariate to which the plot applies is displayed as a rug plot at the foot of each plot of a 1-d smooth, and the locations of the covariates are plotted as points on the contour plot representing a 2-d smooth. The default of `NULL` sets `rug` to `TRUE` when the dataset size is <= 10000 and `FALSE` otherwise. |
| `se` | when TRUE (default) upper and lower lines are added to the 1-d plots at 2 standard errors above and below the estimate of the smooth being plotted while for 2-d plots, surfaces at +1 and -1 standard errors are contoured and overlayed on the contour plot for the estimate. If a positive number is supplied then this number is multiplied by the standard errors when calculating standard error curves or surfaces. See also `shade`, below. |
| `pages` | (default 0) the number of pages over which to spread the output. For example, if `pages=1` then all terms will be plotted on one page with the layout performed automatically. Set to 0 to have the routine leave all graphics settings as they are. |
| `select` | Allows the plot for a single model term to be selected for printing. e.g. if you just want the plot for the second smooth term set `select=2`. |
| `scale` | set to -1 (default) to have the same y-axis scale for each plot, and to 0 for a different y axis for each plot. Ignored if `ylim` supplied. |
| `n` | number of points used for each 1-d plot - for a nice smooth plot this needs to be several times the estimated degrees of freedom for the smooth. Default value 100. |
| `n2` | Square root of number of points used to grid estimates of 2-d functions for contouring. |
| `n3` | Square root of number of panels to use when displaying 3 or 4 dimensional functions. |
| `pers` | Set to `TRUE` if you want perspective plots for 2-d terms. |
| `theta` | One of the perspective plot angles. |
| `phi` | The other perspective plot angle. |
| `jit` | Set to TRUE if you want rug plots for 1-d terms to be jittered. |
| `xlab` | If supplied then this will be used as the x label for all plots. |
| `ylab` | If supplied then this will be used as the y label for all plots. |
| `main` | Used as title (or z axis label) for plots if supplied. |
| `ylim` | If supplied then this pair of numbers are used as the y limits for each plot. |
| `xlim` | If supplied then this pair of numbers are used as the x limits for each plot. |
| `too.far` | If greater than 0 then this is used to determine when a location is too far from data to be plotted when plotting 2-D smooths. This is useful since smooths tend to go wild away from data. The data are scaled into the unit square before deciding what to exclude, and `too.far` is a distance within the unit square. Setting to zero can make plotting faster for large datasets, but care then needed with interpretation of plots. |
| `all.terms` | if set to `TRUE` then the partial effects of parametric model components are also plotted, via a call to `[termplot](../../stats/html/termplot)`. Only terms of order 1 can be plotted in this way. |
| `shade` | Set to `TRUE` to produce shaded regions as confidence bands for smooths (not available for parametric terms, which are plotted using `termplot`). |
| `shade.col` | define the color used for shading confidence bands. |
| `shift` | constant to add to each smooth (on the scale of the linear predictor) before plotting. Can be useful for some diagnostics, or with `trans`. |
| `trans` | monotonic function to apply to each smooth (after any shift), before plotting. Monotonicity is not checked, but default plot limits assume it. `shift` and `trans` are occasionally useful as a means for getting plots on the response scale, when the model consists only of a single smooth. |
| `seWithMean` | if `TRUE` the component smooths are shown with confidence intervals that include the uncertainty about the overall mean. If `FALSE` then the uncertainty relates purely to the centred smooth itself. If `seWithMean=2` then the intervals include the uncertainty in the mean of the fixed effects (but not in the mean of any uncentred smooths or random effects). Marra and Wood (2012) suggest that `TRUE` results in better coverage performance, and this is also suggested by simulation. |
| `unconditional` | if `TRUE` then the smoothing parameter uncertainty corrected covariance matrix is used to compute uncertainty bands, if available. Otherwise the bands treat the smoothing parameters as fixed. |
| `by.resids` | Should partial residuals be plotted for terms with `by` variables? Usually the answer is no, they would be meaningless. |
| `scheme` | Integer or integer vector selecting a plotting scheme for each plot. See details. |
| `...` | other graphics parameters to pass on to plotting commands. See details for smooth plot specific options. |
### Details
Produces default plot showing the smooth components of a fitted GAM, and optionally parametric terms as well, when these can be handled by `[termplot](../../stats/html/termplot)`.
For smooth terms `plot.gam` actually calls plot method functions depending on the class of the smooth. Currently `<random.effects>`, Markov random fields (`[mrf](smooth.construct.mrf.smooth.spec)`), `[Spherical.Spline](smooth.construct.sos.smooth.spec)` and `[factor.smooth.interaction](smooth.construct.fs.smooth.spec)` terms have special methods (documented in their help files), the rest use the defaults described below.
For plots of 1-d smooths, the x axis of each plot is labelled with the covariate name, while the y axis is labelled `s(cov,edf)` where `cov` is the covariate name, and `edf` the estimated (or user defined for regression splines) degrees of freedom of the smooth. `scheme == 0` produces a smooth curve with dashed curves indicating 2 standard error bounds. `scheme == 1` illustrates the error bounds using a shaded region.
For `scheme==0`, contour plots are produced for 2-d smooths with the x axis labelled with the first covariate name and the y axis with the second covariate name. The main title of the plot is something like `s(var1,var2,edf)`, indicating the variables of which the term is a function, and the estimated degrees of freedom for the term. When `se=TRUE`, estimator variability is shown by overlaying contour plots at plus and minus 1 s.e. relative to the main estimate. If `se` is a positive number then contour plots are at plus or minus `se` multiplied by the s.e. Contour levels are chosen to try and ensure reasonable separation of the contours of the different plots, but this is not always easy to achieve. Note that these plots can not be modified to the same extent as the other plot types.
For 2-d smooths `scheme==1` produces a perspective plot, while `scheme==2` produces a heatmap, with overlaid contours and `scheme==3` a greyscale heatmap (`contour.col` controls the contour colour).
Smooths of 3 and 4 variables are displayed as tiled heatmaps with overlaid contours. In the 3 variable case the third variable is discretized and a contour plot of the first 2 variables is produced for each discrete value. The panels in the lower and upper rows are labelled with the corresponding third variable value. The lowest value is bottom left, and highest at top right. For 4 variables, two of the variables are coarsely discretized and a square array of image plots is produced for each combination of the discrete values. The first two arguments of the smooth are the ones used for the image/contour plots, unless a tensor product term has 2D marginals, in which case the first 2D marginal is image/contour plotted. `n3` controls the number of panels. See also `<vis.gam>`.
Fine control of plots for parametric terms can be obtained by calling `[termplot](../../stats/html/termplot)` directly, taking care to use its `terms` argument.
Note that, if `seWithMean=TRUE`, the confidence bands include the uncertainty about the overall mean. In other words although each smooth is shown centred, the confidence bands are obtained as if every other term in the model was constrained to have average 0, (average taken over the covariate values), except for the smooth concerned. This seems to correspond more closely to how most users interpret componentwise intervals in practice, and also results in intervals with close to nominal (frequentist) coverage probabilities by an extension of Nychka's (1988) results presented in Marra and Wood (2012). There are two possible variants of this approach. In the default variant the extra uncertainty is in the mean of all other terms in the model (fixed and random, including uncentred smooths). Alternatively, if `seWithMean=2` then only the uncertainty in parametric fixed effects is included in the extra uncertainty (this latter option actually tends to lead to wider intervals when the model contains random effects).
Several smooth plot methods using `[image](../../graphics/html/image)` will accept an `hcolors` argument, which can be anything documented in `[heat.colors](../../grdevices/html/palettes)` (in which case something like `hcolors=rainbow(50)` is appropriate), or the `[grey](../../grdevices/html/gray)` function (in which case something like `hcolors=grey(0:50/50)` is needed). Another option is `contour.col` which will set the contour colour for some plots. These options are useful for producing grey scale pictures instead of colour.
Sometimes you may want a small change to a default plot, and the arguments to `plot.gam` just won't let you do it. In this case, the quickest option is sometimes to clone the `smooth.construct` and `Predict.matrix` methods for the smooth concerned, modifying only the returned smoother class (e.g. to `foo.smooth`). Then copy the plot method function for the original class (e.g. `mgcv:::plot.mgcv.smooth`), modify the source code to plot exactly as you want and rename the plot method function (e.g. `plot.foo.smooth`). You can then use the cloned smooth in models (e.g. `s(x,bs="foo")`), and it will automatically plot using the modified plotting function.
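That cloning recipe can be sketched as follows. This is a hedged outline only: the class name `"foo"` is invented, the clone delegates to the thin plate regression spline constructor, and the copied plot method (`mgcv:::plot.mgcv.smooth`, as named above) is the function you would then edit to taste:

```r
library(mgcv)
## construct "foo" smooths by delegating to the thin plate spline constructor,
## then prepending the cloned class so plot dispatch finds our method first
smooth.construct.foo.smooth.spec <- function(object, data, knots) {
  class(object) <- "tp.smooth.spec"
  sm <- smooth.construct(object, data, knots)
  class(sm) <- c("foo.smooth", class(sm)) ## "tp.smooth" kept for Predict.matrix
  sm
}
## start from a verbatim copy of the default plot method, then modify it...
plot.foo.smooth <- mgcv:::plot.mgcv.smooth
## models fitted with bs="foo" now plot via plot.foo.smooth
set.seed(4); x <- runif(100); y <- x^2 + rnorm(100) * 0.1
b <- gam(y ~ s(x, bs = "foo"))
```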
### Value
The function's main purpose is its side effect of generating plots. It also silently returns a list of the data used to produce the plots, which can be used to generate customized plots.
### WARNING
Note that the behaviour of this function is not identical to `plot.gam()` in S-PLUS.
Plotting can be slow for models fitted to large datasets. Set `rug=FALSE` to improve matters. If it's still too slow set `too.far=0`, but then take care not to overinterpret smooths away from supporting data.
Plots of 2-D smooths with standard error contours shown cannot easily be customized.
The function cannot deal with smooths of more than 2 variables!
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
Henric Nilsson [[email protected]](mailto:[email protected]) donated the code for the `shade` option.
The design is inspired by the S function of the same name described in Chambers and Hastie (1993) (but is not a clone).
### References
Chambers and Hastie (1993) Statistical Models in S. Chapman & Hall.
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics.
Nychka (1988) Bayesian Confidence Intervals for Smoothing Splines. Journal of the American Statistical Association 83:1134-1143.
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### See Also
`<gam>`, `<predict.gam>`, `<vis.gam>`
### Examples
```
library(mgcv)
set.seed(0)
## fake some data...
f1 <- function(x) {exp(2 * x)}
f2 <- function(x) {
0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
}
f3 <- function(x) {x*0}
n<-200
sig2<-4
x0 <- rep(1:4,50)
x1 <- runif(n, 0, 1)
x2 <- runif(n, 0, 1)
x3 <- runif(n, 0, 1)
e <- rnorm(n, 0, sqrt(sig2))
y <- 2*x0 + f1(x1) + f2(x2) + f3(x3) + e
x0 <- factor(x0)
## fit and plot...
b<-gam(y~x0+s(x1)+s(x2)+s(x3))
plot(b,pages=1,residuals=TRUE,all.terms=TRUE,shade=TRUE,shade.col=2)
plot(b,pages=1,seWithMean=TRUE) ## better coverage intervals
## just parametric term alone...
termplot(b,terms="x0",se=TRUE)
## more use of color...
op <- par(mfrow=c(2,2),bg="blue")
x <- 0:1000/1000
for (i in 1:3) {
plot(b,select=i,rug=FALSE,col="green",
col.axis="white",col.lab="white",all.terms=TRUE)
for (j in 1:2) axis(j,col="white",labels=FALSE)
box(col="white")
eval(parse(text=paste("fx <- f",i,"(x)",sep="")))
fx <- fx-mean(fx)
lines(x,fx,col=2) ## overlay `truth' in red
}
par(op)
## example with 2-d plots, and use of schemes...
b1 <- gam(y~x0+s(x1,x2)+s(x3))
op <- par(mfrow=c(2,2))
plot(b1,all.terms=TRUE)
par(op)
op <- par(mfrow=c(2,2))
plot(b1,all.terms=TRUE,scheme=1)
par(op)
op <- par(mfrow=c(2,2))
plot(b1,all.terms=TRUE,scheme=c(2,1))
par(op)
## 3 and 4 D smooths can also be plotted
dat <- gamSim(1,n=400)
b1 <- gam(y~te(x0,x1,x2,d=c(1,2),k=c(5,15))+s(x3),data=dat)
## Now plot. Use cex.lab and cex.axis to control axis label size,
## n3 to control number of panels, n2 to control panel grid size,
## scheme=1 to get greyscale...
plot(b1,pages=1)
```
r None
`te` Define tensor product smooths or tensor product interactions in GAM formulae
----------------------------------------------------------------------------------
### Description
Functions used for the definition of tensor product smooths and interactions within `gam` model formulae. `te` produces a full tensor product smooth, while `ti` produces a tensor product interaction, appropriate when the main effects (and any lower interactions) are also present.
The functions do not evaluate the smooth - they exist purely to help set up a model using tensor product based smooths. Designed to construct tensor products from any marginal smooths with a basis-penalty representation (with the restriction that each marginal smooth must have only one penalty).
### Usage
```
te(..., k=NA,bs="cr",m=NA,d=NA,by=NA,fx=FALSE,
np=TRUE,xt=NULL,id=NULL,sp=NULL,pc=NULL)
ti(..., k=NA,bs="cr",m=NA,d=NA,by=NA,fx=FALSE,
np=TRUE,xt=NULL,id=NULL,sp=NULL,mc=NULL,pc=NULL)
```
### Arguments
| | |
| --- | --- |
| `...` | a list of variables that are the covariates that this smooth is a function of. Transformations whose form depends on the values of the data are best avoided here: e.g. `te(log(x),z)` is fine, but `te(I(x/sd(x)),z)` is not (see `<predict.gam>`). |
| `k` | the dimension(s) of the bases used to represent the smooth term. If not supplied then set to `5^d`. If supplied as a single number then this basis dimension is used for each basis. If supplied as an array then the elements are the dimensions of the component (marginal) bases of the tensor product. See `<choose.k>` for further information. |
| `bs` | array (or single character string) specifying the type for each marginal basis. `"cr"` for cubic regression spline; `"cs"` for cubic regression spline with shrinkage; `"cc"` for periodic/cyclic cubic regression spline; `"tp"` for thin plate regression spline; `"ts"` for t.p.r.s. with extra shrinkage. See `<smooth.terms>` for details and full list. User defined bases can also be used here (see `<smooth.construct>` for an example). If only one basis code is given then this is used for all bases. |
| `m` | The order of the spline and its penalty (for smooth classes that use this) for each term. If a single number is given then it is used for all terms. A vector can be used to supply a different `m` for each margin. For marginals that take vector `m` (e.g. `[p.spline](smooth.construct.ps.smooth.spec)` and `[Duchon.spline](smooth.construct.ds.smooth.spec)`), then a list can be supplied, with a vector element for each margin. `NA` autoinitializes. `m` is ignored by some bases (e.g. `"cr"`). |
| `d` | array of marginal basis dimensions. For example if you want a smooth for 3 covariates made up of a tensor product of a 2 dimensional t.p.r.s. basis and a 1-dimensional basis, then set `d=c(2,1)`. Incompatibilities between built in basis types and dimension will be resolved by resetting the basis type. |
| `by` | a numeric or factor variable of the same dimension as each covariate. In the numeric vector case the elements multiply the smooth evaluated at the corresponding covariate values (a ‘varying coefficient model’ results). In the factor case causes a replicate of the smooth to be produced for each factor level. See `<gam.models>` for further details. May also be a matrix if covariates are matrices: in this case implements linear functional of a smooth (see `<gam.models>` and `<linear.functional.terms>` for details). |
| `fx` | indicates whether the term is a fixed d.f. regression spline (`TRUE`) or a penalized regression spline (`FALSE`). |
| `np` | `TRUE` to use the ‘normal parameterization’ for a tensor product smooth. This represents any 1-d marginal smooths via parameters that are function values at ‘knots’, spread evenly through the data. The parameterization makes the penalties easily interpretable, however it can reduce numerical stability in some cases. |
| `xt` | Either a single object, providing any extra information to be passed to each marginal basis constructor, or a list of such objects, one for each marginal basis. |
| `id` | A label or integer identifying this term in order to link its smoothing parameters to others of the same type. If two or more smooth terms have the same `id` then they will have the same smoothing parameters, and, by default, the same bases (first occurrence defines basis type, but data from all terms used in basis construction). |
| `sp` | any supplied smoothing parameters for this term. Must be an array of the same length as the number of penalties for this smooth. Positive or zero elements are taken as fixed smoothing parameters. Negative elements signal auto-initialization. Over-rides values supplied in `sp` argument to `<gam>`. Ignored by `gamm`. |
| `mc` | For `ti` smooths you can specify which marginals should have centering constraints applied, by supplying 0/1 or `FALSE`/`TRUE` values for each marginal in this vector. By default all marginals are constrained, which is what is appropriate for, e.g., functional ANOVA models. Note that `'ti'` only applies constraints to the marginals, so if you turn off all marginal constraints the term will have no identifiability constraints. Only use this if you really understand how marginal constraints work. |
| `pc` | If not `NULL`, signals a point constraint: the smooth should pass through zero at the point given here (as a vector or list with names corresponding to the smooth names). Never ignored if supplied. See `<identifiability>`. |
### Details
Smooths of several covariates can be constructed from tensor products of the bases used to represent smooths of one (or sometimes more) of the covariates. To do this ‘marginal’ bases are produced with associated model matrices and penalty matrices, and these are then combined in the manner described in `<tensor.prod.model.matrix>` and `[tensor.prod.penalties](tensor.prod.model.matrix)`, to produce a single model matrix for the smooth, but multiple penalties (one for each marginal basis). The basis dimension of the whole smooth is the product of the basis dimensions of the marginal smooths.
An option for operating with a single penalty (The Kronecker product of the marginal penalties) is provided, but it is rarely of practical use, and is deprecated: the penalty is typically so rank deficient that even the smoothest resulting model will have rather high estimated degrees of freedom.
Tensor product smooths are especially useful for representing functions of covariates measured in different units, although they are typically not quite as nicely behaved as t.p.r.s. smooths for well scaled covariates.
It is sometimes useful to investigate smooth models with a main-effects + interactions structure, for example
*f\_1(x) + f\_2(z) + f\_3(x,z)*
This functional ANOVA decomposition is supported by `ti` terms, which produce tensor product interactions from which the main effects have been excluded, under the assumption that they will be included separately. For example the formula `~ ti(x) + ti(z) + ti(x,z)` would produce the above main effects + interaction structure. This is much better than attempting the same thing with `s` or `te` terms representing the interactions (although mgcv does not forbid it). Technically `ti` terms are very simple: they simply construct tensor product bases from marginal smooths to which identifiability constraints (usually sum-to-zero) have already been applied: correct nesting is then automatic (as with all interactions in a GLM framework). See Wood (2017, section 5.6.3).
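A short sketch of the decomposition above (the data-generating function is illustrative):

```r
library(mgcv)
set.seed(3)
n <- 300
x <- runif(n); z <- runif(n)
y <- x^2 + sin(2 * pi * z) + x * z + rnorm(n) * 0.2
## main effects + interaction, each separately penalized...
b <- gam(y ~ ti(x) + ti(z) + ti(x, z))
summary(b) ## the interaction term's EDF/p-value indicate whether f3(x,z) is needed
```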
The ‘normal parameterization’ (`np=TRUE`) re-parameterizes the marginal smooths of a tensor product smooth so that the parameters are function values at a set of points spread evenly through the range of values of the covariate of the smooth. This means that the penalty of the tensor product associated with any particular covariate direction can be interpreted as the penalty of the appropriate marginal smooth applied in that direction and averaged over the smooth. Currently this is only done for marginals of a single variable. This parameterization can reduce numerical stability when used with marginal smooths other than `"cc"`, `"cr"` and `"cs"`: if this causes problems, set `np=FALSE`.
Note that tensor product smooths should not be centred (have identifiability constraints imposed) if any marginals would not need centering. The constructor for tensor product smooths ensures that this happens.
The function does not evaluate the variable arguments.
### Value
A class `tensor.smooth.spec` object defining a tensor product smooth to be turned into a basis and penalties by the `smooth.construct.tensor.smooth.spec` function.
The returned object contains the following items:
| | |
| --- | --- |
| `margin` | A list of `smooth.spec` objects of the type returned by `<s>`, defining the basis from which the tensor product smooth is constructed. |
| `term` | An array of text strings giving the names of the covariates that the term is a function of. |
| `by` | is the name of any `by` variable as text (`"NA"` for none). |
| `fx` | logical array with element for each penalty of the term (tensor product smooths have multiple penalties). `TRUE` if the penalty is to be ignored, `FALSE`, otherwise. |
| `label` | A suitable text label for this smooth term. |
| `dim` | The dimension of the smoother - i.e. the number of covariates that it is a function of. |
| `mp` | `TRUE` if multiple penalties are to be used (the default). |
| `np` | `TRUE` to re-parameterize 1-D marginal smooths in terms of function values (default). |
| `id` | the `id` argument supplied to `te`. |
| `sp` | the `sp` argument supplied to `te`. |
| `inter` | `TRUE` if the term was generated by `ti`, `FALSE` otherwise. |
| `mc` | the argument `mc` supplied to `ti`. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<s>`,`<gam>`,`<gamm>`, `<smooth.construct.tensor.smooth.spec>`
### Examples
```
# following shows how tensor product deals nicely with
# badly scaled covariates (range of x 5% of range of z )
require(mgcv)
test1 <- function(x,z,sx=0.3,sz=0.4) {
x <- x*20
(pi**sx*sz)*(1.2*exp(-(x-0.2)^2/sx^2-(z-0.3)^2/sz^2)+
0.8*exp(-(x-0.7)^2/sx^2-(z-0.8)^2/sz^2))
}
n <- 500
old.par <- par(mfrow=c(2,2))
x <- runif(n)/20;z <- runif(n);
xs <- seq(0,1,length=30)/20;zs <- seq(0,1,length=30)
pr <- data.frame(x=rep(xs,30),z=rep(zs,rep(30,30)))
truth <- matrix(test1(pr$x,pr$z),30,30)
f <- test1(x,z)
y <- f + rnorm(n)*0.2
b1 <- gam(y~s(x,z))
persp(xs,zs,truth);title("truth")
vis.gam(b1);title("t.p.r.s")
b2 <- gam(y~te(x,z))
vis.gam(b2);title("tensor product")
b3 <- gam(y~ ti(x) + ti(z) + ti(x,z))
vis.gam(b3);title("tensor anova")
## now illustrate partial ANOVA decomp...
vis.gam(b3);title("full anova")
b4 <- gam(y~ ti(x) + ti(x,z,mc=c(0,1))) ## note z constrained!
vis.gam(b4);title("partial anova")
plot(b4)
par(old.par)
## now with a multivariate marginal....
test2<-function(u,v,w,sv=0.3,sw=0.4)
{ ((pi**sv*sw)*(1.2*exp(-(v-0.2)^2/sv^2-(w-0.3)^2/sw^2)+
0.8*exp(-(v-0.7)^2/sv^2-(w-0.8)^2/sw^2)))*(u-0.5)^2*20
}
n <- 500
v <- runif(n);w<-runif(n);u<-runif(n)
f <- test2(u,v,w)
y <- f + rnorm(n)*0.2
# tensor product of 2D Duchon spline and 1D cr spline
m <- list(c(1,.5),rep(0,0)) ## example of list form of m
b <- gam(y~te(v,w,u,k=c(30,5),d=c(2,1),bs=c("ds","cr"),m=m))
op <- par(mfrow=c(2,2))
vis.gam(b,cond=list(u=0),color="heat",zlim=c(-0.2,3.5))
vis.gam(b,cond=list(u=.33),color="heat",zlim=c(-0.2,3.5))
vis.gam(b,cond=list(u=.67),color="heat",zlim=c(-0.2,3.5))
vis.gam(b,cond=list(u=1),color="heat",zlim=c(-0.2,3.5))
par(op)
```
r None
`step.gam` Alternatives to step.gam
------------------------------------
### Description
There is no `step.gam` in package `mgcv`. The `mgcv` default for model selection is to use either prediction error criteria such as GCV, GACV, Mallows' Cp/AIC/UBRE or the likelihood based methods of REML or ML. Since the smoothness estimation part of model selection is done in this way it is logically most consistent to perform the rest of model selection in the same way. i.e. to decide which terms to include or omit by looking at changes in GCV, AIC, REML etc.
To facilitate fully automatic model selection the package implements two smooth modification techniques which can be used to allow smooths to be shrunk to zero as part of smoothness selection.
Shrinkage smoothers
are smoothers in which a small multiple of the identity matrix is added to the smoothing penalty, so that strong enough penalization will shrink all the coefficients of the smooth to zero. Such smoothers can effectively be penalized out of the model altogether, as part of smoothing parameter estimation. Two classes of these shrinkage smoothers are implemented: `"cs"` and `"ts"`, based on cubic regression spline and thin plate regression spline smoothers (see `<s>`).
Null space penalization
An alternative is to construct an extra penalty for each smooth which penalizes the space of functions of zero wiggliness according to its existing penalties. If all the smoothing parameters for such a term tend to infinity then the term is penalized to zero, and is effectively dropped from the model. The advantage of this approach is that it can be implemented automatically for any smooth. The `select` argument to `<gam>` causes this latter approach to be used. Unpenalized terms (e.g. `s(x,fx=TRUE)`) remain unpenalized.
REML and ML smoothness selection are equivalent under this approach, and simulation evidence suggests that they tend to perform a little better than prediction error criteria, for model selection.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Marra, G. and S.N. Wood (2011) Practical variable selection for generalized additive models. Computational Statistics and Data Analysis 55, 2372-2387
### See Also
`<gam.selection>`
### Examples
```
## an example of GCV based model selection as
## an alternative to stepwise selection, using
## shrinkage smoothers...
library(mgcv)
set.seed(0);n <- 400
dat <- gamSim(1,n=n,scale=2)
dat$x4 <- runif(n, 0, 1)
dat$x5 <- runif(n, 0, 1)
attach(dat)
## Note the increased gamma parameter below to favour
## slightly smoother models...
b<-gam(y~s(x0,bs="ts")+s(x1,bs="ts")+s(x2,bs="ts")+
s(x3,bs="ts")+s(x4,bs="ts")+s(x5,bs="ts"),gamma=1.4)
summary(b)
plot(b,pages=1)
## Same again using REML/ML
b<-gam(y~s(x0,bs="ts")+s(x1,bs="ts")+s(x2,bs="ts")+
s(x3,bs="ts")+s(x4,bs="ts")+s(x5,bs="ts"),method="REML")
summary(b)
plot(b,pages=1)
## And once more, but using the null space penalization
b<-gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+
s(x3,bs="cr")+s(x4,bs="cr")+s(x5,bs="cr"),
method="REML",select=TRUE)
summary(b)
plot(b,pages=1)
detach(dat);rm(dat)
```
r None
`blas.thread.test` BLAS thread safety
--------------------------------------
### Description
Most BLAS implementations are thread safe, but some versions of OpenBLAS, for example, are not. This routine is a diagnostic helper function, which you will never need if you don't set `nthreads>1`, and even then are unlikely to need.
### Usage
```
blas.thread.test(n=1000,nt=4)
```
### Arguments
| | |
| --- | --- |
| `n` | Number of iterations to run of parallel BLAS calling code. |
| `nt` | Number of parallel threads to use |
### Details
While single threaded OpenBLAS 0.2.20 was thread safe, versions 0.3.0-0.3.6 are not, and from version 0.3.7 thread safety of the single threaded OpenBLAS requires making it with the option `USE_LOCKING=1`. The reference BLAS is thread safe, as are MKL and ATLAS. This routine repeatedly calls the BLAS from multi-threaded code and is sufficient to detect the problem in single threaded OpenBLAS 0.3.x.
A multi-threaded BLAS is often no faster than a single-threaded BLAS, while judicious use of threading in the code calling the BLAS can still deliver a modest speed improvement. For this reason it is often better to use a single threaded BLAS and the `nthreads` options to `<bam>` or `<gam>`. For `bam(...,discrete=TRUE)` using several threads can be a substantial benefit, especially with the reference BLAS.
The MKL BLAS is multithreaded by default. Under Linux, setting the environment variable `MKL_NUM_THREADS=1` before starting R gives single threaded operation.
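A minimal usage sketch (a crash or error during the call indicates a thread-unsafe BLAS; a clean return suggests, but cannot prove, safety):

```r
library(mgcv)
## only worth running if you intend to set nthreads > 1 in bam/gam...
blas.thread.test(n = 500, nt = 2)
```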
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
r None
`gamObject` Fitted gam object
------------------------------
### Description
A fitted GAM object returned by function `gam` and of class `"gam"` inheriting from classes `"glm"` and `"lm"`. Method functions `anova`, `logLik`, `influence`, `plot`, `predict`, `print`, `residuals` and `summary` exist for this class.
All compulsory elements of `"glm"` and `"lm"` objects are present, but the fitting method for a GAM is different to a linear model or GLM, so that the elements relating to the QR decomposition of the model matrix are absent.
### Value
A `gam` object has the following elements:
| | |
| --- | --- |
| `aic` | AIC of the fitted model: bear in mind that the degrees of freedom used to calculate this are the effective degrees of freedom of the model, and the likelihood is evaluated at the maximum of the penalized likelihood in most cases, not at the MLE. |
| `assign` | Array whose elements indicate which model term (listed in `pterms`) each parameter relates to: applies only to non-smooth terms. |
| `boundary` | did parameters end up at boundary of parameter space? |
| `call` | the matched call (allows `update` to be used with `gam` objects, for example). |
| `cmX` | column means of the model matrix (with elements corresponding to smooths set to zero ) — useful for componentwise CI calculation. |
| `coefficients` | the coefficients of the fitted model. Parametric coefficients are first, followed by coefficients for each spline term in turn. |
| `control` | the `gam` control list used in the fit. |
| `converged` | indicates whether or not the iterative fitting method converged. |
| `data` | the original supplied data argument (for class `"glm"` compatibility). Only included if `<gam>` `control` argument element `keepData` is set to `TRUE` (default is `FALSE`). |
| `db.drho` | matrix of first derivatives of model coefficients w.r.t. log smoothing parameters. |
| `deviance` | model deviance (not penalized deviance). |
| `df.null` | null degrees of freedom. |
| `df.residual` | effective residual degrees of freedom of the model. |
| `edf` | estimated degrees of freedom for each model parameter. Penalization means that many of these are less than 1. |
| `edf1` | similar, but using alternative estimate of EDF. Useful for testing. |
| `edf2` | if estimation is by ML or REML then an edf that accounts for smoothing parameter uncertainty can be computed, this is it. `edf1` is a heuristic upper bound for `edf2`. |
| `family` | family object specifying distribution and link used. |
| `fitted.values` | fitted model predictions of expected value for each datum. |
| `formula` | the model formula. |
| `full.sp` | full array of smoothing parameters multiplying penalties (excluding any contribution from `min.sp` argument to `gam`). May be larger than `sp` if some terms share smoothing parameters, and/or some smoothing parameter values were supplied in the `sp` argument of `<gam>`. |
| `F` | Degrees of freedom matrix. This may be removed at some point, and should probably not be used. |
| `gcv.ubre` | The minimized smoothing parameter selection score: GCV, UBRE(AIC), GACV, negative log marginal likelihood or negative log restricted likelihood. |
| `hat` | array of elements from the leading diagonal of the ‘hat’ (or ‘influence’) matrix. Same length as response data vector. |
| `iter` | number of iterations of P-IRLS taken to get convergence. |
| `linear.predictors` | fitted model prediction of link function of expected value for each datum. |
| `method` | One of `"GCV"` or `"UBRE"`, `"REML"`, `"P-REML"`, `"ML"`, `"P-ML"`, `"PQL"`, `"lme.ML"` or `"lme.REML"`, depending on the fitting criterion used. |
| `mgcv.conv` | A list of convergence diagnostics relating to the `"magic"` parts of smoothing parameter estimation - this will not be very meaningful for pure `"outer"` estimation of smoothing parameters. The items are: `full.rank`, the apparent rank of the problem given the model matrix and constraints; `rank`, the numerical rank of the problem; `fully.converged`, `TRUE` if multiple GCV/UBRE converged by meeting convergence criteria and `FALSE` if the method stopped with a steepest descent step failure; `hess.pos.def`, was the Hessian of the GCV/UBRE score positive definite at smoothing parameter estimation convergence?; `iter`, how many iterations were required to find the smoothing parameters; `score.calls`, how many times the GCV/UBRE score had to be evaluated; `rms.grad`, root mean square of the gradient of the GCV/UBRE score at convergence. |
| `min.edf` | Minimum possible degrees of freedom for whole model. |
| `model` | model frame containing all variables needed in original model fit. |
| `na.action` | The `[na.action](../../stats/html/na.action)` used in fitting. |
| `nsdf` | number of parametric, non-smooth, model terms including the intercept. |
| `null.deviance` | deviance for single parameter model. |
| `offset` | model offset. |
| `optimizer` | `optimizer` argument to `<gam>`, or `"magic"` if it's a pure additive model. |
| `outer.info` | If ‘outer’ iteration has been used to fit the model (see `<gam>` argument `optimizer`) then this is present and contains whatever was returned by the optimization routine used (currently `[nlm](../../stats/html/nlm)` or `[optim](../../stats/html/optim)`). |
| `paraPen` | If the `paraPen` argument to `<gam>` was used then this provides information on the parametric penalties. `NULL` otherwise. |
| `pred.formula` | one sided formula containing variables needed for prediction, used by `predict.gam` |
| `prior.weights` | prior weights on observations. |
| `pterms` | `terms` object for strictly parametric part of model. |
| `R` | Factor R from QR decomposition of weighted model matrix, unpivoted to be in same column order as model matrix (so need not be upper triangular). |
| `rank` | apparent rank of fitted model. |
| `reml.scale` | The scale (RE)ML scale parameter estimate, if (P-)(RE)ML used for smoothness estimation. |
| `residuals` | the working residuals for the fitted model. |
| `rV` | If present, `rV%*%t(rV)*sig2` gives the estimated Bayesian covariance matrix. |
| `scale` | when present, the scale (as `sig2`) |
| `scale.estimated` | `TRUE` if the scale parameter was estimated, `FALSE` otherwise. |
| `sig2` | estimated or supplied variance/scale parameter. |
| `smooth` | list of smooth objects, containing the basis information for each term in the model formula in the order in which they appear. These smooth objects are what gets returned by the `<smooth.construct>` objects. |
| `sp` | estimated smoothing parameters for the model. These are the underlying smoothing parameters, subject to optimization. For the full set of smoothing parameters multiplying the penalties see `full.sp`. Divide the scale parameter by the smoothing parameters to get variance components, but note that this is not valid for smooths that have used rescaling to improve conditioning. |
| `terms` | `terms` object of `model` model frame. |
| `var.summary` | A named list of summary information on the predictor variables. If a parametric variable is a matrix, then the summary is a one row matrix, containing the observed data value closest to the column median, for each matrix column. If the variable is a factor then the summary is the modal factor level, returned as a factor, with levels corresponding to those of the data. For numerics and matrix arguments of smooths, the summary is the mean, nearest observed value to median and maximum, as a numeric vector. Used by `<vis.gam>`, in particular. |
| `Ve` | frequentist estimated covariance matrix for the parameter estimators. Particularly useful for testing whether terms are zero. Not so useful for CI's as smooths are usually biased. |
| `Vp` | estimated covariance matrix for the parameters. This is a Bayesian posterior covariance matrix that results from adopting a particular Bayesian model of the smoothing process. Particularly useful for creating credible/confidence intervals. |
| `Vc` | Under ML or REML smoothing parameter estimation it is possible to correct the covariance matrix `Vp` for smoothing parameter uncertainty. This is the corrected version. |
| `weights` | final weights used in IRLS iteration. |
| `y` | response data. |
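The listed elements are extracted from a fitted object in the usual way; a brief sketch:

```r
library(mgcv)
set.seed(4)
dat <- gamSim(1, n = 200, scale = 2)
b <- gam(y ~ s(x0) + s(x1), data = dat, method = "REML")
b$sp                  ## estimated smoothing parameters
sum(b$edf)            ## total effective degrees of freedom
head(b$fitted.values) ## fitted values for the first few data
Vp <- b$Vp            ## Bayesian posterior covariance of the coefficients
dim(Vp)
```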
### WARNINGS
This model object is different to that described in Chambers and Hastie (1993) in order to allow smoothing parameter estimation etc.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
A Key Reference on this implementation:
Wood, S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman & Hall/ CRC, Boca Raton, Florida
Key Reference on GAMs generally:
Hastie (1993) in Chambers and Hastie (1993) Statistical Models in S. Chapman and Hall.
Hastie and Tibshirani (1990) Generalized Additive Models. Chapman and Hall.
### See Also
`<gam>`
| programming_docs |
r None
`spasm.construct` Experimental sparse smoothers
------------------------------------------------
### Description
These are experimental sparse smoothing functions, and should be left well alone!
### Usage
```
spasm.construct(object,data)
spasm.sp(object,sp,w=rep(1,object$nobs),get.trH=TRUE,block=0,centre=FALSE)
spasm.smooth(object,X,residual=FALSE,block=0)
```
### Arguments
| | |
| --- | --- |
| `object` | sparse smooth object |
| `data` | data frame |
| `sp` | smoothing parameter value |
| `w` | optional weights |
| `get.trH` | Should (estimated) trace of sparse smoother matrix be returned |
| `block` | index of block, 0 for all blocks |
| `centre` | should sparse smooth be centred? |
| `X` | what to smooth |
| `residual` | apply residual operation? |
### WARNING
It is not recommended to use these yet.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
r None
`bam` Generalized additive models for very large datasets
----------------------------------------------------------
### Description
Fits a generalized additive model (GAM) to a very large data set, the term ‘GAM’ being taken to include any quadratically penalized GLM (the extended families listed in `<family.mgcv>` can also be used). The degree of smoothness of model terms is estimated as part of fitting. In use the function is much like `<gam>`, except that the numerical methods are designed for datasets containing upwards of several tens of thousands of data (see Wood, Goude and Shaw, 2015). The advantage of `bam` is much lower memory footprint than `<gam>`, but it can also be much faster, for large datasets. `bam` can also compute on a cluster set up by the [parallel](../../parallel/html/parallel-package) package.
An alternative fitting approach (Wood et al. 2017, Li and Wood, 2019) is provided by the `discrete==TRUE` method. In this case a method based on discretization of covariate values and C code level parallelization (controlled by the `nthreads` argument instead of the `cluster` argument) is used. This extends both the data set and model size that are practical.
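As an illustrative sketch of the two fitting approaches (the dataset is kept small so the example runs quickly; in real use `bam` is aimed at much larger `n`):

```r
library(mgcv)
set.seed(5)
dat <- gamSim(1, n = 20000, scale = 2)
## standard bam fit, fast REML smoothness selection...
b1 <- bam(y ~ s(x0) + s(x1) + s(x2), data = dat)
## covariate discretization plus 2 threads at C code level...
b2 <- bam(y ~ s(x0) + s(x1) + s(x2), data = dat, discrete = TRUE, nthreads = 2)
```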
### Usage
```
bam(formula,family=gaussian(),data=list(),weights=NULL,subset=NULL,
na.action=na.omit, offset=NULL,method="fREML",control=list(),
select=FALSE,scale=0,gamma=1,knots=NULL,sp=NULL,min.sp=NULL,
paraPen=NULL,chunk.size=10000,rho=0,AR.start=NULL,discrete=FALSE,
cluster=NULL,nthreads=1,gc.level=1,use.chol=FALSE,samfrac=1,
coef=NULL,drop.unused.levels=TRUE,G=NULL,fit=TRUE,drop.intercept=NULL,...)
```
### Arguments
| | |
| --- | --- |
| `formula` | A GAM formula (see `<formula.gam>` and also `<gam.models>`). This is exactly like the formula for a GLM except that smooth terms, `s` and `te` can be added to the right hand side to specify that the linear predictor depends on smooth functions of predictors (or linear functionals of these). |
| `family` | This is a family object specifying the distribution and link to use in fitting etc. See `[glm](../../stats/html/glm)` and `[family](../../stats/html/family)` for more details. The extended families listed in `<family.mgcv>` can also be used. |
| `data` | A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from `environment(formula)`: typically the environment from which `gam` is called. |
| `weights` | prior weights on the contribution of the data to the log likelihood. Note that a weight of 2, for example, is equivalent to having made exactly the same observation twice. If you want to reweight the contributions of each datum without changing the overall magnitude of the log likelihood, then you should normalize the weights (e.g. `weights <- weights/mean(weights)`). |
| `subset` | an optional vector specifying a subset of observations to be used in the fitting process. |
| `na.action` | a function which indicates what should happen when the data contain ‘NA’s. The default is set by the ‘na.action’ setting of ‘options’, and is ‘na.fail’ if that is unset. The “factory-fresh” default is ‘na.omit’. |
| `offset` | Can be used to supply a model offset for use in fitting. Note that this offset will always be completely ignored when predicting, unlike an offset included in `formula` (this used to conform to the behaviour of `lm` and `glm`). |
| `method` | The smoothing parameter estimation method. `"GCV.Cp"` to use GCV for unknown scale parameter and Mallows' Cp/UBRE/AIC for known scale. `"GACV.Cp"` is equivalent, but using GACV in place of GCV. `"REML"` for REML estimation, including of unknown scale, `"P-REML"` for REML estimation, but using a Pearson estimate of the scale. `"ML"` and `"P-ML"` are similar, but using maximum likelihood in place of REML. Default `"fREML"` uses fast REML computation. |
| `control` | A list of fit control parameters to replace defaults returned by `<gam.control>`. Any control parameters not supplied stay at their default values. |
| `select` | Should selection penalties be added to the smooth effects, so that they can in principle be penalized out of the model? See `gamma` to increase penalization. Has the side effect that smooths no longer have a fixed effect component (improper prior from a Bayesian perspective) allowing REML comparison of models with the same fixed effect structure. |
| `scale` | If this is positive then it is taken as the known scale parameter. Negative signals that the scale parameter is unknown. 0 signals that the scale parameter is 1 for Poisson and binomial and unknown otherwise. Note that (RE)ML methods can only work with scale parameter 1 for the Poisson and binomial cases. |
| `gamma` | Increase above 1 to force smoother fits. `gamma` is used to multiply the effective degrees of freedom in the GCV/UBRE/AIC score (so `log(n)/2` is BIC like). `n/gamma` can be viewed as an effective sample size, which allows it to play a similar role for RE/ML smoothing parameter estimation. |
| `knots` | this is an optional list containing user specified knot values to be used for basis construction. For most bases the user simply supplies the knots to be used, which must match up with the `k` value supplied (note that the number of knots is not always just `k`). See `[tprs](smooth.construct.tp.smooth.spec)` for what happens in the `"tp"/"ts"` case. Different terms can use different numbers of knots, unless they share a covariate. |
| `sp` | A vector of smoothing parameters can be provided here. Smoothing parameters must be supplied in the order that the smooth terms appear in the model formula. Negative elements indicate that the parameter should be estimated, and hence a mixture of fixed and estimated parameters is possible. If smooths share smoothing parameters then `length(sp)` must correspond to the number of underlying smoothing parameters. |
| `min.sp` | Lower bounds can be supplied for the smoothing parameters. Note that if this option is used then the smoothing parameters `full.sp`, in the returned object, will need to be added to what is supplied here to get the smoothing parameters actually multiplying the penalties. `length(min.sp)` should always be the same as the total number of penalties (so it may be longer than `sp`, if smooths share smoothing parameters). |
| `paraPen` | optional list specifying any penalties to be applied to parametric model terms. `<gam.models>` explains more. |
| `chunk.size` | The model matrix is created in chunks of this size, rather than ever being formed whole. Reset to `4*p` if `chunk.size < 4*p` where `p` is the number of coefficients. |
| `rho` | An AR1 error model can be used for the residuals (based on dataframe order), of Gaussian-identity link models. This is the AR1 correlation parameter. Standardized residuals (approximately uncorrelated under correct model) returned in `std.rsd` if non zero. Also usable with other models when `discrete=TRUE`, in which case the AR model is applied to the working residuals and corresponds to a GEE approximation. |
| `AR.start` | logical variable of same length as data, `TRUE` at the first observation of an independent section of AR1 correlation. The very first observation in the data frame does not need this. If `NULL` then there are no breaks in AR1 correlation. |
| `discrete` | with `method="fREML"` it is possible to discretize covariates for storage and efficiency reasons. If `discrete` is `TRUE`, a number or a vector of numbers for each smoother term, then discretization happens. If numbers are supplied they give the number of discretization bins. |
| `cluster` | `bam` can compute the computationally dominant QR decomposition in parallel using [parLapply](../../parallel/html/clusterapply) from the `parallel` package, if it is supplied with a cluster on which to do this (a cluster here can be some cores of a single machine). See details and example code. |
| `nthreads` | Number of threads to use for non-cluster computation (e.g. combining results from cluster nodes). If `NA` set to `max(1,length(cluster))`. See details. |
| `gc.level` | to keep the memory footprint down, it helps to call the garbage collector often, but this takes a substantial amount of time. Setting this to zero means that garbage collection only happens when R decides it should. Setting to 2 gives frequent garbage collection. 1 is in between. |
| `use.chol` | By default `bam` uses a very stable QR update approach to obtaining the QR decomposition of the model matrix. For well conditioned models an alternative accumulates the crossproduct of the model matrix and then finds its Choleski decomposition, at the end. This is somewhat more efficient, computationally. |
| `samfrac` | For very large sample size Generalized additive models the number of iterations needed for the model fit can be reduced by first fitting a model to a random sample of the data, and using the results to supply starting values. This initial fit is run with sloppy convergence tolerances, so is typically very low cost. `samfrac` is the sampling fraction to use. 0.1 is often reasonable. |
| `coef` | initial values for model coefficients |
| `drop.unused.levels` | by default unused levels are dropped from factors before fitting. For some smooths involving factor variables you might want to turn this off. Only do so if you know what you are doing. |
| `G` | if not `NULL` then this should be the object returned by a previous call to `bam` with `fit=FALSE`. Causes all other arguments to be ignored except `sp`, `chunk.size`, `gamma`,`nthreads`, `cluster`, `rho`, `gc.level`, `samfrac`, `use.chol`, `method` and `scale` (if >0). |
| `fit` | if `FALSE` then the model is set up for fitting but not estimated, and an object is returned, suitable for passing as the `G` argument to `bam`. |
| `drop.intercept` | Set to `TRUE` to force the model to really not have a constant in the parametric model part, even with factor variables present. |
| `...` | further arguments for passing on e.g. to `gam.fit` (such as `mustart`). |
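For instance, the `rho` and `AR.start` arguments can be combined as in the following minimal sketch (the subject structure and the `rho` value are illustrative assumptions here, since `bam` does not estimate `rho` itself):

```r
library(mgcv)
## sketch: Gaussian model with AR1 residuals within each of 10 blocks
set.seed(4)
n <- 500; x <- runif(n)
e <- as.numeric(arima.sim(list(ar = 0.6), n = n)) ## illustrative AR1 noise
y <- sin(2 * pi * x) + e
## TRUE at the first observation of each independent block of 50
AR.start <- rep(c(TRUE, rep(FALSE, 49)), 10)
b <- bam(y ~ s(x), data = data.frame(y = y, x = x),
         rho = 0.6, AR.start = AR.start)
## standardized (approximately uncorrelated) residuals are in b$std.rsd
```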
### Details
When `discrete=FALSE`, `bam` operates by first setting up the basis characteristics for the smooths, using a representative subsample of the data. Then the model matrix is constructed in blocks using `<predict.gam>`. For each block the factor R, from the QR decomposition of the whole model matrix, is updated, along with Q'y and the sum of squares of y. At the end of block processing, fitting takes place, without the need to ever form the whole model matrix.
In the generalized case, the same trick is used with the weighted model matrix and weighted pseudodata, at each step of the PIRLS. Smoothness selection is performed on the working model at each stage (performance oriented iteration), to maintain the small memory footprint. This is trivial to justify in the case of GCV or Cp/UBRE/AIC based model selection, and for REML/ML is justified via the asymptotic multivariate normality of Q'z where z is the IRLS pseudodata.
For full method details see Wood, Goude and Shaw (2015).
Note that performance oriented iteration (POI) is not as stable as the default nested iteration used with `<gam>`, but that for very large, information rich, datasets this is unlikely to matter much.
Note also that it is possible to spend most of the computational time on basis evaluation, if an expensive basis is used. In practice this means that the default `"tp"` basis should be avoided: almost any other basis (e.g. `"cr"` or `"ps"`) can be used in the 1D case, and tensor product smooths (`te`) are typically much less costly in the multi-dimensional case.
If `cluster` is provided as a cluster set up using `[makeCluster](../../parallel/html/makecluster)` (or `[makeForkCluster](../../parallel/html/makecluster)`) from the `parallel` package, then the rate limiting QR decomposition of the model matrix is performed in parallel using this cluster. Note that the speed ups are often not that great. On a multi-core machine it is usually best to set the cluster size to the number of physical cores, which is often less than what is reported by `[detectCores](../../parallel/html/detectcores)`. Using more than the number of physical cores can result in no speed up at all (or even a slow down). Note that a highly parallel BLAS may negate all advantage from using a cluster of cores. Computing in parallel of course requires more memory than computing in series. See examples.
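A minimal sketch of cluster use follows (a cluster size of 2 is assumed here; in practice match it to the number of physical cores):

```r
library(mgcv)
library(parallel)
set.seed(2)
dat <- gamSim(1, n = 20000, dist = "normal", scale = 2)
cl <- makeCluster(2) ## assumed: 2 physical cores available
b <- bam(y ~ s(x0, bs = "cr") + s(x1, bs = "cr") + s(x2, bs = "cr"),
         data = dat, cluster = cl) ## QR decomposition done on the cluster
stopCluster(cl) ## always release the cluster when done
```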
When `discrete=TRUE` the covariate data are first discretized. Discretization takes place on a smooth by smooth basis, or in the case of tensor product smooths (or any smooth that can be represented as such, such as random effects), separately for each marginal smooth. The required spline bases are then evaluated at the discrete values, and stored, along with index vectors indicating which original observation they relate to. Fitting is by a version of performance oriented iteration/PQL using REML smoothing parameter selection on each iterative working model (as for the default method). The iteration is based on the derivatives of the REML score, without computing the score itself, allowing the expensive computations to be reduced to one parallel block Cholesky decomposition per iteration (plus two basic operations of equal cost, but easily parallelized). Unlike standard POI/PQL, only one step of the smoothing parameter update for the working model is taken at each step (rather than iterating to the optimal set of smoothing parameters for each working model). At each step a weighted model matrix crossproduct of the model matrix is required - this is efficiently computed from the pre-computed basis functions evaluated at the discretized covariate values. Efficient computation with tensor product terms means that some terms within a tensor product may be re-ordered for maximum efficiency. See Wood et al (2017) and Li and Wood (2019) for full details.
When `discrete=TRUE` parallel computation is controlled using the `nthreads` argument. For this method no cluster computation is used, and the `parallel` package is not required. Note that actual speed up from parallelization depends on the BLAS installed and your hardware. With the (R default) reference BLAS using several threads can make a substantial difference, but with a single threaded tuned BLAS, such as openblas, the effect is less marked (since cache use is typically optimized for one thread, and is then sub optimal for several). A tuned BLAS is usually much faster than the reference BLAS, however many threads are used. If you have a multi-threaded BLAS installed then you should leave `nthreads` at 1, since calling a multi-threaded BLAS from multiple threads usually slows things down: the only exception to this is that you might choose to form discrete matrix cross products (the main cost in the fitting routine) in a multi-threaded way, but use single threaded code for other computations: this can be achieved by e.g. `nthreads=c(2,1)`, which would use 2 threads for discrete inner products, and 1 for most code calling BLAS. Note that the basic reason that multi-threaded performance is often disappointing is that most computers are heavily memory bandwidth limited, not flop rate limited. It is hard to get data to one core fast enough, let alone trying to get data simultaneously to several cores.
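The `nthreads=c(2,1)` pattern described above can be sketched as follows (thread counts and sample size here are illustrative assumptions):

```r
library(mgcv)
set.seed(7)
dat <- gamSim(1, n = 20000, dist = "poisson", scale = 0.1)
## 2 threads for discrete inner products, 1 for other BLAS-calling code
b <- bam(y ~ s(x0, bs = "cr") + s(x2, bs = "cr", k = 15),
         data = dat, family = poisson(),
         discrete = TRUE, nthreads = c(2, 1))
```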
`discrete=TRUE` will often produce identical results to the methods without discretization, since covariates often only take a modest number of discrete values anyway, so no approximation at all is involved in the discretization process. Even when some approximation is involved, the differences are often very small as the algorithms discretize marginally whenever possible. For example each margin of a tensor product smooth is discretized separately, rather than discretizing onto a grid of covariate values (for an equivalent isotropic smooth we would have to discretize onto a grid). The marginal approach allows quite fine scale discretization and hence very low approximation error. Note that when using the smooth `id` mechanism to link smoothing parameters, the discrete method cannot force the linked bases to be identical, so some differences from the non-discrete methods will be noticeable.
The extended families given in `<family.mgcv>` can also be used. The extra parameters of these are estimated by maximizing the penalized likelihood, rather than the restricted marginal likelihood as in `<gam>`. So estimates may differ slightly from those returned by `<gam>`. Estimation is accomplished by a Newton iteration to find the extra parameters (e.g. the theta parameter of the negative binomial or the degrees of freedom and scale of the scaled t) maximizing the log likelihood given the model coefficients at each iteration of the fitting procedure.
### Value
An object of class `"gam"` as described in `[gamObject](gamobject)`.
### WARNINGS
The routine may be slower than optimal if the default `"tp"` basis is used.
Unless `discrete=TRUE`, you must have more unique combinations of covariates than the model has total parameters. (The total parameter count is the sum of the basis dimensions plus the number of non-spline terms, less the number of spline terms.)
This routine is less stable than ‘gam’ for the same dataset.
With `discrete=TRUE`, `te` terms are efficiently computed, but `t2` are not.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., Goude, Y. & Shaw S. (2015) Generalized additive models for large datasets. Journal of the Royal Statistical Society, Series C 64(1): 139-155. <https://rss.onlinelibrary.wiley.com/doi/full/10.1111/rssc.12068>
Wood, S.N., Li, Z., Shaddick, G. & Augustin N.H. (2017) Generalized additive models for gigadata: modelling the UK black smoke network daily data. Journal of the American Statistical Association. 112(519):1199-1210 doi: [10.1080/01621459.2016.1195744](https://doi.org/10.1080/01621459.2016.1195744)
Li, Z & S.N. Wood (2019) Faster model matrix crossproducts for large generalized linear models with discretized covariates. Statistics and Computing. doi: [10.1007/s11222-019-09864-2](https://doi.org/10.1007/s11222-019-09864-2)
### See Also
`[mgcv.parallel](mgcv-parallel)`, `<mgcv-package>`, `[gamObject](gamobject)`, `<gam.models>`, `<smooth.terms>`, `<linear.functional.terms>`, `<s>`, `<te>` `<predict.gam>`, `<plot.gam>`, `<summary.gam>`, `<gam.side>`, `<gam.selection>`, `<gam.control>` `<gam.check>`, `<linear.functional.terms>` `<negbin>`, `<magic>`,`<vis.gam>`
### Examples
```
library(mgcv)
## See help("mgcv-parallel") for using bam in parallel
## Sample sizes are small for fast run times.
set.seed(3)
dat <- gamSim(1,n=25000,dist="normal",scale=20)
bs <- "cr";k <- 12
b <- bam(y ~ s(x0,bs=bs)+s(x1,bs=bs)+s(x2,bs=bs,k=k)+
s(x3,bs=bs),data=dat)
summary(b)
plot(b,pages=1,rug=FALSE) ## plot smooths, but not rug
plot(b,pages=1,rug=FALSE,seWithMean=TRUE) ## `with intercept' CIs
ba <- bam(y ~ s(x0,bs=bs,k=k)+s(x1,bs=bs,k=k)+s(x2,bs=bs,k=k)+
s(x3,bs=bs,k=k),data=dat,method="GCV.Cp") ## use GCV
summary(ba)
## A Poisson example...
k <- 15
dat <- gamSim(1,n=21000,dist="poisson",scale=.1)
system.time(b1 <- bam(y ~ s(x0,bs=bs)+s(x1,bs=bs)+s(x2,bs=bs,k=k),
data=dat,family=poisson()))
b1
## Similar using faster discrete method...
system.time(b2 <- bam(y ~ s(x0,bs=bs,k=k)+s(x1,bs=bs,k=k)+s(x2,bs=bs,k=k)+
s(x3,bs=bs,k=k),data=dat,family=poisson(),discrete=TRUE))
b2
```
| programming_docs |
r None
`full.score` GCV/UBRE score for use within nlm
-----------------------------------------------
### Description
Evaluates GCV/UBRE score for a GAM, given smoothing parameters. The routine calls `<gam.fit>` to fit the model, and is usually called by `[nlm](../../stats/html/nlm)` to optimize the smoothing parameters.
This is basically a service routine for `<gam>`, and is not usually called directly by users. It is only used in this context for GAMs fitted by outer iteration (see `<gam.outer>`) when the outer method is `"nlm.fd"` (see `<gam>` argument `optimizer`).
### Usage
```
full.score(sp,G,family,control,gamma,...)
```
### Arguments
| | |
| --- | --- |
| `sp` | The logs of the smoothing parameters |
| `G` | a list returned by `mgcv:::gam.setup` |
| `family` | The family object for the GAM. |
| `control` | a list returned by `<gam.control>` |
| `gamma` | the degrees of freedom inflation factor (usually 1). |
| `...` | other arguments, typically for passing on to `gam.fit`. |
### Value
The value of the GCV/UBRE score, with attribute `"full.gam.object"` which is the full object returned by `<gam.fit>`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
r None
`slanczos` Compute truncated eigen decomposition of a symmetric matrix
-----------------------------------------------------------------------
### Description
Uses Lanczos iteration to find the truncated eigen-decomposition of a symmetric matrix.
### Usage
```
slanczos(A,k=10,kl=-1,tol=.Machine$double.eps^.5,nt=1)
```
### Arguments
| | |
| --- | --- |
| `A` | A symmetric matrix. |
| `k` | Must be non-negative. If `kl` is negative, then the `k` largest magnitude eigenvalues are found, together with the corresponding eigenvectors. If `kl` is non-negative then the `k` highest eigenvalues are found together with their eigenvectors and the `kl` lowest eigenvalues with eigenvectors are also returned. |
| `kl` | If `kl` is non-negative then the `kl` lowest eigenvalues are returned together with their corresponding eigenvectors (in addition to the `k` highest eigenvalues + vectors). Negative `kl` signals that the `k` largest magnitude eigenvalues should be returned, with eigenvectors. |
| `tol` | tolerance to use for convergence testing of eigenvalues. Error in eigenvalues will be less than the magnitude of the dominant eigenvalue multiplied by `tol` (or the machine precision!). |
| `nt` | number of threads to use for leading order iterative multiplication of A by vector. May show no speed improvement on two processor machine. |
### Details
If `kl` is non-negative, returns the highest `k` and lowest `kl` eigenvalues, with their corresponding eigenvectors. If `kl` is negative, returns the largest magnitude `k` eigenvalues, with corresponding eigenvectors.
The routine implements Lanczos iteration with full re-orthogonalization as described in Demmel (1997). Lanczos iteration iteratively constructs a tridiagonal matrix, the eigenvalues of which converge to the eigenvalues of `A` as the iteration proceeds (most extreme first). Eigenvectors can also be computed. For small `k` and `kl` the approach is faster than computing the full symmetric eigendecomposition. The tridiagonal eigenproblems are handled using LAPACK.
The implementation is not optimal: in particular the inner tridiagonal problems could be handled more efficiently, and there would be some savings to be made by not always returning eigenvectors.
### Value
A list with elements `values` (array of eigenvalues); `vectors` (matrix with eigenvectors in its columns); `iter` (number of iterations required).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Demmel, J. (1997) Applied Numerical Linear Algebra. SIAM
### See Also
`[cyclic.p.spline](smooth.construct.ps.smooth.spec)`
### Examples
```
require(mgcv)
## create some x's and knots...
set.seed(1);
n <- 700;A <- matrix(runif(n*n),n,n);A <- A+t(A)
## compare timings of slanczos and eigen
system.time(er <- slanczos(A,10))
system.time(um <- eigen(A,symmetric=TRUE))
## confirm values are the same...
ind <- c(1:6,(n-3):n)
range(er$values-um$values[ind]);range(abs(er$vectors)-abs(um$vectors[,ind]))
```
r None
`initial.sp` Starting values for multiple smoothing parameter estimation
-------------------------------------------------------------------------
### Description
Finds initial smoothing parameter guesses for multiple smoothing parameter estimation. The idea is to find values such that the estimated degrees of freedom per penalized parameter should be well away from 0 and 1 for each penalized parameter, thus ensuring that the values are in a region of parameter space where the smoothing parameter estimation criterion is varying substantially with smoothing parameter value.
### Usage
```
initial.sp(X,S,off,expensive=FALSE,XX=FALSE)
```
### Arguments
| | |
| --- | --- |
| `X` | is the model matrix. |
| `S` | is a list of penalty matrices. `S[[i]]` is the ith penalty matrix, but note that it is not stored as a full matrix, but rather as the smallest square matrix including all the non-zero elements of the penalty matrix. Element 1,1 of `S[[i]]` occupies element `off[i]`, `off[i]` of the ith penalty matrix. Each `S[[i]]` must be positive semi-definite. |
| `off` | is an array indicating the first parameter in the parameter vector that is penalized by the penalty involving `S[[i]]`. |
| `expensive` | if `TRUE` then the overall amount of smoothing is adjusted so that the average degrees of freedom per penalized parameter is exactly 0.5: this is numerically costly. |
| `XX` | if `TRUE` then `X` contains *X'X*, rather than *X*. |
### Details
Basically uses a crude approximation to the estimated degrees of freedom per model coefficient, to try and find smoothing parameters which bound these e.d.f.'s away from 0 and 1.
Usually only called by `<magic>` and `<gam>`.
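For illustration only, the inputs can be produced with `smoothCon` (a sketch; `off = 1` assumes the single smooth occupies the leading columns of the model matrix):

```r
library(mgcv)
set.seed(1)
x <- seq(0, 1, length = 100)
## build a single smooth term's model matrix and penalty
sm <- smoothCon(s(x, k = 10), data = data.frame(x = x))[[1]]
X <- sm$X            ## model matrix
S <- list(sm$S[[1]]) ## penalty matrix (compact form)
off <- 1             ## penalty starts at the first coefficient
sp0 <- initial.sp(X, S, off) ## initial smoothing parameter guess
```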
### Value
An array of initial smoothing parameter estimates.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<magic>`, `<gam.outer>`, `<gam>`,
r None
`Predict.matrix.cr.smooth` Predict matrix method functions
-----------------------------------------------------------
### Description
The various built in smooth classes for use with `<gam>` have associated `[Predict.matrix](predict.matrix)` method functions to enable prediction from the fitted model.
### Usage
```
## S3 method for class 'cr.smooth'
Predict.matrix(object, data)
## S3 method for class 'cs.smooth'
Predict.matrix(object, data)
## S3 method for class 'cyclic.smooth'
Predict.matrix(object, data)
## S3 method for class 'pspline.smooth'
Predict.matrix(object, data)
## S3 method for class 'tensor.smooth'
Predict.matrix(object, data)
## S3 method for class 'tprs.smooth'
Predict.matrix(object, data)
## S3 method for class 'ts.smooth'
Predict.matrix(object, data)
## S3 method for class 't2.smooth'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth object, usually generated by a `<smooth.construct>` method having processed a smooth specification object generated by an `<s>` or `<te>` term in a `<gam>` formula. |
| `data` | A data frame containing the values of the (named) covariates at which the smooth term is to be evaluated. Exact requirements are as for `<smooth.construct>` and `smooth.construct2`. |
### Details
The Predict matrix function is not normally called directly, but is rather used internally by `<predict.gam>` etc. to predict from a fitted `<gam>` model. See `[Predict.matrix](predict.matrix)` for more details, or the specific `smooth.construct` pages for details on a particular smooth class.
### Value
A matrix mapping the coefficients for the smooth term to its values at the supplied data values.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### Examples
```
## see smooth.construct
```
r None
`ldTweedie` Log Tweedie density evaluation
-------------------------------------------
### Description
A function to evaluate the log of the Tweedie density for variance powers between 1 and 2, inclusive. Also evaluates first and second derivatives of log density w.r.t. its scale parameter, `phi`, and `p`, or w.r.t. `rho=log(phi)` and `theta` where `p = (a+b*exp(theta))/(1+exp(theta))`.
### Usage
```
ldTweedie(y,mu=y,p=1.5,phi=1,rho=NA,theta=NA,a=1.001,b=1.999,all.derivs=FALSE)
```
### Arguments
| | |
| --- | --- |
| `y` | values at which to evaluate density. |
| `mu` | corresponding means (either of same length as `y` or a single value). |
| `p` | the variance of `y` is proportional to its mean to the power `p`. `p` must be between 1 and 2. 1 is Poisson like (exactly Poisson if `phi=1`), 2 is gamma. |
| `phi` | The scale parameter. Variance of `y` is `phi*mu^p`. |
| `rho` | optional log scale parameter. Over-rides `phi` if `theta` also supplied. |
| `theta` | parameter such that `p = (a+b*exp(theta))/(1+exp(theta))`. Over-rides `p` if `rho` also supplied. |
| `a` | lower limit parameter (>1) used in definition of `p` from `theta`. |
| `b` | upper limit parameter (<2) used in definition of `p` from `theta`. |
| `all.derivs` | if `TRUE` then derivatives w.r.t. `mu` are also returned. Only available with `rho` and `phi` parameterization. |
### Details
A Tweedie random variable with 1<p<2 is a sum of `N` gamma random variables where `N` has a Poisson distribution. The p=1 case is a generalization of a Poisson distribution and is a discrete distribution supported on integer multiples of the scale parameter. For 1<p<2 the distribution is supported on the positive reals with a point mass at zero. p=2 is a gamma distribution. As p gets very close to 1 the continuous distribution begins to converge on the discretely supported limit at p=1.
`ldTweedie` is based on the series evaluation method of Dunn and Smyth (2005). Without the restriction on `p` the calculation of Tweedie densities is less straightforward. If you really need this case then the `tweedie` package is the place to start.
The `rho`, `theta` parameterization is useful for optimization of `p` and `phi`, in order to keep `p` bounded well away from 1 and 2, and `phi` positive. The derivatives near `p=1` tend to infinity.
Note that if `p` and `phi` (or `theta` and `rho`) both contain only a single unique value, then the underlying code is able to use buffering to avoid repeated calls to expensive log gamma, di-gamma and tri-gamma functions (`mu` can still be a vector of different values). This is much faster than is possible when these parameters are vectors with different values.
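With the default limits `a=1.001`, `b=1.999`, setting `theta=0` gives `p = (a+b)/2 = 1.5` exactly, so the two parameterizations can be checked against each other (a small sketch):

```r
library(mgcv)
y <- c(0, 1, 2, 5); mu <- rep(2, 4)
d1 <- ldTweedie(y, mu, p = 1.5, phi = 1)   ## p, phi parameterization
d2 <- ldTweedie(y, mu, rho = 0, theta = 0) ## rho = log(phi) = 0 => phi = 1
range(d1[, 1] - d2[, 1]) ## first columns (log densities) agree
```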
### Value
A matrix with 6 columns, or 10 if `all.derivs=TRUE`. The first is the log density of `y` (log probability if `p=1`). The second and third are the first and second derivatives of the log density w.r.t. `phi`. 4th and 5th columns are first and second derivative w.r.t. `p`, final column is second derivative w.r.t. `phi` and `p`.
If `rho` and `theta` were supplied then derivatives are w.r.t. these. In this case, and if `all.derivs=TRUE`, then the 7th column is the derivative w.r.t. `mu`, the 8th is the 2nd derivative w.r.t. `mu`, the 9th is the mixed derivative w.r.t. `theta` and `mu` and the 10th is the mixed derivative w.r.t. `rho` and `mu`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Dunn, P.K. and G.K. Smyth (2005) Series evaluation of Tweedie exponential dispersion model densities. Statistics and Computing 15:267-280
Tweedie, M. C. K. (1984). An index which distinguishes between some important exponential families. Statistics: Applications and New Directions. Proceedings of the Indian Statistical Institute Golden Jubilee International Conference (Eds. J. K. Ghosh and J. Roy), pp. 579-604. Calcutta: Indian Statistical Institute.
### Examples
```
library(mgcv)
## convergence to Poisson illustrated
## notice how p>1.1 is OK
y <- seq(1e-10,10,length=1000)
p <- c(1.0001,1.001,1.01,1.1,1.2,1.5,1.8,2)
phi <- .5
fy <- exp(ldTweedie(y,mu=2,p=p[1],phi=phi)[,1])
plot(y,fy,type="l",ylim=c(0,3),main="Tweedie density as p changes")
for (i in 2:length(p)) {
fy <- exp(ldTweedie(y,mu=2,p=p[i],phi=phi)[,1])
lines(y,fy,col=i)
}
```
r None
`multinom` GAM multinomial logistic regression
-----------------------------------------------
### Description
Family for use with `<gam>`, implementing regression for categorical response data. Categories must be coded 0 to K, where K is a positive integer. `<gam>` should be called with a list of K formulae, one for each category except category zero (extra formulae for shared terms may also be supplied: see `<formula.gam>`). The first formula also specifies the response variable.
### Usage
```
multinom(K=1)
```
### Arguments
| | |
| --- | --- |
| `K` | There are K+1 categories and K linear predictors. |
### Details
The model has K linear predictors, *h\_j*, each dependent on smooth functions of predictor variables, in the usual way. If response variable, y, contains the class labels 0,...,K then the likelihood for y>0 is *exp(h\_y)/(1 + sum\_j exp(h\_j) )*. If y=0 the likelihood is *1/(1 + sum\_j exp(h\_j) )*. In the two class case this is just a binary logistic regression model. The implementation uses the approach to GAMLSS models described in Wood, Pya and Saefken (2016).
The residuals returned for this model are simply the square root of -2 times the deviance for each observation, with a positive sign if the observed y is the most probable class for this observation, and a negative sign otherwise.
Use `predict` with `type="response"` to get the predicted probabilities in each category.
Note that the model is not completely invariant to category relabelling, even if all linear predictors have the same form. Realistically this model is unlikely to be suitable for problems with large numbers of categories. Missing categories are not supported.
### Value
An object of class `general.family`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`<ocat>`
### Examples
```
library(mgcv)
set.seed(6)
## simulate some data from a three class model
n <- 1000
f1 <- function(x) sin(3*pi*x)*exp(-x)
f2 <- function(x) x^3
f3 <- function(x) .5*exp(-x^2)-.2
f4 <- function(x) 1
x1 <- runif(n);x2 <- runif(n)
eta1 <- 2*(f1(x1) + f2(x2))-.5
eta2 <- 2*(f3(x1) + f4(x2))-1
p <- exp(cbind(0,eta1,eta2))
p <- p/rowSums(p) ## prob. of each category
cp <- t(apply(p,1,cumsum)) ## cumulative prob.
## simulate multinomial response with these probabilities
## see also ?rmultinom
y <- apply(cp,1,function(x) min(which(x>runif(1))))-1
## plot simulated data...
plot(x1,x2,col=y+3)
## now fit the model...
b <- gam(list(y~s(x1)+s(x2),~s(x1)+s(x2)),family=multinom(K=2))
plot(b,pages=1)
gam.check(b)
## now a simple classification plot...
expand.grid(x1=seq(0,1,length=40),x2=seq(0,1,length=40)) -> gr
pp <- predict(b,newdata=gr,type="response")
pc <- apply(pp,1,function(x) which(max(x)==x)[1])-1
plot(gr,col=pc+3,pch=19)
```
r None
`identifiability` Identifiability constraints
----------------------------------------------
### Description
Smooth terms are generally only identifiable up to an additive constant. In consequence sum-to-zero identifiability constraints are imposed on most smooth terms. The exceptions are terms with `by` variables which cause the smooth to be identifiable without constraint (that doesn't include factor `by` variables), and random effect terms. Alternatively smooths can be set up to pass through zero at a user specified point.
### Details
By default each smooth term is subject to the sum-to-zero constraint
*sum\_i f(x\_i) = 0.*
The constraint is imposed by reparameterization. The sum-to-zero constraint causes the term to be orthogonal to the intercept: alternative constraints lead to wider confidence bands for the constrained smooth terms.
No constraint is used for random effect terms, since the penalty (random effect covariance matrix) anyway ensures identifiability in this case. Also if a `by` variable means that the smooth is anyway identifiable, then no extra constraint is imposed. Constraints are imposed for factor `by` variables, so that the main effect of the factor must usually be explicitly added to the model (the example below is an exception).
Occasionally it is desirable to substitute the constraint that a particular smooth curve should pass through zero at a particular point: the `pc` argument to `<s>`, `<te>`, `[ti](te)` and `<t2>` allows this: if specified then such constraints are always applied.
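The reparameterization can be sketched in base R: if `C` is the 1-row constraint matrix `colSums(X)`, then a QR decomposition of `t(C)` yields a null-space basis `Z`, and the columns of `X %*% Z` sum to zero by construction (a minimal illustration with a hypothetical toy basis, not mgcv's internal code):

```r
## toy 'basis' matrix, including a column confounded with the intercept
X <- cbind(1, sin(1:20), (1:20)/20)
C <- matrix(colSums(X), 1)  # sum-to-zero constraint: C %*% beta = 0
## remaining columns of the complete Q factor span the null space of C
Z <- qr.Q(qr(t(C)), complete = TRUE)[, -1, drop = FALSE]
XZ <- X %*% Z               # constrained, reparameterized model matrix
max(abs(colSums(XZ)))       # effectively zero: the constraint is built in
```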
### Author(s)
Simon N. Wood ([email protected])
### Examples
```
## Example of three groups, each with a different smooth dependence on x
## but each starting at the same value...
require(mgcv)
set.seed(53)
n <- 100;x <- runif(3*n);z <- runif(3*n)
fac <- factor(rep(c("a","b","c"),each=100))
y <- c(sin(x[1:100]*4),exp(3*x[101:200])/10-.1,exp(-10*(x[201:300]-.5))/
(1+exp(-10*(x[201:300]-.5)))-0.9933071) + z*(1-z)*5 + rnorm(100)*.4
## 'pc' used to constrain smooths to 0 at x=0...
b <- gam(y~s(x,by=fac,pc=0)+s(z))
plot(b,pages=1)
```
`tensor.prod.model.matrix` Row Kronecker product/ tensor product smooth construction
-------------------------------------------------------------------------------------
### Description
Produce model matrices or penalty matrices for a tensor product smooth from the model matrices or penalty matrices for the marginal bases of the smooth (marginals and results can be sparse). The model matrix construction uses row Kronecker products.
### Usage
```
tensor.prod.model.matrix(X)
tensor.prod.penalties(S)
a%.%b
```
### Arguments
| | |
| --- | --- |
| `X` | a list of model matrices for the marginal bases of a smooth. Items can be class `"matrix"` or `"dgCMatrix"`, but not a mixture of the two. |
| `S` | a list of penalties for the marginal bases of a smooth. |
| `a` | a matrix with the same number of rows as `b`. |
| `b` | a matrix with the same number of rows as `a`. |
### Details
If `X[[1]]`, `X[[2]]` ... `X[[m]]` are the model matrices of the marginal bases of a tensor product smooth then the ith row of the model matrix for the whole tensor product smooth is given by `X[[1]][i,]%x%X[[2]][i,]%x% ... X[[m]][i,]`, where `%x%` is the Kronecker product. Of course the routine operates column-wise, not row-wise!
`A%.%B` is the operator form of this ‘row Kronecker product’.
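The column-wise construction mentioned above can be sketched in a few lines of base R (`row.kron` is a hypothetical helper name for illustration, not part of mgcv):

```r
row.kron <- function(X1, X2) {
  ## row i of the result is X1[i,] %x% X2[i,], built column-wise:
  ## each column of X1 scales all the columns of X2
  res <- matrix(0, nrow(X1), ncol(X1) * ncol(X2))
  for (j in 1:ncol(X1))
    res[, (j - 1) * ncol(X2) + 1:ncol(X2)] <- X1[, j] * X2
  res
}
X1 <- matrix(0:3, 2, 2); X2 <- matrix(c(5:8, 0, 0), 2, 3)
R <- row.kron(X1, X2)
## matches the row-by-row Kronecker definition...
all.equal(R[1, ], X1[1, ] %x% X2[1, ])
```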
If `S[[1]]`, `S[[2]]` ... `S[[m]]` are the penalty matrices for the marginal bases, and `I[[1]]`, `I[[2]]` ... `I[[m]]` are corresponding identity matrices, each of the same dimension as its corresponding penalty, then the tensor product smooth has m associate penalties of the form:
`S[[1]]%x%I[[2]]%x% ... I[[m]]`,
`I[[1]]%x%S[[2]]%x% ... I[[m]]`
...
`I[[1]]%x%I[[2]]%x% ... S[[m]]`.
Of course it's important that the model matrices and penalty matrices are presented in the same order when constructing tensor product smooths.
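The penalty construction above can be written directly with base R Kronecker products; a short sketch using two small marginal penalties, mirroring what `tensor.prod.penalties` automates:

```r
S1 <- matrix(c(2, 1, 1, 2), 2, 2)                 # 2 x 2 marginal penalty
S2 <- matrix(c(2, 1, 0, 1, 2, 1, 0, 1, 2), 3, 3)  # 3 x 3 marginal penalty
P1 <- S1 %x% diag(3)  # S[[1]] %x% I[[2]]
P2 <- diag(2) %x% S2  # I[[1]] %x% S[[2]]
dim(P1)  # 6 x 6: one penalty per margin, each acting on all 6 coefficients
```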
### Value
Either a single model matrix for a tensor product smooth (of the same class as the marginals), or a list of penalty terms for a tensor product smooth.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for Generalized Additive Mixed Models. Biometrics 62(4):1025-1036
### See Also
`<te>`, `<smooth.construct.tensor.smooth.spec>`
### Examples
```
require(mgcv)
## Dense row Kronecker product example...
X <- list(matrix(0:3,2,2),matrix(c(5:8,0,0),2,3))
tensor.prod.model.matrix(X)
X[[1]]%.%X[[2]]
## sparse equivalent...
Xs <- lapply(X,as,"dgCMatrix")
tensor.prod.model.matrix(Xs)
Xs[[1]]%.%Xs[[2]]
S <- list(matrix(c(2,1,1,2),2,2),matrix(c(2,1,0,1,2,1,0,1,2),3,3))
tensor.prod.penalties(S)
## Sparse equivalent...
Ss <- lapply(S,as,"dgCMatrix")
tensor.prod.penalties(Ss)
```
`gam.control` Setting GAM fitting defaults
-------------------------------------------
### Description
This is an internal function of package `mgcv` which allows control of the numerical options for fitting a GAM. Typically users will want to modify the defaults if model fitting fails to converge, or if warnings are generated which suggest a loss of numerical stability during fitting. To change the default choice of fitting method, see `<gam>` arguments `method` and `optimizer`.
### Usage
```
gam.control(nthreads=1,irls.reg=0.0,epsilon = 1e-07, maxit = 200,
mgcv.tol=1e-7,mgcv.half=15, trace = FALSE,
rank.tol=.Machine$double.eps^0.5,nlm=list(),
optim=list(),newton=list(),outerPIsteps=0,
idLinksBases=TRUE,scalePenalty=TRUE,efs.lspmax=15,
efs.tol=.1,keepData=FALSE,scale.est="fletcher",
edge.correct=FALSE)
```
### Arguments
| | |
| --- | --- |
| `nthreads` | Some parts of some smoothing parameter selection methods (e.g. REML) can use some parallelization in the C code if your R installation supports openMP, and `nthreads` is set to more than 1. Note that it is usually better to use the number of physical cores here, rather than the number of hyper-threading cores. |
| `irls.reg` | For most models this should be 0. The iteratively re-weighted least squares method by which GAMs are fitted can fail to converge in some circumstances. For example, data with many zeroes can cause problems in a model with a log link, because a mean of zero corresponds to an infinite range of linear predictor values. Such convergence problems are caused by a fundamental lack of identifiability, but do not show up as lack of identifiability in the penalized linear model problems that have to be solved at each stage of iteration. In such circumstances it is possible to apply a ridge regression penalty to the model to impose identifiability, and `irls.reg` is the size of the penalty. |
| `epsilon` | This is used for judging convergence of the GLM IRLS loop in `<gam.fit>` or `<gam.fit3>`. |
| `maxit` | Maximum number of IRLS iterations to perform. |
| `mgcv.tol` | The convergence tolerance parameter to use in GCV/UBRE optimization. |
| `mgcv.half` | If a step of the GCV/UBRE optimization method leads to a worse GCV/UBRE score, then the step length is halved. This is the number of halvings to try before giving up. |
| `trace` | Set this to `TRUE` to turn on diagnostic output. |
| `rank.tol` | The tolerance used to estimate the rank of the fitting problem. |
| `nlm` | list of control parameters to pass to `[nlm](../../stats/html/nlm)` if this is used for outer estimation of smoothing parameters (not default). See details. |
| `optim` | list of control parameters to pass to `[optim](../../stats/html/optim)` if this is used for outer estimation of smoothing parameters (not default). See details. |
| `newton` | list of control parameters to pass to default Newton optimizer used for outer estimation of log smoothing parameters. See details. |
| `outerPIsteps` | The number of performance iteration steps used to initialize outer iteration. |
| `idLinksBases` | If smooth terms have their smoothing parameters linked via the `id` mechanism (see `<s>`), should they also have the same bases. Set this to `FALSE` only if you are sure you know what you are doing (you should almost surely set `scalePenalty` to `FALSE` as well in this case). |
| `scalePenalty` | `<gamm>` is somewhat sensitive to the absolute scaling of the penalty matrices of a smooth relative to its model matrix. This option rescales the penalty matrices to accommodate this problem. Probably should be set to `FALSE` if you are linking smoothing parameters but have set `idLinksBases` to `FALSE`. |
| `efs.lspmax` | maximum log smoothing parameters to allow under extended Fellner Schall smoothing parameter optimization. |
| `efs.tol` | change in REML to count as negligible when testing for EFS convergence. If the step is small and the last 3 steps led to a REML change smaller than this, then stop. |
| `keepData` | Should a copy of the original `data` argument be kept in the `gam` object? Strict compatibility with class `glm` would keep it, but it wastes space to do so. |
| `scale.est` | How to estimate the scale parameter for exponential family models estimated by outer iteration. See `<gam.scale>`. |
| `edge.correct` | With RE/ML smoothing parameter selection in `gam` using the default Newton RE/ML optimizer, it is possible to improve inference at the ‘completely smooth’ edge of the smoothing parameter space, by decreasing smoothing parameters until there is a small increase in the negative RE/ML (e.g. 0.02). Set to `TRUE` or to a number representing the target increase to use. Only changes the corrected smoothing parameter matrix, `Vc`. |
### Details
Outer iteration using `newton` is controlled by the list `newton` with the following elements: `conv.tol` (default 1e-6) is the relative convergence tolerance; `maxNstep` is the maximum length allowed for an element of the Newton search direction (default 5); `maxSstep` is the maximum length allowed for an element of the steepest descent direction (only used if Newton fails - default 2); `maxHalf` is the maximum number of step halvings to permit before giving up (default 30).
If outer iteration using `[nlm](../../stats/html/nlm)` is used for fitting, then the control list `nlm` stores control arguments for calls to routine `[nlm](../../stats/html/nlm)`. The list has the following named elements: (i) `ndigit` is the number of significant digits in the GCV/UBRE score - by default this is worked out from `epsilon`; (ii) `gradtol` is the tolerance used to judge convergence of the gradient of the GCV/UBRE score to zero - by default set to `10*epsilon`; (iii) `stepmax` is the maximum allowable log smoothing parameter step - defaults to 2; (iv) `steptol` is the minimum allowable step length - defaults to 1e-4; (v) `iterlim` is the maximum number of optimization steps allowed - defaults to 200; (vi) `check.analyticals` indicates whether the built in exact derivative calculations should be checked numerically - defaults to `FALSE`. Any of these which are not supplied and named in the list are set to their default values.
Outer iteration using `[optim](../../stats/html/optim)` is controlled using list `optim`, which currently has one element: `factr` which takes default value 1e7.
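For instance, a control list can be built up and passed via the `control` argument of `gam`; a small sketch (assuming a standard mgcv installation; unsupplied elements of the `newton` list are defaulted later, during fitting):

```r
library(mgcv)
## allow more IRLS iterations and tighten the outer Newton tolerance
ctrl <- gam.control(maxit = 400, newton = list(conv.tol = 1e-8))
ctrl$maxit            # 400
ctrl$newton$conv.tol  # 1e-8
## would then be used as: gam(y ~ s(x), data = dat, control = ctrl)
```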
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Amer. Statist. Ass.99:673-686.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam>`, `<gam.fit>`, `[glm.control](../../stats/html/glm.control)`
`interpret.gam` Interpret a GAM formula
----------------------------------------
### Description
This is an internal function of package `mgcv`. It is a service routine for `gam` which splits off the strictly parametric part of the model formula, returning it as a formula, and interprets the smooth parts of the model formula.
Not normally called directly.
### Usage
```
interpret.gam(gf, extra.special = NULL)
```
### Arguments
| | |
| --- | --- |
| `gf` | A GAM formula as supplied to `<gam>` or `<gamm>`, or a list of such formulae, as supplied for some `<gam>` families. |
| `extra.special` | Name of any extra special in formula in addition to `s`, `te`, `ti` and `t2`. |
### Value
An object of class `split.gam.formula` with the following items:
| | |
| --- | --- |
| `pf` | A model formula for the strictly parametric part of the model. |
| `pfok` | TRUE if there is a `pf` formula. |
| `smooth.spec` | A list of class `xx.smooth.spec` objects where `xx` depends on the basis specified for the term. (These can be passed to smooth constructor method functions to actually set up penalties and bases.) |
| `full.formula` | An expanded version of the model formula in which the options are fully expanded, and the options do not depend on variables which might not be available later. |
| `fake.formula` | A formula suitable for use in evaluating a model frame. |
| `response` | Name of the response variable. |
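Although not normally user-called, the routine can be run directly on a formula to inspect the split (a minimal sketch, assuming the exported `interpret.gam`):

```r
library(mgcv)
sp <- interpret.gam(y ~ z + s(x0) + s(x1, bs = "cr"))
sp$response             # "y"
length(sp$smooth.spec)  # 2: one specification object per smooth term
sp$pf                   # strictly parametric part of the model
```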
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam>` `<gamm>`
`totalPenaltySpace` Obtaining (orthogonal) basis for null space and range of the penalty matrix
------------------------------------------------------------------------------------------------
### Description
INTERNAL function to obtain an (orthogonal) basis for the null space and range space of the total penalty, and to obtain the actual null space dimension. Penalty components are roughly rescaled to avoid any one of them dominating.
### Usage
```
totalPenaltySpace(S, H, off, p)
```
### Arguments
| | |
| --- | --- |
| `S` | a list of penalty matrices, in packed form. |
| `H` | the coefficient matrix of a user-supplied fixed quadratic penalty on the parameters of the GAM. |
| `off` | a vector where the i-th element is the offset for the i-th matrix. |
| `p` | total number of parameters. |
### Value
A list of matrix square roots such that `S[[i]]=B[[i]]%*%t(B[[i]])`.
### Author(s)
Simon N. Wood <[email protected]>.
`mgcv-parallel` Parallel computation in mgcv.
----------------------------------------------
### Description
`mgcv` can make some use of multiple cores or a cluster.
`<bam>` can use an openMP based parallelization approach alongside discretisation of covariates to achieve substantial speed ups. This is selected using the `discrete=TRUE` option to `bam`, with the number of threads controlled via the `nthreads` argument. This is the approach that scales best. See example below.
Alternatively, function `<bam>` can use the facilities provided in the [parallel](../../parallel/html/parallel-package) package. See examples below. Note that most multi-core machines are memory bandwidth limited, so parallel speed up tends to be rather variable.
Function `<gam>` can use parallel threads on a (shared memory) multi-core machine via `openMP` (where this is supported). To do this, set the desired number of threads by setting `nthreads` to the number of cores to use, in the `control` argument of `<gam>`. Note that, for the most part, only the dominant *O(np^2)* steps are parallelized (n is number of data, p number of parameters). For additive Gaussian models estimated by GCV, the speed up can be disappointing as these employ an *O(p^3)* SVD step that can also have substantial cost in practice.
`<magic>` can also use multiple cores, but the same comments apply as for the GCV Gaussian additive model.
If `control$nthreads` is set to more than the number of cores detected, then only the number of detected cores is used. Note that using virtual cores usually gives very little speed up, and can even slow computations slightly. For example, many Intel processors reporting 4 cores actually have 2 physical cores, each with 2 virtual cores, so using 2 threads gives a marked increase in speed, while using 4 threads makes little extra difference.
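The `parallel` package can report both counts, which helps when choosing `nthreads` (a sketch; what `detectCores` reports is platform dependent):

```r
library(parallel)
detectCores()                 # logical cores (includes hyper-threading)
detectCores(logical = FALSE)  # physical cores: usually the better nthreads choice
```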
Note that on Intel and similar processors the maximum performance is usually achieved by disabling Hyper-Threading in BIOS, and then setting the number of threads to the number of physical cores used. This prevents the operating system scheduler from sending 2 floating point intensive threads to the same physical core, where they have to share a floating point unit (and cache) and therefore slow each other down. The scheduler tends to do this under the manager-worker multi-threading approach used in mgcv, since the manager thread looks very busy up to the point at which the workers are set to work, and at the point of scheduling the scheduler has no way of knowing that the manager thread actually has nothing more to do until the workers are finished. If you are working on a many cored platform where you can not disable hyper-threading then it may be worth setting the number of threads to one less than the number of physical cores, to reduce the frequency of such scheduling problems.
mgcv's work splitting always makes the simple assumption that all your cores are equal, and you are not sharing them with other floating point intensive threads.
In addition to hyper-threading several features may lead to apparently poor scaling. The first is that many CPUs have a Turbo mode, whereby a few cores can be run at higher frequency, provided the overall power used by the CPU does not exceed design limits, however it is not possible for all cores on the CPU to run at this frequency. So as you add threads eventually the CPU frequency has to be reduced below the Turbo frequency, with the result that you don't get the expected speed up from adding cores. Secondly, most modern CPUs have their frequency set dynamically according to load. You may need to set the system power management policy to favour high performance in order to maximize the chance that all threads run at the speed you were hoping for (you can turn off dynamic power control in BIOS, but then you turn off the possibility of Turbo also).
Because the computational burden in `mgcv` is all in the linear algebra, parallel computation may provide reduced or no benefit with a tuned BLAS. This is particularly the case if you are using a multi-threaded BLAS, but a BLAS that is tuned to make efficient use of a particular cache size may also experience loss of performance if threads have to share the cache.
### Author(s)
Simon Wood <[email protected]>
### References
<https://hpc.llnl.gov/openmp-tutorial>
### Examples
```
## illustration of multi-threading with gam...
require(mgcv);set.seed(9)
dat <- gamSim(1,n=2000,dist="poisson",scale=.1)
k <- 12;bs <- "cr";ctrl <- list(nthreads=2)
system.time(b1<-gam(y~s(x0,bs=bs)+s(x1,bs=bs)+s(x2,bs=bs,k=k)
,family=poisson,data=dat,method="REML"))[3]
system.time(b2<-gam(y~s(x0,bs=bs)+s(x1,bs=bs)+s(x2,bs=bs,k=k),
family=poisson,data=dat,method="REML",control=ctrl))[3]
## Poisson example on a cluster with 'bam'.
## Note that there is some overhead in initializing the
## computation on the cluster, associated with loading
## the Matrix package on each node. Sample sizes are low
## here to keep example quick -- for such a small model
## little or no advantage is likely to be seen.
k <- 13;set.seed(9)
dat <- gamSim(1,n=6000,dist="poisson",scale=.1)
require(parallel)
nc <- 2 ## cluster size, set for example portability
if (detectCores()>1) { ## no point otherwise
cl <- makeCluster(nc)
## could also use makeForkCluster, but read warnings first!
} else cl <- NULL
system.time(b3 <- bam(y ~ s(x0,bs=bs,k=7)+s(x1,bs=bs,k=7)+s(x2,bs=bs,k=k)
,data=dat,family=poisson(),chunk.size=5000,cluster=cl))
fv <- predict(b3,cluster=cl) ## parallel prediction
if (!is.null(cl)) stopCluster(cl)
b3
## Alternative, better scaling example, using the discrete option with bam...
system.time(b4 <- bam(y ~ s(x0,bs=bs,k=7)+s(x1,bs=bs,k=7)+s(x2,bs=bs,k=k)
,data=dat,family=poisson(),discrete=TRUE,nthreads=2))
```
`dDeta` Obtaining derivative w.r.t. linear predictor
-----------------------------------------------------
### Description
INTERNAL function. Distribution families provide derivatives of the deviance and link w.r.t. `mu = inv_link(eta)`. This routine converts these to the required derivatives of the deviance w.r.t. eta, the linear predictor.
### Usage
```
dDeta(y, mu, wt, theta, fam, deriv = 0)
```
### Arguments
| | |
| --- | --- |
| `y` | vector of observations. |
| `mu` | if `eta` is the linear predictor, `mu = inv_link(eta)`. In a traditional GAM `mu=E(y)`. |
| `wt` | vector of weights. |
| `theta` | vector of family parameters that are not regression coefficients (e.g. scale parameters). |
| `fam` | the family object. |
| `deriv` | the order of derivative of the smoothing parameter score required. |
### Value
A list of derivatives.
### Author(s)
Simon N. Wood <[email protected]>.
`vis.gam` Visualization of GAM objects
---------------------------------------
### Description
Produces perspective or contour plot views of `gam` model predictions, fixing all but the values in `view` to the values supplied in `cond`.
### Usage
```
vis.gam(x,view=NULL,cond=list(),n.grid=30,too.far=0,col=NA,
color="heat",contour.col=NULL,se=-1,type="link",
plot.type="persp",zlim=NULL,nCol=50,...)
```
### Arguments
| | |
| --- | --- |
| `x` | a `gam` object, produced by `gam()` |
| `view` | an array containing the names of the two main effect terms to be displayed on the x and y dimensions of the plot. If omitted the first two suitable terms will be used. Note that variables coerced to factors in the model formula won't work as view variables, and `vis.gam` can not detect that this has happened when setting defaults. |
| `cond` | a named list of the values to use for the other predictor terms (not in `view`). Variables omitted from this list will have the closest observed value to the median for continuous variables, or the most commonly occurring level for factors. Parametric matrix variables have all the entries in each column set to the observed column entry closest to the column median. |
| `n.grid` | The number of grid nodes in each direction used for calculating the plotted surface. |
| `too.far` | plot grid nodes that are too far from the points defined by the variables given in `view` can be excluded from the plot. `too.far` determines what is too far. The grid is scaled into the unit square along with the `view` variables and then grid nodes more than `too.far` from the predictor variables are excluded. |
| `col` | The colours for the facets of the plot. If this is `NA` then if `se`>0 the facets are transparent, otherwise the colour scheme specified in `color` is used. If `col` is not `NA` then it is used as the facet colour. |
| `color` | the colour scheme to use for plots when `se`<=0. One of `"topo"`, `"heat"`, `"cm"`, `"terrain"`, `"gray"` or `"bw"`. Schemes `"gray"` and `"bw"` also modify the colors used when `se`>0. |
| `contour.col` | sets the colour of contours when using `plot.type="contour"`. Default scheme used if `NULL`. |
| `se` | if less than or equal to zero then only the predicted surface is plotted, but if greater than zero, then 3 surfaces are plotted, one at the predicted values minus `se` standard errors, one at the predicted values and one at the predicted values plus `se` standard errors. |
| `type` | `"link"` to plot on linear predictor scale and `"response"` to plot on the response scale. |
| `plot.type` | one of `"contour"` or `"persp"`. |
| `zlim` | a two item array giving the lower and upper limits for the z-axis scale. `NULL` to choose automatically. |
| `nCol` | The number of colors to use in color schemes. |
| `...` | other options to pass on to `[persp](../../graphics/html/persp)`, `[image](../../graphics/html/image)` or `[contour](../../graphics/html/contour)`. In particular `ticktype="detailed"` will add proper axes labelling to the plots. |
### Details
The x and y limits are determined by the ranges of the terms named in `view`. If `se`<=0 then a single (height colour coded, by default) surface is produced, otherwise three (by default see-through) meshes are produced at the predicted values and at +/- `se` standard errors. Parts of the x-y plane too far from the data can be excluded by setting `too.far`.
All options to the underlying graphics functions can be reset by passing them as extra arguments `...`: such supplied values will always over-ride the default values used by `vis.gam`.
### Value
Simply produces a plot.
### WARNINGS
The routine can not detect that a variable has been coerced to factor within a model formula, and will therefore fail if such a variable is used as a `view` variable. When setting default `view` variables it can not detect this situation either, which can cause failures if the coerced variables are the first, otherwise suitable, variables encountered.
### Author(s)
Simon Wood [[email protected]](mailto:[email protected])
Based on an original idea and design by Mike Lonergan.
### See Also
`[persp](../../graphics/html/persp)` and `<gam>`.
### Examples
```
library(mgcv)
set.seed(0)
n<-200;sig2<-4
x0 <- runif(n, 0, 1);x1 <- runif(n, 0, 1)
x2 <- runif(n, 0, 1)
y<-x0^2+x1*x2 +runif(n,-0.3,0.3)
g<-gam(y~s(x0,x1,x2))
old.par<-par(mfrow=c(2,2))
# display the prediction surface in x0, x1 ....
vis.gam(g,ticktype="detailed",color="heat",theta=-35)
vis.gam(g,se=2,theta=-35) # with twice standard error surfaces
vis.gam(g, view=c("x1","x2"),cond=list(x0=0.75)) # different view
vis.gam(g, view=c("x1","x2"),cond=list(x0=.75),theta=210,phi=40,
too.far=.07)
# ..... areas where there is no data are not plotted
# contour examples....
vis.gam(g, view=c("x1","x2"),plot.type="contour",color="heat")
vis.gam(g, view=c("x1","x2"),plot.type="contour",color="terrain")
vis.gam(g, view=c("x1","x2"),plot.type="contour",color="topo")
vis.gam(g, view=c("x1","x2"),plot.type="contour",color="cm")
par(old.par)
# Examples with factor and "by" variables
fac<-rep(1:4,20)
x<-runif(80)
y<-fac+2*x^2+rnorm(80)*0.1
fac<-factor(fac)
b<-gam(y~fac+s(x))
vis.gam(b,theta=-35,color="heat") # factor example
z<-rnorm(80)*0.4
y<-as.numeric(fac)+3*x^2*z+rnorm(80)*0.1
b<-gam(y~fac+s(x,by=z))
vis.gam(b,theta=-35,color="heat",cond=list(z=1)) # by variable example
vis.gam(b,view=c("z","x"),theta= -135) # plot against by variable
```
`gam.convergence` GAM convergence and performance issues
---------------------------------------------------------
### Description
When fitting GAMs there is a tradeoff between speed of fitting and probability of fit convergence. The fitting methods used by `<gam>` opt for certainty of convergence over speed of fit. `<bam>` opts for speed.
`<gam>` uses a nested iteration method (see `<gam.outer>`), in which each trial set of smoothing parameters proposed by an outer Newton algorithm requires an inner Newton algorithm (penalized iteratively re-weighted least squares, PIRLS) to find the corresponding best fit model coefficients. Implicit differentiation is used to find the derivatives of the coefficients with respect to log smoothing parameters, so that the derivatives of the smoothness selection criterion can be obtained, as required by the outer iteration. This approach is less expensive than it at first appears, since excellent starting values for the inner iteration are available as soon as the smoothing parameters start to converge. See Wood (2011) and Wood, Pya and Saefken (2016).
`<bam>` uses an alternative approach similar to ‘performance iteration’ or ‘PQL’. A single PIRLS iteration is run to find the model coefficients. At each step this requires the estimation of a working penalized linear model. Smoothing parameter selection is applied directly to this working model at each step (as if it were a Gaussian additive model). This approach is more straightforward to code and in principle less costly than the nested approach. However it is not guaranteed to converge, since the smoothness selection criterion is changing at each iteration. It is sometimes possible for the algorithm to cycle around a small set of smoothing parameter, coefficient combinations without ever converging. `<bam>` includes some checks to limit this behaviour, and the further checks in the algorithm used by `bam(...,discrete=TRUE)` actually guarantee convergence in some cases, but in general guarantees are not possible. See Wood, Goude and Shaw (2015) and Wood et al. (2017).
`<gam>` when used with ‘general’ families (such as `<multinom>` or `cox.ph`) can also use a potentially faster scheme based on the extended Fellner-Schall method (Wood and Fasiolo, 2017). This also operates with a single iteration and is not guaranteed to converge, theoretically.
There are three things that you can try to speed up GAM fitting. (i) if you have large numbers of smoothing parameters in the generalized case, then try the `"bfgs"` method option in `<gam>` argument `optimizer`: this can be faster than the default. (ii) Try using `<bam>`. (iii) For large datasets it may be worth changing the smoothing basis to use `bs="cr"` (see `<s>` for details) for 1-d smooths, and to use `<te>` smooths in place of `<s>` smooths for smooths of more than one variable. This is because the default thin plate regression spline basis `"tp"` is costly to set up for large datasets.
If you have convergence problems, it's worth noting that a GAM is just a (penalized) GLM and the IRLS scheme used to estimate GLMs is not guaranteed to converge. Hence non-convergence of a GAM may relate to a lack of stability in the basic IRLS scheme. Therefore it is worth trying to establish whether the IRLS iterations are capable of converging. To do this, fit the problematic GAM with all smooth terms specified with `fx=TRUE`, so that the smoothing parameters are all fixed at zero. If this ‘largest’ model can converge, then the maintainer would quite like to know about your problem! If it doesn't converge, then it's likely that your model is just too flexible for the IRLS process itself. Having tried increasing `maxit` in `gam.control`, there are several other possibilities for stabilizing the iteration. It is possible to try (i) setting lower bounds on the smoothing parameters using the `min.sp` argument of `gam`: this may or may not change the model being fitted; (ii) reducing the flexibility of the model by reducing the basis dimensions `k` in the specification of `s` and `te` model terms: this obviously changes the model being fitted somewhat.
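The `fx=TRUE` check described above can be sketched as follows (illustrative only, on well-behaved simulated data; for a problematic model you would substitute your own formula and data):

```r
library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 400, dist = "binary", scale = .33)
## all smooths unpenalized: does plain IRLS converge for the 'largest' model?
b <- gam(y ~ s(x0, fx = TRUE) + s(x1, fx = TRUE) + s(x2, fx = TRUE),
         family = binomial, data = dat)
b$converged  # logical; FALSE would suggest the model is too flexible for IRLS
```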
Usually, a major contributor to fitting difficulties is that the model is a very poor description of the data.
Please report convergence problems, especially if there is no obvious pathology in the data/model that suggests convergence should fail.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Key References on this implementation:
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models (with discussion). Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
Wood, S.N., Goude, Y. & Shaw S. (2015) Generalized additive models for large datasets. Journal of the Royal Statistical Society, Series C 64(1): 139-155.
Wood, S.N., Li, Z., Shaddick, G. & Augustin N.H. (2017) Generalized additive models for gigadata: modelling the UK black smoke network daily data. Journal of the American Statistical Association.
Wood, S.N. and M. Fasiolo (2017) A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models, Biometrics.
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
`formXtViX` Form component of GAMM covariance matrix
-----------------------------------------------------
### Description
This is a service routine for `<gamm>`. Given, *V*, an estimated covariance matrix obtained using `[extract.lme.cov2](extract.lme.cov)` this routine forms a matrix square root of *X'inv(V)X* as efficiently as possible, given the structure of *V* (usually sparse).
### Usage
```
formXtViX(V,X)
```
### Arguments
| | |
| --- | --- |
| `V` | A data covariance matrix list returned from `[extract.lme.cov2](extract.lme.cov)` |
| `X` | A model matrix. |
### Details
The covariance matrix returned by `[extract.lme.cov2](extract.lme.cov)` may be in a packed and re-ordered format, since it is usually sparse. Hence a special service routine is required to form the required products involving this matrix.
### Value
A matrix, `R`, such that `crossprod(R)` gives *X'inv(V)X*.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
For `lme` see:
Pinheiro J.C. and Bates, D.M. (2000) Mixed effects Models in S and S-PLUS. Springer
For details of how GAMMs are set up for estimation using `lme` see:
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for Generalized Additive Mixed Models. Biometrics 62(4):1025-1036
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gamm>`, `[extract.lme.cov2](extract.lme.cov)`
### Examples
```
require(mgcv)
library(nlme)
data(ergoStool)
b <- lme(effort ~ Type, data=ergoStool, random=~1|Subject)
V1 <- extract.lme.cov(b, ergoStool)
V2 <- extract.lme.cov2(b, ergoStool)
X <- model.matrix(b, data=ergoStool)
crossprod(formXtViX(V2, X))
t(X)%*%solve(V1)%*%X ## compare: the same matrix computed the slow, dense way
```
r None
`bandchol` Choleski decomposition of a band diagonal matrix
------------------------------------------------------------
### Description
Computes Choleski decomposition of a (symmetric positive definite) band-diagonal matrix, `A`.
### Usage
```
bandchol(B)
```
### Arguments
| | |
| --- | --- |
| `B` | An n by k matrix containing the diagonals of the matrix `A` to be decomposed. First row is leading diagonal, next is first sub-diagonal, etc. sub-diagonals are zero padded at the end. Alternatively gives `A` directly, i.e. a square matrix with 2k-1 non zero diagonals (those from the lower triangle are not accessed). |
### Details
Calls `dpbtrf` from `LAPACK`. The point of this is that it has *O(k^2n)* computational cost, rather than the *O(n^3)* required by dense matrix methods.
### Value
Let `R` be the factor such that `t(R)%*%R = A`. `R` is upper triangular and if the rows of `B` contained the diagonals of `A` on entry, then what is returned is an n by k matrix containing the diagonals of `R`, packed as `B` was packed on entry. If `B` was square on entry, then `R` is returned directly. See examples.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Anderson, E., Bai, Z., Bischof, C., Blackford, S., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A. and Sorensen, D., 1999. LAPACK Users' guide (Vol. 9). Siam.
### Examples
```
require(mgcv)
## simulate a banded diagonal matrix
n <- 7;set.seed(8)
A <- matrix(0,n,n)
sdiag(A) <- runif(n);sdiag(A,1) <- runif(n-1)
sdiag(A,2) <- runif(n-2)
A <- crossprod(A)
## full matrix form...
bandchol(A)
chol(A) ## for comparison
## compact storage form...
B <- matrix(0,3,n)
B[1,] <- sdiag(A);B[2,1:(n-1)] <- sdiag(A,1)
B[3,1:(n-2)] <- sdiag(A,2)
bandchol(B)
```
r None
`gamSim` Simulate example data for GAMs
----------------------------------------
### Description
Function used to simulate data sets to illustrate the use of `<gam>` and `<gamm>`. Mostly used in help files to keep down the length of the example code sections.
### Usage
```
gamSim(eg=1,n=400,dist="normal",scale=2,verbose=TRUE)
```
### Arguments
| | |
| --- | --- |
| `eg` | numeric value specifying the example required. |
| `n` | number of data to simulate. |
| `dist` | character string which may be used to specify the distribution of the response. |
| `scale` | Used to set noise level. |
| `verbose` | Should information about simulation type be printed? |
### Details
See the source code for exactly what is simulated in each case.
1. Gu and Wahba 4 univariate term example.
2. A smooth function of 2 variables.
3. Example with continuous by variable.
4. Example with factor by variable.
5. An additive example plus a factor variable.
6. Additive + random effect.
7. As 1 but with correlated covariates.
### Value
Depends on `eg`, but usually a data frame, which may also contain some information on the underlying truth. Sometimes a list with more items, including a data frame for model fitting. See the source code or help file examples where the function is used for further information.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<gam>`, `<gamm>`
### Examples
```
## see ?gam
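## Extra sketch (not part of the original help file): simulate the
## Gu & Wahba four term example (eg=1) and inspect what is returned.
library(mgcv)
set.seed(2)
dat <- gamSim(eg=1, n=200, dist="normal", scale=2, verbose=FALSE)
names(dat)  ## response y, covariates x0-x3, plus true components f, f0-f3
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data=dat)
summary(b)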
```
r None
`pcls` Penalized Constrained Least Squares Fitting
---------------------------------------------------
### Description
Solves least squares problems with quadratic penalties subject to linear equality and inequality constraints using quadratic programming.
### Usage
```
pcls(M)
```
### Arguments
| | |
| --- | --- |
| `M` | is the single list argument to `pcls`. It should have the following elements: y
The response data vector. w
A vector of weights for the data (often proportional to the reciprocal of the variance). X
The design matrix for the problem, note that `ncol(M$X)` must give the number of model parameters, while `nrow(M$X)` should give the number of data. C
Matrix containing any linear equality constraints on the problem (e.g. *C* in *Cp=c*). If you have no equality constraints initialize this to a zero by zero matrix. Note that there is no need to supply the vector *c*, it is defined implicitly by the initial parameter estimates *p*. S
A list of penalty matrices. `S[[i]]` is the smallest contiguous matrix including all the non-zero elements of the ith penalty matrix. The first parameter it penalizes is given by `off[i]+1` (starting counting at 1). off
Offset values locating the elements of `M$S` in the correct location within each penalty coefficient matrix. (Zero offset implies starting in first location) sp
An array of smoothing parameter estimates. p
An array of feasible initial parameter estimates - these must satisfy the constraints, but should avoid satisfying the inequality constraints as equality constraints. Ain
Matrix for the inequality constraints *A\_in p > b\_in*. bin
The vector *b\_in* in the inequality constraints. |
### Details
This solves the problem:
*min || W^0.5 (Xp-y) ||^2 + lambda\_1 p'S\_1 p + lambda\_2 p'S\_2 p + . . .*
subject to constraints *Cp=c* and *A\_in p > b\_in*, w.r.t. *p* given the smoothing parameters *lambda\_i*. *X* is a design matrix, *p* a parameter vector, *y* a data vector, *W* a diagonal weight matrix, *S\_i* a positive semi-definite matrix of coefficients defining the ith penalty and *C* a matrix of coefficients defining the linear equality constraints on the problem. The smoothing parameters are the *lambda\_i*. Note that *X* must be of full column rank, at least when projected into the null space of any equality constraints. *A\_in* is a matrix of coefficients defining the inequality constraints, while *b\_in* is a vector involved in defining the inequality constraints.
Quadratic programming is used to perform the solution. The method used is designed for maximum stability with least squares problems: i.e. *X'X* is not formed explicitly. See Gill et al. 1981.
### Value
The function returns an array containing the estimated parameter vector.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Gill, P.E., Murray, W. and Wright, M.H. (1981) Practical Optimization. Academic Press, London.
Wood, S.N. (1994) Monotonic smoothing splines fitted by cross validation SIAM Journal on Scientific Computing 15(5):1126-1133
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<magic>`, `<mono.con>`
### Examples
```
require(mgcv)
# first an un-penalized example - fit E(y)=a+bx subject to a>0
set.seed(0)
n <- 100
x <- runif(n); y <- x - 0.2 + rnorm(n)*0.1
M <- list(X=matrix(0,n,2),p=c(0.1,0.5),off=array(0,0),S=list(),
Ain=matrix(0,1,2),bin=0,C=matrix(0,0,0),sp=array(0,0),y=y,w=y*0+1)
M$X[,1] <- 1; M$X[,2] <- x; M$Ain[1,] <- c(1,0)
pcls(M) -> M$p
plot(x,y); abline(M$p,col=2); abline(coef(lm(y~x)),col=3)
# Penalized example: monotonic penalized regression spline .....
# Generate data from a monotonic truth.
x <- runif(100)*4-1;x <- sort(x);
f <- exp(4*x)/(1+exp(4*x)); y <- f+rnorm(100)*0.1; plot(x,y)
dat <- data.frame(x=x,y=y)
# Show regular spline fit (and save fitted object)
f.ug <- gam(y~s(x,k=10,bs="cr")); lines(x,fitted(f.ug))
# Create Design matrix, constraints etc. for monotonic spline....
sm <- smoothCon(s(x,k=10,bs="cr"),dat,knots=NULL)[[1]]
F <- mono.con(sm$xp); # get constraints
G <- list(X=sm$X,C=matrix(0,0,0),sp=f.ug$sp,p=sm$xp,y=y,w=y*0+1)
G$Ain <- F$A;G$bin <- F$b;G$S <- sm$S;G$off <- 0
p <- pcls(G); # fit spline (using s.p. from unconstrained fit)
fv<-Predict.matrix(sm,data.frame(x=x))%*%p
lines(x,fv,col=2)
# now a tprs example of the same thing....
f.ug <- gam(y~s(x,k=10)); lines(x,fitted(f.ug))
# Create Design matrix, constraints etc. for monotonic spline....
sm <- smoothCon(s(x,k=10,bs="tp"),dat,knots=NULL)[[1]]
xc <- 0:39/39 # points on [0,1]
nc <- length(xc) # number of constraints
xc <- xc*4-1 # points at which to impose constraints
A0 <- Predict.matrix(sm,data.frame(x=xc))
# ... A0%*%p evaluates spline at xc points
A1 <- Predict.matrix(sm,data.frame(x=xc+1e-6))
A <- (A1-A0)/1e-6
## ... approx. constraint matrix (A%*%p is -ve
## spline gradient at points xc)
G <- list(X=sm$X,C=matrix(0,0,0),sp=f.ug$sp,y=y,w=y*0+1,S=sm$S,off=0)
G$Ain <- A; # constraint matrix
G$bin <- rep(0,nc); # constraint vector
G$p <- rep(0,10); G$p[10] <- 0.1
# ... monotonic start params, got by setting coefs of polynomial part
p <- pcls(G); # fit spline (using s.p. from unconstrained fit)
fv2 <- Predict.matrix(sm,data.frame(x=x))%*%p
lines(x,fv2,col=3)
######################################
## monotonic additive model example...
######################################
## First simulate data...
set.seed(10)
f1 <- function(x) 5*exp(4*x)/(1+exp(4*x));
f2 <- function(x) {
ind <- x > .5
f <- x*0
f[ind] <- (x[ind] - .5)^2*10
f
}
f3 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 +
10 * (10 * x)^3 * (1 - x)^10
n <- 200
x <- runif(n); z <- runif(n); v <- runif(n)
mu <- f1(x) + f2(z) + f3(v)
y <- mu + rnorm(n)
## Preliminary unconstrained gam fit...
G <- gam(y~s(x)+s(z)+s(v,k=20),fit=FALSE)
b <- gam(G=G)
## generate constraints, by finite differencing
## using predict.gam ....
eps <- 1e-7
pd0 <- data.frame(x=seq(0,1,length=100),z=rep(.5,100),
v=rep(.5,100))
pd1 <- data.frame(x=seq(0,1,length=100)+eps,z=rep(.5,100),
v=rep(.5,100))
X0 <- predict(b,newdata=pd0,type="lpmatrix")
X1 <- predict(b,newdata=pd1,type="lpmatrix")
Xx <- (X1 - X0)/eps ## Xx %*% coef(b) must be positive
pd0 <- data.frame(z=seq(0,1,length=100),x=rep(.5,100),
v=rep(.5,100))
pd1 <- data.frame(z=seq(0,1,length=100)+eps,x=rep(.5,100),
v=rep(.5,100))
X0 <- predict(b,newdata=pd0,type="lpmatrix")
X1 <- predict(b,newdata=pd1,type="lpmatrix")
Xz <- (X1-X0)/eps
G$Ain <- rbind(Xx,Xz) ## inequality constraint matrix
G$bin <- rep(0,nrow(G$Ain))
G$C = matrix(0,0,ncol(G$X))
G$sp <- b$sp
G$p <- coef(b)
G$off <- G$off-1 ## to match what pcls is expecting
## force initial parameters to meet constraint
G$p[11:18] <- G$p[2:9]<- 0
p <- pcls(G) ## constrained fit
par(mfrow=c(2,3))
plot(b) ## original fit
b$coefficients <- p
plot(b) ## constrained fit
## note that standard errors in preceding plot are obtained from
## unconstrained fit
```
r None
`random.effects` Random effects in GAMs
----------------------------------------
### Description
The smooth components of GAMs can be viewed as random effects for estimation purposes. This means that more conventional random effects terms can be incorporated into GAMs in two ways. The first method converts all the smooths into fixed and random components suitable for estimation by standard mixed modelling software. Once the GAM is in this form then conventional random effects are easily added, and the whole model is estimated as a general mixed model. `<gamm>` and `gamm4` from the `gamm4` package operate in this way.
The second method represents the conventional random effects in a GAM in the same way that the smooths are represented — as penalized regression terms. This method can be used with `<gam>` by making use of `s(...,bs="re")` terms in a model: see `<smooth.construct.re.smooth.spec>`, for full details. The basic idea is that, e.g., `s(x,z,g,bs="re")` generates an i.i.d. Gaussian random effect with model matrix given by `model.matrix(~x:z:g-1)` — in principle such terms can take any number of arguments. This simple approach is sufficient for implementing a wide range of commonly used random effect structures. For example if `g` is a factor then `s(g,bs="re")` produces a random coefficient for each level of `g`, with the random coefficients all modelled as i.i.d. normal. If `g` is a factor and `x` is numeric, then `s(x,g,bs="re")` produces an i.i.d. normal random slope relating the response to `x` for each level of `g`. If `h` is another factor then `s(h,g,bs="re")` produces the usual i.i.d. normal `g` - `h` interaction. Note that a rather useful approximate test for zero random effect is also implemented for such terms based on Wood (2013). If the precision matrix is known to within a multiplicative constant, then this can be supplied via the `xt` argument of `s`. See <smooth.construct.re.smooth.spec> for details and example.
Alternatively, but less straightforwardly, the `paraPen` argument to `<gam>` can be used: see `<gam.models>`. If smoothing parameter estimation is by ML or REML (e.g. `gam(...,method="REML")`) then this approach is a completely conventional likelihood based treatment of random effects.
`gam` can be slow for fitting models with large numbers of random effects, because it does not exploit the sparsity that is often a feature of parametric random effects. It cannot be used for models with more coefficients than data. However, `gam` is often faster and more reliable than `gamm` or `gamm4` when the number of random effects is modest.
To facilitate the use of random effects with `gam`, `<gam.vcomp>` is a utility routine for converting smoothing parameters to variance components. It also provides confidence intervals, if smoothness estimation is by ML or REML.
Note that treating random effects as smooths does not remove the usual problems associated with testing variance components for equality to zero: see `<summary.gam>` and `<anova.gam>`.
### Author(s)
Simon Wood <[email protected]>
### References
Wood, S.N. (2013) A simple test for random effects in regression models. Biometrika 100:1005-1010
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
Wood, S.N. (2008) Fast stable direct fitting and smoothness selection for generalized additive models. Journal of the Royal Statistical Society (B) 70(3):495-518
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
### See Also
`<gam.vcomp>`, `<gam.models>`, `<smooth.terms>`, `<smooth.construct.re.smooth.spec>`, `<gamm>`
### Examples
```
## see also examples for gam.models, gam.vcomp, gamm
## and smooth.construct.re.smooth.spec
## simple comparison of lme and gam
require(mgcv)
require(nlme)
b0 <- lme(travel~1,data=Rail,~1|Rail,method="REML")
b <- gam(travel~s(Rail,bs="re"),data=Rail,method="REML")
intervals(b0)
gam.vcomp(b)
anova(b)
plot(b)
## simulate example...
dat <- gamSim(1,n=400,scale=2) ## simulate 4 term additive truth
fac <- sample(1:20,400,replace=TRUE)
b <- rnorm(20)*.5
dat$y <- dat$y + b[fac]
dat$fac <- as.factor(fac)
rm1 <- gam(y ~ s(fac,bs="re")+s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="ML")
gam.vcomp(rm1)
fv0 <- predict(rm1,exclude="s(fac)") ## predictions setting r.e. to 0
fv1 <- predict(rm1) ## predictions setting r.e. to predicted values
## prediction setting r.e. to 0 and not having to provide 'fac'...
pd <- dat; pd$fac <- NULL
fv0 <- predict(rm1,pd,exclude="s(fac)",newdata.guaranteed=TRUE)
## Prediction with levels of fac not in fit data.
## The effect of the new factor levels (or any interaction involving them)
## is set to zero.
xx <- seq(0,1,length=10)
pd <- data.frame(x0=xx,x1=xx,x2=xx,x3=xx,fac=c(1:10,21:30))
fv <- predict(rm1,pd)
pd$fac <- NULL
fv0 <- predict(rm1,pd,exclude="s(fac)",newdata.guaranteed=TRUE)
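## Extra sketch (not part of the original example): the claim above that
## s(g,bs="re") has model matrix model.matrix(~g-1) can be checked
## directly by building the smooth with smoothCon...
library(mgcv)
g <- factor(rep(1:4, each=5))
sm <- smoothCon(s(g, bs="re"), data=data.frame(g=g), knots=NULL)[[1]]
max(abs(sm$X - model.matrix(~g-1)))  ## should be zero if the claim holds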
```
| programming_docs |
r None
`smooth.info` Generic function to provide extra information about smooth specification
---------------------------------------------------------------------------------------
### Description
Takes a smooth specification object and adds extra basis specific information to it before the smooth constructor is called. The default method returns the supplied object unmodified.
### Usage
```
smooth.info(object)
```
### Arguments
| | |
| --- | --- |
| `object` | is a smooth specification object |
### Details
Sometimes it is necessary to know something about a smoother before it is constructed, beyond what is in the initial smooth specification object. For example, some smooth terms could be set up as tensor product smooths, and it is useful for `<bam>` to take advantage of this when discrete covariate methods are used. However, `<bam>` needs to know whether a smoother falls into this category before it is constructed, in order to discretize its covariates marginally instead of jointly. Rather than `<bam>` having a hard-coded list of such smooth classes, it is preferable for the smooth specification objects to report this themselves. `smooth.info` method functions are the means for achieving this. When interpreting a gam formula, the `smooth.info` function is applied to each smooth specification object as soon as it is produced (in `interpret.gam0`).
### Value
A smooth specification object, which may be modified in some way.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### See Also
`<bam>`, `<smooth.construct>`, `[PredictMat](smoothcon)`
### Examples
```
# See smooth.construct examples
spec <- s(a,bs="re")
class(spec)
spec$tensor.possible
spec <- smooth.info(spec)
spec$tensor.possible
```
r None
`gamlss.etamu` Transform derivatives wrt mu to derivatives wrt linear predictor
--------------------------------------------------------------------------------
### Description
Mainly intended for internal use in specifying location scale models. Let `g(mu) = lp`, where `lp` is the linear predictor, and `g` is the link function. Assume that we have calculated the derivatives of the log-likelihood wrt `mu`. This function uses the chain rule to calculate the derivatives of the log-likelihood wrt `lp`. See `<trind.generator>` for array packing conventions.
### Usage
```
gamlss.etamu(l1, l2, l3 = NULL, l4 = NULL, ig1, g2, g3 = NULL,
g4 = NULL, i2, i3 = NULL, i4 = NULL, deriv = 0)
```
### Arguments
| | |
| --- | --- |
| `l1` | array of 1st order derivatives of log-likelihood wrt mu. |
| `l2` | array of 2nd order derivatives of log-likelihood wrt mu. |
| `l3` | array of 3rd order derivatives of log-likelihood wrt mu. |
| `l4` | array of 4th order derivatives of log-likelihood wrt mu. |
| `ig1` | reciprocal of the first derivative of the link function wrt the linear predictor. |
| `g2` | array containing the 2nd order derivative of the link function wrt the linear predictor. |
| `g3` | array containing the 3rd order derivative of the link function wrt the linear predictor. |
| `g4` | array containing the 4th order derivative of the link function wrt the linear predictor. |
| `i2` | two-dimensional index array, such that `l2[,i2[i,j]]` contains the partial w.r.t. params indexed by i,j with no restriction on the index values (except that they are in 1,...,ncol(l1)). |
| `i3` | three-dimensional index array, such that `l3[,i3[i,j,k]]` contains the partial w.r.t. params indexed by i,j,k. |
| `i4` | four-dimensional index array, such that `l4[,i4[i,j,k,l]]` contains the partial w.r.t. params indexed by i,j,k,l. |
| `deriv` | if `deriv==0` only first and second order derivatives will be calculated. If `deriv==1` the function goes up to 3rd order, and if `deriv==2` it provides also 4th order derivatives. |
### Value
A list where the arrays `l1`, `l2`, `l3`, `l4` contain the derivatives (up to order four) of the log-likelihood wrt the linear predictor.
### Author(s)
Simon N. Wood <[email protected]>.
### See Also
`<trind.generator>`
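### Examples

A minimal numerical sketch (not part of the package documentation) of the first order chain rule that this routine applies: with a log link, `mu = exp(eta)`, so `dmu/deta = mu` and hence `ig1 = mu` here. The Poisson log-likelihood is used purely for illustration.

```
y <- 3; mu <- 2; eta <- log(mu)
l1.mu <- y/mu - 1            ## d loglik / d mu for the Poisson
ig1 <- mu                    ## dmu/deta = 1/g'(mu) for the log link
l1.eta <- l1.mu * ig1        ## chain rule: d loglik / d eta
## finite difference check of l1.eta
ll <- function(eta) dpois(y, exp(eta), log=TRUE)
(ll(eta + 1e-6) - ll(eta - 1e-6))/2e-6
```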
r None
`smooth.construct.mrf.smooth.spec` Markov Random Field Smooths
---------------------------------------------------------------
### Description
For data observed over discrete spatial units, a simple Markov random field smoother is sometimes appropriate. These functions provide such a smoother class for `mgcv`. See details for how to deal with regions with missing data.
### Usage
```
## S3 method for class 'mrf.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'mrf.smooth'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | For the `smooth.construct` method a smooth specification object, usually generated by a term `s(x,...,bs="mrf",xt=list(polys=foo))`. `x` is a factor variable giving labels for geographic districts, and the `xt` argument is obligatory: see details. For the `Predict.matrix` method an object of class `"mrf.smooth"` produced by the `smooth.construct` method. |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | If data were not observed for every geographic area, then this argument is used to provide the labels for all the areas (observed and unobserved). |
### Details
A Markov random field smooth over a set of discrete areas is defined using a set of area labels, and a neighbourhood structure for the areas. The covariate of the smooth is the vector of area labels corresponding to each observation. This covariate should be a factor, or capable of being coerced to a factor.
The neighbourhood structure is supplied in the `xt` argument to `s`. This must contain at least one of the elements `polys`, `nb` or `penalty`.
polys
contains the polygons defining the geographic areas. It is a list with as many elements as there are geographic areas. `names(polys)` must correspond to the levels of the argument of the smooth, in any order (i.e. it gives the area labels). `polys[[i]]` is a 2 column matrix the rows of which specify the vertices of the polygon(s) defining the boundary of the ith area. A boundary may be made up of several closed loops: these must be separated by `NA` rows. A polygon within another is treated as a hole. The first polygon in any `polys[[i]]` should not be a hole. An example of the structure is provided by `[columb.polys](columb)` (which contains an artificial hole in its second element, for illustration). Any list elements with duplicate names are combined into a single NA separated matrix.
Plotting of the smooth is not possible without a `polys` object.
If `polys` is the only element of `xt` provided, then the neighbourhood structure is computed from it automatically. To count as neighbours, polygons must exactly share one or more vertices.
nb
is a named list defining the neighbourhood structure. `names(nb)` must correspond to the levels of the covariate of the smooth (i.e. the area labels), but can be in any order. `nb[[i]]` is a numeric vector indexing the neighbours of the ith area (and should not include `i`). All indices are relative to `nb` itself, but can be translated using `names(nb)`. See example code. As an alternative, each `nb[[i]]` can be an array of the names of the neighbours, but these will be converted to arrays of numeric indices internally.
If no `penalty` is provided then it is computed automatically from this list. The ith row of the penalty matrix will be zero everywhere, except in the ith column, which will contain the number of neighbours of the ith geographic area, and the columns corresponding to those geographic neighbours, which will each contain -1.
penalty
if this is supplied, then it is used as the penalty matrix. It should be positive semi-definite. Its row and column names should correspond to the levels of the covariate.
If no basis dimension is supplied then the constructor produces a full rank MRF, with a coefficient for each geographic area. Otherwise a low rank approximation is obtained based on truncation of the parameterization given in Wood (2017) Section 5.4.2. See Wood (2017, section 5.8.1).
Note that smooths of this class have a built in plot method, and that the utility function `<in.out>` can be useful for working with discrete area data. The plot method has two schemes, `scheme==0` is colour, `scheme==1` is grey scale.
The situation in which there are areas with no data requires special handling. You should set `drop.unused.levels=FALSE` in the model fitting function, `<gam>`, `<bam>` or `<gamm>`, having first ensured that any fixed effect factors do not contain unobserved levels. Also make sure that the basis dimension is set to ensure that the total number of coefficients is less than the number of observations.
### Value
An object of class `"mrf.smooth"` or a matrix mapping the coefficients of the MRF smooth to the predictions for the areas listed in `data`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) and Thomas Kneib (Fabian Scheipl prototyped the low rank MRF idea)
### References
Wood S.N. (2017) Generalized additive models: an introduction with R (2nd edition). CRC.
### See Also
`<in.out>`, `<polys.plot>`
### Examples
```
library(mgcv)
## Load Columbus Ohio crime data (see ?columbus for details and credits)
data(columb) ## data frame
data(columb.polys) ## district shapes list
xt <- list(polys=columb.polys) ## neighbourhood structure info for MRF
par(mfrow=c(2,2))
## First a full rank MRF...
b <- gam(crime ~ s(district,bs="mrf",xt=xt),data=columb,method="REML")
plot(b,scheme=1)
## Compare to reduced rank version...
b <- gam(crime ~ s(district,bs="mrf",k=20,xt=xt),data=columb,method="REML")
plot(b,scheme=1)
## An important covariate added...
b <- gam(crime ~ s(district,bs="mrf",k=20,xt=xt)+s(income),
data=columb,method="REML")
plot(b,scheme=c(0,1))
## plot fitted values by district
par(mfrow=c(1,1))
fv <- fitted(b)
names(fv) <- as.character(columb$district)
polys.plot(columb.polys,fv)
## Examine an example neighbourhood list - this one auto-generated from
## 'polys' above.
nb <- b$smooth[[1]]$xt$nb
head(nb)
names(nb) ## these have to match the factor levels of the smooth
## look at the indices of the neighbours of the first entry,
## named '0'...
nb[['0']] ## by name
nb[[1]] ## same by index
## ... and get the names of these neighbours from their indices...
names(nb)[nb[['0']]]
b1 <- gam(crime ~ s(district,bs="mrf",k=20,xt=list(nb=nb))+s(income),
data=columb,method="REML")
b1 ## fit unchanged
plot(b1) ## but now there is no information with which to plot the mrf
```
r None
`choose.k` Basis dimension choice for smooths
----------------------------------------------
### Description
Choosing the basis dimension, and checking the choice, when using penalized regression smoothers.
Penalized regression smoothers gain computational efficiency by virtue of being defined using a basis of relatively modest size, `k`. When setting up models in the `mgcv` package, using `<s>` or `<te>` terms in a model formula, `k` must be chosen: the defaults are essentially arbitrary.
In practice `k-1` (or `k`) sets the upper limit on the degrees of freedom associated with an `<s>` smooth (1 degree of freedom is usually lost to the identifiability constraint on the smooth). For `<te>` smooths the upper limit of the degrees of freedom is given by the product of the `k` values provided for each marginal smooth less one, for the constraint. However the actual effective degrees of freedom are controlled by the degree of penalization selected during fitting, by GCV, AIC, REML or whatever is specified. The exception to this is if a smooth is specified using the `fx=TRUE` option, in which case it is unpenalized.
So, exact choice of `k` is not generally critical: it should be chosen to be large enough that you are reasonably sure of having enough degrees of freedom to represent the underlying ‘truth’ reasonably well, but small enough to maintain reasonable computational efficiency. Clearly ‘large’ and ‘small’ are dependent on the particular problem being addressed.
As with all model assumptions, it is useful to be able to check the choice of `k` informally. If the effective degrees of freedom for a model term are estimated to be much less than `k-1` then this is unlikely to be very worthwhile, but as the EDF approach `k-1`, checking can be important. A useful general purpose approach goes as follows: (i) fit your model and extract the deviance residuals; (ii) for each smooth term in your model, fit an equivalent, single, smooth to the residuals, using a substantially increased `k` to see if there is pattern in the residuals that could potentially be explained by increasing `k`. Examples are provided below.
The obvious, but more costly, alternative is simply to increase the suspect `k` and refit the original model. If there are no statistically important changes as a result of doing this, then `k` was large enough. (Change in the smoothness selection criterion, and/or the effective degrees of freedom, when `k` is increased, provide the obvious numerical measures for whether the fit has changed substantially.)
`<gam.check>` runs a simple simulation based check on the basis dimensions, which can help to flag up terms for which `k` is too low. Grossly too small `k` will also be visible from partial residuals available with `<plot.gam>`.
One scenario that can cause confusion is this: a model is fitted with `k=10` for a smooth term, and the EDF for the term is estimated as 7.6, some way below the maximum of 9. The model is then refitted with `k=20` and the EDF increases to 8.7 - what is happening - how come the EDF was not 8.7 the first time around? The explanation is that the function space with `k=20` contains a larger subspace of functions with EDF 8.7 than did the function space with `k=10`: one of the functions in this larger subspace fits the data a little better than did any function in the smaller subspace. These subtleties seldom have much impact on the statistical conclusions to be drawn from a model fit, however.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). CRC/Taylor & Francis.
<https://www.maths.ed.ac.uk/~swood34/>
### Examples
```
## Simulate some data ....
library(mgcv)
set.seed(1)
dat <- gamSim(1,n=400,scale=2)
## fit a GAM with quite low `k'
b<-gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=6)+s(x3,k=6),data=dat)
plot(b,pages=1,residuals=TRUE) ## hint of a problem in s(x2)
## the following suggests a problem with s(x2)
gam.check(b)
## Another approach (see below for more obvious method)....
## check for residual pattern, removeable by increasing `k'
## typically `k', below, should be substantially larger than
## the original `k', but certainly less than n/2.
## Note use of cheap "cs" shrinkage smoothers, and gamma=1.4
## to reduce chance of overfitting...
rsd <- residuals(b)
gam(rsd~s(x0,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam(rsd~s(x1,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam(rsd~s(x2,k=40,bs="cs"),gamma=1.4,data=dat) ## `k' too low
gam(rsd~s(x3,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
## refit...
b <- gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=20)+s(x3,k=6),data=dat)
gam.check(b) ## better
## similar example with multi-dimensional smooth
b1 <- gam(y~s(x0)+s(x1,x2,k=15)+s(x3),data=dat)
rsd <- residuals(b1)
gam(rsd~s(x0,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam(rsd~s(x1,x2,k=100,bs="ts"),gamma=1.4,data=dat) ## `k' too low
gam(rsd~s(x3,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam.check(b1) ## shows same problem
## and a `te' example
b2 <- gam(y~s(x0)+te(x1,x2,k=4)+s(x3),data=dat)
rsd <- residuals(b2)
gam(rsd~s(x0,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam(rsd~te(x1,x2,k=10,bs="cs"),gamma=1.4,data=dat) ## `k' too low
gam(rsd~s(x3,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam.check(b2) ## shows same problem
## same approach works with other families in the original model
dat <- gamSim(1,n=400,scale=.25,dist="poisson")
bp<-gam(y~s(x0,k=5)+s(x1,k=5)+s(x2,k=5)+s(x3,k=5),
family=poisson,data=dat,method="ML")
gam.check(bp)
rsd <- residuals(bp)
gam(rsd~s(x0,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam(rsd~s(x1,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
gam(rsd~s(x2,k=40,bs="cs"),gamma=1.4,data=dat) ## `k' too low
gam(rsd~s(x3,k=40,bs="cs"),gamma=1.4,data=dat) ## fine
rm(dat)
## More obvious, but more expensive tactic... Just increase
## suspicious k until fit is stable.
set.seed(0)
dat <- gamSim(1,n=400,scale=2)
## fit a GAM with quite low `k'
b <- gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=6)+s(x3,k=6),
data=dat,method="REML")
b
## edf for 3rd smooth is highest as proportion of k -- increase k
b <- gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=12)+s(x3,k=6),
data=dat,method="REML")
b
## edf substantially up, -ve REML substantially down
b <- gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=24)+s(x3,k=6),
data=dat,method="REML")
b
## slight edf increase and -ve REML change
b <- gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=40)+s(x3,k=6),
data=dat,method="REML")
b
## definitely stabilized (but really k around 20 would have been fine)
```
r None
`smooth.construct.fs.smooth.spec` Factor smooth interactions in GAMs
---------------------------------------------------------------------
### Description
Simple factor smooth interactions, which are efficient when used with `<gamm>`. This smooth class allows a separate smooth for each level of a factor, with the same smoothing parameter for all smooths. It is an alternative to using factor `by` variables.
See the discussion of `by` variables in `<gam.models>` for more general alternatives for factor smooth interactions (including interactions of tensor product smooths with factors).
### Usage
```
## S3 method for class 'fs.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'fs.interaction'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | For the `smooth.construct` method a smooth specification object, usually generated by a term `s(x,...,bs="fs")`. May have a `gamm` attribute: see details. For the `Predict.matrix` method an object of class `"fs.interaction"` produced by the `smooth.construct` method. |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term`. |
| `knots` | a list containing any knots supplied for smooth basis setup. |
### Details
This class produces a smooth for each level of a single factor variable. Within a `<gam>` formula this is done with something like `s(x,fac,bs="fs")`, which is almost equivalent to `s(x,by=fac,id=1)` (with the `gam` argument `select=TRUE`). The terms are fully penalized, with separate penalties on each null space component: for this reason they are not centred (no sum-to-zero constraint).
The class is particularly useful for use with `<gamm>`, where estimation efficiently exploits the nesting of the smooth within the factor. Note however that: i) `gamm` only allows one conditioning factor for smooths, so `s(x)+s(z,fac,bs="fs")+s(v,fac,bs="fs")` is OK, but `s(x)+s(z,fac1,bs="fs")+s(v,fac2,bs="fs")` is not; ii) all additional random effects and correlation structures will be treated as nested within the factor of the smooth factor interaction. To facilitate this the constructor is called from `<gamm>` with an attribute `"gamm"` attached to the smooth specification object. The result differs from that resulting from the case where this is not done.
Note that `gamm4` from the `gamm4` package suffers from none of the restrictions that apply to `gamm`, and `"fs"` terms can be used without side-effects. The constructor is still called with a smooth specification object having a `"gamm"` attribute.
Any singly penalized basis can be used to smooth at each factor level. The default is `"tp"`, but alternatives can be supplied in the `xt` argument of `s` (e.g. `s(x,fac,bs="fs",xt="cr")` or `s(x,fac,bs="fs",xt=list(bs="cr"))`). The `k` argument to `s(...,bs="fs")` refers to the basis dimension to use for each level of the factor variable.
Note one computational bottleneck: currently `<gamm>` (or `gamm4`) will produce the full posterior covariance matrix for the smooths, including the smooths at each level of the factor. This matrix can get large and computationally costly if there are more than a few hundred levels of the factor. Even at one or two hundred levels, care should be taken to keep down `k`.
The plot method for this class has two schemes. `scheme==0` is in colour, while `scheme==1` is black and white.
### Value
An object of class `"fs.interaction"` or a matrix mapping the coefficients of the factor smooth interaction to the smooths themselves. The contents of an `"fs.interaction"` object will depend on whether or not `smooth.construct` was called with an object with attribute `gamm`: see below.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<gam.models>`, `<gamm>`
### Examples
```
library(mgcv)
set.seed(0)
## simulate data...
f0 <- function(x) 2 * sin(pi * x)
f1 <- function(x,a=2,b=-1) exp(a * x)+b
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
n <- 500;nf <- 25
fac <- sample(1:nf,n,replace=TRUE)
x0 <- runif(n);x1 <- runif(n);x2 <- runif(n)
a <- rnorm(nf)*.2 + 2;b <- rnorm(nf)*.5
f <- f0(x0) + f1(x1,a[fac],b[fac]) + f2(x2)
fac <- factor(fac)
y <- f + rnorm(n)*2
## so response depends on global smooths of x0 and
## x2, and a smooth of x1 for each level of fac.
## fit model (note p-values not available when fit
## using gamm)...
bm <- gamm(y~s(x0)+ s(x1,fac,bs="fs",k=5)+s(x2,k=20))
plot(bm$gam,pages=1)
summary(bm$gam)
## Could also use...
## b <- gam(y~s(x0)+ s(x1,fac,bs="fs",k=5)+s(x2,k=20),method="ML")
## ... but it is slower (increasingly so with increasing nf)
## b <- gam(y~s(x0)+ t2(x1,fac,bs=c("tp","re"),k=5,full=TRUE)+
##          s(x2,k=20),method="ML")
## ... is exactly equivalent.
```
r None
`bug.reports.mgcv` Reporting mgcv bugs.
----------------------------------------
### Description
`mgcv` works largely because many people have reported bugs over the years. If you find something that looks like a bug, please report it, so that the package can be improved. `mgcv` does not have a large development budget, so it is a big help if bug reports follow the following guidelines.
The ideal report consists of an email to [[email protected]](mailto:[email protected]) with a subject line including `mgcv` somewhere, containing
1. The results of running `[sessionInfo](../../utils/html/sessioninfo)` in the R session where the problem occurs. This provides platform details, R and package version numbers, etc.
2. A brief description of the problem.
3. Short cut and paste-able code that produces the problem, including the code for loading/generating the data (using standard R functions like `load`, `read.table` etc).
4. Any required data files. If you send real data it will only be used for the purposes of de-bugging.
Of course if you have dug deeper and have an idea of what is causing the problem, that is also helpful to know, as is any suggested code fix. (Don't send a fixed package .tar.gz file, however - I can't use this).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
r None
`smooth.construct.re.smooth.spec` Simple random effects in GAMs
----------------------------------------------------------------
### Description
`<gam>` can deal with simple independent random effects, by exploiting the link between smooths and random effects to treat random effects as smooths. `s(x,bs="re")` implements this. Such terms can have any number of predictors, which can be any mixture of numeric or factor variables. The terms produce a parametric interaction of the predictors, and penalize the corresponding coefficients with a multiple of the identity matrix, corresponding to an assumption of i.i.d. normality. See details.
### Usage
```
## S3 method for class 're.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'random.effect'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | For the `smooth.construct` method a smooth specification object, usually generated by a term `s(x,...,bs="re")`. For the `Predict.matrix` method an object of class `"random.effect"` produced by the `smooth.construct` method. |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | generically a list containing any knots supplied for basis setup — unused at present. |
### Details
Exactly how the random effects are implemented is best seen by example. Consider the model term `s(x,z,bs="re")`. This will result in the model matrix component corresponding to `~x:z-1` being added to the model matrix for the whole model. The coefficients associated with the model matrix component are assumed i.i.d. normal, with unknown variance (to be estimated). This assumption is equivalent to an identity penalty matrix (i.e. a ridge penalty) on the coefficients. Because such a penalty is full rank, random effects terms do not require centering constraints.
If the nature of the random effect specification is not clear, consider a couple more examples: `s(x,bs="re")` results in `model.matrix(~x-1)` being appended to the overall model matrix, while `s(x,v,w,bs="re")` would result in `model.matrix(~x:v:w-1)` being appended to the model matrix. In both cases the corresponding model coefficients are assumed i.i.d. normal, and are hence subject to ridge penalties.
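The correspondence with `model.matrix` can be checked directly; a small sketch (variable names are illustrative, and `fit=FALSE` is used so that only the model setup is computed):

```r
library(mgcv)
set.seed(1)
n <- 50
x <- rnorm(n); z <- rnorm(n); y <- rnorm(n)
## set up, but do not fit, a model with an "re" term in x and z...
G <- gam(y ~ s(x, z, bs = "re"), fit = FALSE)
## columns of the "re" term in the model matrix should match
## those of model.matrix(~x:z-1)...
sm <- G$smooth[[1]]
Xre <- G$X[, sm$first.para:sm$last.para, drop = FALSE]
range(Xre - model.matrix(~ x:z - 1))  ## differences should be zero
```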
If the random effect precision matrix is of the form *sum\_j p\_j S\_j* for known matrices *S\_j* and unknown parameters *p\_j*, then a list containing the *S\_j* can be supplied in the `xt` argument of `<s>`. In this case an array `rank` should also be supplied in `xt` giving the ranks of the *S\_j* matrices. See simple example below.
Note that smooth `id`s are not supported for random effect terms. Unlike most smooth terms, side conditions are never applied to random effect terms in the event of nesting (since they are identifiable without side conditions).
Random effects implemented in this way do not exploit the sparse structure of many random effects, and may therefore be relatively inefficient for models with large numbers of random effects, when `gamm4` or `<gamm>` may be better alternatives. Note also that `<gam>` will not support models with more coefficients than data.
The situation in which factor variable random effects intentionally have unobserved levels requires special handling. You should set `drop.unused.levels=FALSE` in the model fitting function, `<gam>`, `<bam>` or `<gamm>`, having first ensured that any fixed effect factors do not contain unobserved levels.
The implementation is designed so that supplying random effect factor levels to `<predict.gam>` that were not levels of the factor when fitting, will result in the corresponding random effect (or interactions involving it) being set to zero (with zero standard error) for prediction. See `<random.effects>` for an example. This is achieved by the `Predict.matrix` method zeroing any rows of the prediction matrix involving factors that are `NA`. `<predict.gam>` will set any factor observation to `NA` if it is a level not present in the fit data.
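The behaviour for unobserved factor levels described above can be illustrated with a small sketch (see also the example in `<random.effects>`); the level `"99"` is never seen at fit time:

```r
library(mgcv)
set.seed(2)
fac <- factor(sample(1:10, 200, replace = TRUE))
y <- rnorm(10)[fac] + rnorm(200)
b <- gam(y ~ s(fac, bs = "re"), method = "REML")
## level "1" was present in the fit; level "99" was not, so its
## predicted random effect contribution is zero, with zero se...
newd <- data.frame(fac = factor(c("1", "99"),
                   levels = c(levels(fac), "99")))
predict(b, newd, se.fit = TRUE)
```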
### Value
An object of class `"random.effect"` or a matrix mapping the coefficients of the random effect to the random effects themselves.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2008) Fast stable direct fitting and smoothness selection for generalized additive models. Journal of the Royal Statistical Society (B) 70(3):495-518
### See Also
`<gam.vcomp>`, `<gamm>`
### Examples
```
## see ?gam.vcomp
require(mgcv)
## simulate simple random effect example
set.seed(4)
nb <- 50; n <- 400
b <- rnorm(nb)*2 ## random effect
r <- sample(1:nb,n,replace=TRUE) ## r.e. levels
y <- 2 + b[r] + rnorm(n)
r <- factor(r)
## fit model....
b <- gam(y ~ s(r,bs="re"),method="REML")
gam.vcomp(b)
## example with supplied precision matrices...
b <- c(rnorm(nb/2)*2,rnorm(nb/2)*.5) ## random effect now with 2 variances
r <- sample(1:nb,n,replace=TRUE) ## r.e. levels
y <- 2 + b[r] + rnorm(n)
r <- factor(r)
## known precision matrix components...
S <- list(diag(rep(c(1,0),each=nb/2)),diag(rep(c(0,1),each=nb/2)))
b <- gam(y ~ s(r,bs="re",xt=list(S=S,rank=c(nb/2,nb/2))),method="REML")
gam.vcomp(b)
summary(b)
```
r None
`trichol` Choleski decomposition of a tri-diagonal matrix
----------------------------------------------------------
### Description
Computes Choleski decomposition of a (symmetric positive definite) tri-diagonal matrix stored as a leading diagonal and sub/super diagonal.
### Usage
```
trichol(ld,sd)
```
### Arguments
| | |
| --- | --- |
| `ld` | leading diagonal of matrix |
| `sd` | sub-super diagonal of matrix |
### Details
Calls `dpttrf` from `LAPACK`. The point of this is that it has *O(n)* computational cost, rather than the *O(n^3)* required by dense matrix methods.
### Value
A list with elements `ld` and `sd`. `ld` is the leading diagonal and `sd` is the super diagonal of bidiagonal matrix *B* where *B'B=T* and *T* is the original tridiagonal matrix.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Anderson, E., Bai, Z., Bischof, C., Blackford, S., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A. and Sorensen, D., 1999. LAPACK Users' guide (Vol. 9). Siam.
### See Also
`<bandchol>`
### Examples
```
require(mgcv)
## simulate some diagonals...
set.seed(19); k <- 7
ld <- runif(k)+1
sd <- runif(k-1) -.5
## get diagonals of chol factor...
trichol(ld,sd)
## compare to dense matrix result...
A <- diag(ld);for (i in 1:(k-1)) A[i,i+1] <- A[i+1,i] <- sd[i]
R <- chol(A)
diag(R);diag(R[,-1])
```
r None
`gaulss` Gaussian location-scale model family
----------------------------------------------
### Description
The `gaulss` family implements Gaussian location scale additive models in which the mean and the logb of the standard deviation (see details) can depend on additive smooth predictors. Usable only with `<gam>`, the linear predictors are specified via a list of formulae.
### Usage
```
gaulss(link=list("identity","logb"),b=0.01)
```
### Arguments
| | |
| --- | --- |
| `link` | two item list specifying the link for the mean and the standard deviation. See details. |
| `b` | The minimum standard deviation, for the `"logb"` link. |
### Details
Used with `<gam>` to fit Gaussian location - scale models. `gam` is called with a list containing 2 formulae, the first specifies the response on the left hand side and the structure of the linear predictor for the mean on the right hand side. The second is one sided, specifying the linear predictor for the standard deviation on the right hand side.
Link functions `"identity"`, `"inverse"`, `"log"` and `"sqrt"` are available for the mean. For the standard deviation only the `"logb"` link is implemented: *eta = log(sigma-b)* and *sigma = b + exp(eta)*. This link is designed to avoid singularities in the likelihood caused by the standard deviation tending to zero. Note that internally the family is parameterized in terms of *tau=1/sigma*, i.e. the square root of the precision, so the link and inverse link are coded to reflect this; however, the relationships between the linear predictor and the standard deviation are as given above.
The fitted values for this family will be a two column matrix. The first column is the mean, and the second column is the inverse of the standard deviation. Predictions using `<predict.gam>` will also produce 2 column matrices for `type` `"link"` and `"response"`. The second column when `type="response"` is again on the reciprocal standard deviation scale (i.e. the square root precision scale). The second column when `type="link"` is *log(sigma-b)*. Also `<plot.gam>` will plot smooths relating to *sigma* on the *log(sigma-b)* scale (so high values correspond to high standard deviation and low values to low standard deviation). Similarly the smoothing penalties are applied on the (log) standard deviation scale, not the log precision scale.
The null deviance reported for this family is the sum of squares of the difference between the response and the mean response divided by the standard deviation of the response according to the model. The deviance is the sum of squares of residuals divided by model standard deviations.
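Since the second column of the fitted values is 1/sigma, the fitted standard deviation is recovered by taking the reciprocal; a minimal sketch using the `mcycle` data, as in the Examples below:

```r
library(mgcv); library(MASS)
b <- gam(list(accel ~ s(times, k = 20, bs = "ad"), ~ s(times)),
         data = mcycle, family = gaulss())
mu    <- fitted(b)[, 1]   ## fitted mean
sigma <- 1/fitted(b)[, 2] ## fitted standard deviation
plot(mcycle$times, sigma, type = "l",
     xlab = "times", ylab = "fitted standard deviation")
```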
### Value
An object inheriting from class `general.family`.
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv);library(MASS)
b <- gam(list(accel~s(times,k=20,bs="ad"),~s(times)),
data=mcycle,family=gaulss())
summary(b)
plot(b,pages=1,scale=0)
```
r None
`smooth.construct.t2.smooth.spec` Tensor product smoothing constructor
-----------------------------------------------------------------------
### Description
A special `smooth.construct` method function for creating tensor product smooths from any combination of single penalty marginal smooths, using the construction of Wood, Scheipl and Faraway (2013).
### Usage
```
## S3 method for class 't2.smooth.spec'
smooth.construct(object, data, knots)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object of class `t2.smooth.spec`, usually generated by a term like `t2(x,z)` in a `<gam>` model formula |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL`. See details for further information. |
### Details
Tensor product smooths are smooths of several variables which allow the degree of smoothing to be different with respect to different variables. They are useful as smooth interaction terms, as they are invariant to linear rescaling of the covariates, which means, for example, that they are insensitive to the measurement units of the different covariates. They are also useful whenever isotropic smoothing is inappropriate. See `<t2>`, `<te>`, `<smooth.construct>` and `<smooth.terms>`. The construction employed here produces tensor smooths for which the smoothing penalties are non-overlapping portions of the identity matrix. This makes their estimation by mixed modelling software rather easy.
### Value
An object of class `"t2.smooth"`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., F. Scheipl and J.J. Faraway (2013) Straightforward intermediate rank tensor product smoothing in mixed models. Statistics and Computing 23: 341-360.
### See Also
`<t2>`
### Examples
```
## see ?t2
```
r None
`gam.fit3` P-IRLS GAM estimation with GCV \& UBRE/AIC or RE/ML derivative calculation
--------------------------------------------------------------------------------------
### Description
Estimation of GAM smoothing parameters is most stable if optimization of the UBRE/AIC, GCV, GACV, REML or ML score is outer to the penalized iteratively re-weighted least squares scheme used to estimate the model given smoothing parameters.
This routine estimates a GAM (any quadratically penalized GLM) given log smoothing paramaters, and evaluates derivatives of the smoothness selection scores of the model with respect to the log smoothing parameters. Calculation of exact derivatives is generally faster than approximating them by finite differencing, as well as generally improving the reliability of GCV/UBRE/AIC/REML score minimization.
The approach is to run the P-IRLS to convergence, and only then to iterate for first and second derivatives.
Not normally called directly, but rather service routines for `<gam>`.
### Usage
```
gam.fit3(x, y, sp, Eb ,UrS=list(),
weights = rep(1, nobs), start = NULL, etastart = NULL,
mustart = NULL, offset = rep(0, nobs), U1 = diag(ncol(x)),
Mp = -1, family = gaussian(), control = gam.control(),
intercept = TRUE,deriv=2,gamma=1,scale=1,
printWarn=TRUE,scoreType="REML",null.coef=rep(0,ncol(x)),
pearson.extra=0,dev.extra=0,n.true=-1,Sl=NULL,...)
```
### Arguments
| | |
| --- | --- |
| `x` | The model matrix for the GAM (or any penalized GLM). |
| `y` | The response variable. |
| `sp` | The log smoothing parameters. |
| `Eb` | A balanced version of the total penalty matrix: used for numerical rank determination. |
| `UrS` | List of square root penalties premultiplied by transpose of orthogonal basis for the total penalty. |
| `weights` | prior weights for fitting. |
| `start` | optional starting parameter guesses. |
| `etastart` | optional starting values for the linear predictor. |
| `mustart` | optional starting values for the mean. |
| `offset` | the model offset |
| `U1` | An orthogonal basis for the range space of the penalty — required for ML smoothness estimation only. |
| `Mp` | The dimension of the total penalty null space — required for ML smoothness estimation only. |
| `family` | the family - actually this routine would never be called with `gaussian()` |
| `control` | control list as returned from `[glm.control](../../stats/html/glm.control)` |
| `intercept` | does the model have an intercept, `TRUE` or `FALSE` |
| `deriv` | Should derivatives of the GCV and UBRE/AIC scores be calculated? 0, 1 or 2, indicating the maximum order of differentiation to apply. |
| `gamma` | The weight given to each degree of freedom in the GCV and UBRE scores can be varied (usually increased) using this parameter. |
| `scale` | The scale parameter - needed for the UBRE/AIC score. |
| `printWarn` | Set to `FALSE` to suppress some warnings. Useful in order to ensure that some warnings are only printed if they apply to the final fitted model, rather than an intermediate used in optimization. |
| `scoreType` | specifies smoothing parameter selection criterion to use. |
| `null.coef` | coefficients for a model which gives some sort of upper bound on deviance. This allows immediate divergence problems to be controlled. |
| `pearson.extra` | Extra component to add to numerator of pearson statistic in P-REML/P-ML smoothness selection criteria. |
| `dev.extra` | Extra component to add to deviance for REML/ML type smoothness selection criteria. |
| `n.true` | Number of data to assume in smoothness selection criteria. <=0 indicates that it should be the number of rows of `X`. |
| `Sl` | A smooth list suitable for passing to gam.fit5. |
| `...` | Other arguments: ignored. |
### Details
This routine is basically `[glm.fit](../../stats/html/glm)` with some modifications to allow (i) for quadratic penalties on the log likelihood; (ii) derivatives of the model coefficients with respect to log smoothing parameters to be obtained by use of the implicit function theorem and (iii) derivatives of the GAM GCV, UBRE/AIC, REML or ML scores to be evaluated at convergence.
In addition the routines apply step halving to any step that increases the penalized deviance substantially.
The most costly parts of the calculations are performed by calls to compiled C code (which in turn calls LAPACK routines) in place of the compiled code that would usually perform least squares estimation on the working model in the IRLS iteration.
Estimation of smoothing parameters by optimizing GCV scores obtained at convergence of the P-IRLS iteration was proposed by O'Sullivan et al. (1986), and is here termed ‘outer’ iteration.
Note that use of non-standard families with this routine requires modification of the families as described in `<fix.family.link>`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
The routine has been modified from `glm.fit` in R 2.0.1, written by the R core (see `[glm.fit](../../stats/html/glm)` for further credits).
### References
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
O'Sullivan, Yandell & Raynor (1986) Automatic smoothing of regression functions in generalized linear models. J. Amer. Statist. Assoc. 81:96-103.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam.fit>`, `<gam>`, `<magic>`
r None
`smooth.construct.tp.smooth.spec` Penalized thin plate regression splines in GAMs
----------------------------------------------------------------------------------
### Description
`<gam>` can use isotropic smooths of any number of variables, specified via terms like `s(x,z,bs="tp",m=3)` (or just `s(x,z)` as this is the default basis). These terms are based on thin plate regression splines. `m` specifies the order of the derivatives in the thin plate spline penalty.
If `m` is a vector of length 2 and the second element is zero, then the penalty null space of the smooth is not included in the smooth: this is useful if you need to test whether a smooth could be replaced by a linear term, or construct models with odd nesting structures.
Thin plate regression splines are constructed by starting with the basis and penalty for a full thin plate spline and then truncating this basis in an optimal manner, to obtain a low rank smoother. Details are given in Wood (2003). One key advantage of the approach is that it avoids the knot placement problems of conventional regression spline modelling, but it also has the advantage that smooths of lower rank are nested within smooths of higher rank, so that it is legitimate to use conventional hypothesis testing methods to compare models based on pure regression splines. Note that the basis truncation does not change the meaning of the thin plate spline penalty (it penalizes exactly what it would have penalized for a full thin plate spline).
The t.p.r.s. basis and penalties can become expensive to calculate for large datasets. For this reason the default behaviour is to randomly subsample `max.knots` unique data locations if there are more than `max.knots` such locations, and to use the sub-sample for basis construction. The sampling is always done with the same random seed to ensure repeatability (it does not reset the R RNG). `max.knots` is 2000, by default. Both the seed and `max.knots` can be modified using the `xt` argument to `s`. Alternatively the user can supply knots from which to construct a basis.
The `"ts"` smooths are t.p.r.s. with the penalty modified so that the term is shrunk to zero for high enough smoothing parameter, rather than being shrunk towards a function in the penalty null space (see details).
### Usage
```
## S3 method for class 'tp.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'ts.smooth.spec'
smooth.construct(object, data, knots)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(...,bs="tp",...)` or `s(...,bs="ts",...)` |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL` |
### Details
The default basis dimension for this class is `k=M+k.def` where `M` is the null space dimension (dimension of unpenalized function space) and `k.def` is 8 for dimension 1, 27 for dimension 2 and 100 for higher dimensions. This is essentially arbitrary, and should be checked, but as with all penalized regression smoothers, results are statistically insensitive to the exact choice, provided it is not so small that it forces oversmoothing (the smoother's degrees of freedom are controlled primarily by its smoothing parameter).
The default is to set `m` (the order of derivative in the thin plate spline penalty) to the smallest value satisfying `2m > d+1` where `d` is the number of covariates of the term: this yields ‘visually smooth’ functions. In any case `2m>d` must be satisfied.
The constructor is not normally called directly, but is rather used internally by `<gam>`. To use for basis setup it is recommended to use `[smooth.construct2](smooth.construct)`.
For these classes the specification `object` will contain information on how to handle large datasets in their `xt` field. The default is to randomly subsample 2000 ‘knots’ from which to produce a tprs basis, if the number of unique predictor variable combinations is in excess of 2000. The default can be modified via the `xt` argument to `<s>`. This is supplied as a list with elements `max.knots` and `seed` containing a number to use in place of 2000, and the random number seed to use (either can be missing).
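Modifying the sub-sampling defaults via `xt` looks like the following sketch (the particular `max.knots` and `seed` values are illustrative):

```r
library(mgcv)
set.seed(3)
n <- 5000
dat <- data.frame(x = runif(n), z = runif(n))
dat$y <- sin(3*dat$x) + cos(3*dat$z) + rnorm(n)*.3
## use only 500 randomly selected unique covariate locations
## (instead of the default 2000) for t.p.r.s. basis setup...
b <- gam(y ~ s(x, z, k = 60, xt = list(max.knots = 500, seed = 9)),
         data = dat, method = "REML")
```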
For these bases `knots` has two uses. Firstly, as mentioned already, for large datasets the calculation of the `tp` basis can be time-consuming. The user can retain most of the advantages of the t.p.r.s. approach by supplying a reduced set of covariate values from which to obtain the basis - typically the number of covariate values used will be substantially smaller than the number of data, and substantially larger than the basis dimension, `k`. This approach is the one taken automatically if the number of unique covariate values (combinations) exceeds `max.knots`. The second possibility is to avoid the eigen-decomposition used to find the t.p.r.s. basis altogether and simply use the basis implied by the chosen knots: this will happen if the number of knots supplied matches the basis dimension, `k`. For a given basis dimension the second option is faster, but gives poorer results (and the user must be quite careful in choosing knot locations).
The shrinkage version of the smooth, eigen-decomposes the wiggliness penalty matrix, and sets its zero eigenvalues to small multiples of the smallest strictly positive eigenvalue. The penalty is then set to the matrix with eigenvectors corresponding to those of the original penalty, but eigenvalues set to the perturbed versions. This penalty matrix has full rank and shrinks the curve to zero at high enough smoothing parameters.
### Value
An object of class `"tprs.smooth"` or `"ts.smooth"`. In addition to the usual elements of a smooth class documented under `<smooth.construct>`, this object will contain:
| | |
| --- | --- |
| `shift` | A record of the shift applied to each covariate in order to center it around zero and avoid any co-linearity problems that might otherwise occur in the penalty null space basis of the term. |
| `Xu` | A matrix of the unique covariate combinations for this smooth (the basis is constructed by first stripping out duplicate locations). |
| `UZ` | The matrix mapping the t.p.r.s. parameters back to the parameters of a full thin plate spline. |
| `null.space.dimension` | The dimension of the space of functions that have zero wiggliness according to the wiggliness penalty for this term. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
### Examples
```
require(mgcv); n <- 100; set.seed(2)
x <- runif(n); y <- x + x^2*.2 + rnorm(n) *.1
## is smooth significantly different from straight line?
summary(gam(y~s(x,m=c(2,0))+x,method="REML")) ## not quite
## is smooth significantly different from zero?
summary(gam(y~s(x),method="REML")) ## yes!
## Fool bam(...,discrete=TRUE) into (strange) nested
## model fit...
set.seed(2) ## simulate some data...
dat <- gamSim(1,n=400,dist="normal",scale=2)
dat$x1a <- dat$x1 ## copy x1 so bam allows 2 copies of x1
## Following removes identifiability problem, by removing
## linear terms from second smooth, and then re-inserting
## the one that was not a duplicate (x2)...
b <- bam(y~s(x0,x1)+s(x1a,x2,m=c(2,0))+x2,data=dat,discrete=TRUE)
## example of knot based tprs...
k <- 10; m <- 2
y <- y[order(x)];x <- x[order(x)]
b <- gam(y~s(x,k=k,m=m),method="REML",
knots=list(x=seq(0,1,length=k)))
X <- model.matrix(b)
par(mfrow=c(1,2))
plot(x,X[,1],ylim=range(X),type="l")
for (i in 2:ncol(X)) lines(x,X[,i],col=i)
## compare with eigen based (default)
b1 <- gam(y~s(x,k=k,m=m),method="REML")
X1 <- model.matrix(b1)
plot(x,X1[,1],ylim=range(X1),type="l")
for (i in 2:ncol(X1)) lines(x,X1[,i],col=i)
## see ?gam
```
r None
`sdiag` Extract or modify diagonals of a matrix
------------------------------------------------
### Description
Extracts or modifies sub- or super- diagonals of a matrix.
### Usage
```
sdiag(A,k=0)
sdiag(A,k=0) <- value
```
### Arguments
| | |
| --- | --- |
| `A` | a matrix |
| `k` | sub- (negative) or super- (positive) diagonal of a matrix. 0 is the leading diagonal. |
| `value` | single value, or vector of the same length as the diagonal. |
### Value
A vector containing the requested diagonal, or a matrix with the requested diagonal replaced by `value`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### Examples
```
require(mgcv)
A <- matrix(1:35,7,5)
A
sdiag(A,1) ## first super diagonal
sdiag(A,-1) ## first sub diagonal
sdiag(A) <- 1 ## leading diagonal set to 1
sdiag(A,3) <- c(-1,-2) ## set 3rd super diagonal
```
r None
`gam.scale` Scale parameter estimation in GAMs
-----------------------------------------------
### Description
Scale parameter estimation in `<gam>` depends on the type of `family`. For extended families the RE/ML estimate is used. For conventional exponential families, estimated by the default outer iteration, the scale estimator can be controlled using argument `scale.est` in `<gam.control>`. The options are `"fletcher"` (default), `"pearson"` or `"deviance"`. The Pearson estimator is the (weighted) sum of squares of the Pearson residuals, divided by the effective residual degrees of freedom. The Fletcher (2012) estimator is an improved version of the Pearson estimator. The deviance estimator simply substitutes deviance residuals for Pearson residuals.
Usually the Pearson estimator is recommended for GLMs, since it is asymptotically unbiased. However, it can also be unstable at finite sample sizes, if a few Pearson residuals are very large. For example, a very low Poisson mean with a non zero count can give a huge Pearson residual, even though the deviance residual is much more modest. The Fletcher (2012) estimator is designed to reduce these problems.
For performance iteration the Pearson estimator is always used.
`<gamm>` uses the estimate of the scale parameter from the underlying call to `lme`. `<bam>` uses the REML estimator if the method is `"fREML"`. Otherwise the estimator is a Pearson estimator.
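As an illustrative sketch (assuming the `mgcv` package is available), the three estimators can be compared by refitting the same quasi-Poisson model under each `scale.est` setting (a quasi family is used here because a Gaussian model would make all three estimates coincide):

```
## compare the three scale estimators on over/under-dispersed count data...
require(mgcv)
set.seed(1)
dat <- gamSim(1,n=200,dist="poisson",scale=.25)
for (est in c("fletcher","pearson","deviance")) {
  b <- gam(y~s(x0)+s(x2),data=dat,family=quasipoisson(),
           control=gam.control(scale.est=est))
  cat(est,": scale estimate =",b$scale,"\n")
}
```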
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) with help from Mark Bravington and David Peel
### References
Fletcher, David J. (2012) Estimating overdispersion when fitting a generalized linear model to sparse data. Biometrika 99(1), 230-237.
### See Also
`<gam.control>`
r None
`psum.chisq` Evaluate the c.d.f. of a weighted sum of chi-squared deviates
---------------------------------------------------------------------------
### Description
Evaluates the c.d.f. of a weighted sum of chi-squared random variables by the method of Davies (1973, 1980). That is it computes
*P(q < sum\_j lb[j] X\_j + sigz Z)*
where *X\_j* is a chi-squared random variable with `df[j]` (integer) degrees of freedom and non-centrality parameter `nc[j]`, while *Z* is a standard normal deviate.
### Usage
```
psum.chisq(q,lb,df=rep(1,length(lb)),nc=rep(0,length(lb)),sigz=0,
lower.tail=FALSE,tol=2e-5,nlim=100000,trace=FALSE)
```
### Arguments
| | |
| --- | --- |
| `q` | is the vector of quantile values at which to evaluate. |
| `lb` | contains *lb[i]*, the weight for deviate `i`. Weights can be positive and/or negative. |
| `df` | is the integer vector of chi-squared degrees of freedom. |
| `nc` | is the vector of non-centrality parameters for the chi-squared deviates. |
| `sigz` | is the multiplier for the standard normal deviate. Non-positive to exclude this term. |
| `lower.tail` | indicates whether lower or upper tail probabilities are required. |
| `tol` | is the numerical tolerance to work to. |
| `nlim` | is the maximum number of integration steps to allow |
| `trace` | can be set to `TRUE` to return some trace information and a fault code as attributes. |
### Details
This calls a C translation of the original Algol60 code from Davies (1980), which numerically inverts the characteristic function of the distribution (see Davies, 1973). Some modifications have been made to remove `goto` statements and global variables, to use a slightly more efficient sorting of `lb` and to use R functions for `log(1+x)`. In addition the integral and associated error are accumulated in single terms, rather than each being split into 2, since only their sums are ever used. If `q` is a vector then `psum.chisq` calls the algorithm separately for each `q[i]`.
If the Davies algorithm returns an error then an attempt will be made to use the approximation of Liu et al (2009) and a warning will be issued. If that is not possible then an `NA` is returned. A warning will also be issued if the algorithm detects that round off errors may be significant.
If `trace` is set to `TRUE` then the result will have two attributes. `"ifault"` is 0 for no problem, 1 if the desired accuracy can not be obtained, 2 if round-off error may be significant, 3 if invalid parameters have been supplied, or 4 if integration parameters can not be located. `"trace"` is a 7 element vector: 1. absolute value sum; 2. total number of integration terms; 3. number of integrations; 4. integration interval in main integration; 5. truncation point in initial integration; 6. sd of convergence factor term; 7. number of cycles to locate integration parameters. See Davies (1980) for more details. Note that for vector `q` these attributes relate to the final element of `q`.
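For example (a minimal sketch, assuming the `mgcv` package is available), the diagnostics can be inspected as follows:

```
## inspect the Davies algorithm diagnostics via trace=TRUE...
require(mgcv)
p <- psum.chisq(4,lb=c(2,1),df=c(1,1),trace=TRUE)
attr(p,"ifault") ## 0 signals no problem
attr(p,"trace") ## the 7 element diagnostic vector described above
```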
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Davies, R. B. (1973). Numerical inversion of a characteristic function. Biometrika, 60(2), 415-417.
Davies, R. B. (1980) Algorithm AS 155: The Distribution of a Linear Combination of Chi-squared Random Variables. J. R. Statist. Soc. C 29, 323-333
Liu, H.; Tang, Y. & Zhang, H. H (2009) A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables. Computational Statistics & Data Analysis 53,853-856
### Examples
```
require(mgcv)
lb <- c(4.1,1.2,1e-3,-1) ## weights
df <- c(2,1,1,1) ## degrees of freedom
nc <- c(1,1.5,4,1) ## non-centrality parameter
q <- c(1,6,20) ## quantiles to evaluate
psum.chisq(q,lb,df,nc)
## same by simulation...
psc.sim <- function(q,lb,df=lb*0+1,nc=df*0,ns=10000) {
r <- length(lb);p <- q
X <- rowSums(rep(lb,each=ns) *
matrix(rchisq(r*ns,rep(df,each=ns),rep(nc,each=ns)),ns,r))
apply(matrix(q),1,function(q) mean(X>q))
} ## psc.sim
psum.chisq(q,lb,df,nc)
psc.sim(q,lb,df,nc,100000)
```
r None
`smooth.construct.bs.smooth.spec` Penalized B-splines in GAMs
--------------------------------------------------------------
### Description
`<gam>` can use smoothing splines based on univariate B-spline bases with derivative based penalties, specified via terms like `s(x,bs="bs",m=c(3,2))`. `m[1]` controls the spline order, with `m[1]=3` being a cubic spline, `m[1]=2` being quadratic, and so on. The integrated square of the `m[2]`th derivative is used as the penalty. So `m=c(3,2)` is a conventional cubic spline. Any further elements of `m`, after the first 2, define the order of derivative in further penalties. If `m` is supplied as a single number, then it is taken to be `m[1]` and `m[2]=m[1]-1`, which is only a conventional smoothing spline in the `m=3`, cubic spline case. Notice that the definition of the spline order in terms of `m[1]` is intuitive, but differs from that used with the `[tprs](smooth.construct.tp.smooth.spec)` and `[p.spline](smooth.construct.ps.smooth.spec)` bases. See details for options for controlling the interval over which the penalty is evaluated (which can matter if it is necessary to extrapolate).
### Usage
```
## S3 method for class 'bs.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'Bspline.smooth'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(x,bs="bs",...)` |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL`. See details for further information. |
### Details
The basis and penalty are sparse (although sparse matrices are not used to represent them). `m[2]>m[1]` will generate an error, since in that case the penalty would be based on an undefined derivative of the basis, which makes no sense. The terms can have multiple penalties of different orders, for example `s(x,bs="bs",m=c(3,2,1,0))` specifies a cubic basis with 3 penalties: a conventional cubic spline penalty, an integrated square of first derivative penalty, and an integrated square of function value penalty.
The default basis dimension, `k`, is the larger of 10 and `m[1]`. `m[1]` is the lower limit on basis dimension. If knots are supplied, then the number of supplied knots should be `k + m[1] + 1`, and the range of the middle `k-m[1]+1` knots should include all the covariate values. Alternatively, 2 knots can be supplied, denoting the lower and upper limits between which the spline can be evaluated (making this range too wide means that there is no information about some basis coefficients, because the corresponding basis functions have a span that includes no data). Unlike P-splines, splines with derivative based penalties can have uneven knot spacing, without a problem.
Another option is to supply 4 knots. Then the outer 2 define the interval over which the penalty is to be evaluated, while the inner 2 define an interval within which all but the outermost 2 knots should lie. Normally the outer 2 knots would be the interval over which predictions might be required, while the inner 2 knots define the interval within which the data lie. This option allows the penalty to apply over a wider interval than the data, while still placing most of the basis functions where the data are. This is useful in situations in which it is necessary to extrapolate slightly with a smooth. Only applying the penalty over the interval containing the data amounts to a model in which the function could be less smooth outside the interval than within it, and leads to very wide extrapolation confidence intervals. However the alternative of evaluating the penalty over the whole real line amounts to asserting certainty that the function has some derivative zeroed away from the data, which is equally unreasonable. It is preferable to build a model in which the same smoothness assumptions apply over both data and extrapolation intervals, but not over the whole real line. See example code for practical illustration.
Linear extrapolation is used for prediction that requires extrapolation (i.e. prediction outside the range of the interior `k-m[1]+1` knots — the interval over which the penalty is evaluated). Such extrapolation is not allowed in basis construction, but is allowed when predicting.
It is possible to set a `deriv` flag in a smooth specification or smooth object, so that a model or prediction matrix produces the requested derivative of the spline, rather than evaluating it.
### Value
An object of class `"Bspline.smooth"`. See `<smooth.construct>`, for the elements that this object will contain.
### WARNING
`m[1]` directly controls the spline order here, which is intuitively sensible, but different to other bases.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]). Extrapolation ideas joint with David Miller.
### References
Wood, S.N. (2017) P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data. Statistics and Computing. 27(4) 985-989 <https://arxiv.org/abs/1605.02446>
### See Also
`[p.spline](smooth.construct.ps.smooth.spec)`
### Examples
```
require(mgcv)
set.seed(5)
dat <- gamSim(1,n=400,dist="normal",scale=2)
bs <- "bs"
## note the double penalty on the s(x2) term...
b <- gam(y~s(x0,bs=bs,m=c(4,2))+s(x1,bs=bs)+s(x2,k=15,bs=bs,m=c(4,3,0))+
s(x3,bs=bs,m=c(1,0)),data=dat,method="REML")
plot(b,pages=1)
## Extrapolation example, illustrating the importance of considering
## the penalty carefully if extrapolating...
f3 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 * (10 * x)^3 *
(1 - x)^10 ## test function
n <- 100;x <- runif(n)
y <- f3(x) + rnorm(n)*2
## first a model with first order penalty over whole real line (red)
b0 <- gam(y~s(x,m=1,k=20),method="ML")
## now a model with first order penalty evaluated over (-.5,1.5) (black)
op <- options(warn=-1)
b <- gam(y~s(x,bs="bs",m=c(3,1),k=20),knots=list(x=c(-.5,0,1,1.5)),
method="ML")
options(op)
## and the equivalent with same penalty over data range only (blue)
b1 <- gam(y~s(x,bs="bs",m=c(3,1),k=20),method="ML")
pd <- data.frame(x=seq(-.7,1.7,length=200))
fv <- predict(b,pd,se=TRUE)
ul <- fv$fit + fv$se.fit*2; ll <- fv$fit - fv$se.fit*2
plot(x,y,xlim=c(-.7,1.7),ylim=range(c(y,ll,ul)),main=
"Order 1 penalties: red tps; blue bs on (0,1); black bs on (-.5,1.5)")
## penalty defined on (-.5,1.5) gives plausible predictions and intervals
## over this range...
lines(pd$x,fv$fit);lines(pd$x,ul,lty=2);lines(pd$x,ll,lty=2)
fv <- predict(b0,pd,se=TRUE)
ul <- fv$fit + fv$se.fit*2; ll <- fv$fit - fv$se.fit*2
## penalty defined on whole real line gives constant width intervals away
## from data, as slope there must be zero, to avoid infinite penalty:
lines(pd$x,fv$fit,col=2)
lines(pd$x,ul,lty=2,col=2);lines(pd$x,ll,lty=2,col=2)
fv <- predict(b1,pd,se=TRUE)
ul <- fv$fit + fv$se.fit*2; ll <- fv$fit - fv$se.fit*2
## penalty defined only over the data interval (0,1) gives wild and wide
## extrapolation since penalty has been `turned off' outside data range:
lines(pd$x,fv$fit,col=4)
lines(pd$x,ul,lty=2,col=4);lines(pd$x,ll,lty=2,col=4)
## construct smooth of x. Model matrix sm$X and penalty
## matrix sm$S[[1]] will have many zero entries...
x <- seq(0,1,length=100)
sm <- smoothCon(s(x,bs="bs"),data.frame(x))[[1]]
## another example checking penalty numerically...
m <- c(4,2); k <- 15; b <- runif(k)
sm <- smoothCon(s(x,bs="bs",m=m,k=k),data.frame(x),
scale.penalty=FALSE)[[1]]
sm$deriv <- m[2]
h0 <- 1e-3; xk <- sm$knots[(m[1]+1):(k+1)]
Xp <- PredictMat(sm,data.frame(x=seq(xk[1]+h0/2,max(xk)-h0/2,h0)))
sum((Xp%*%b)^2*h0) ## numerical approximation to penalty
b%*%sm$S[[1]]%*%b ## `exact' version
## ...repeated with uneven knot spacing...
m <- c(4,2); k <- 15; b <- runif(k)
## produce the required 20 unevenly spaced knots...
knots <- data.frame(x=c(-.4,-.3,-.2,-.1,-.001,.05,.15,
.21,.3,.32,.4,.6,.65,.75,.9,1.001,1.1,1.2,1.3,1.4))
sm <- smoothCon(s(x,bs="bs",m=m,k=k),data.frame(x),
knots=knots,scale.penalty=FALSE)[[1]]
sm$deriv <- m[2]
h0 <- 1e-3; xk <- sm$knots[(m[1]+1):(k+1)]
Xp <- PredictMat(sm,data.frame(x=seq(xk[1]+h0/2,max(xk)-h0/2,h0)))
sum((Xp%*%b)^2*h0) ## numerical approximation to penalty
b%*%sm$S[[1]]%*%b ## `exact' version
```
r None
`pen.edf` Extract the effective degrees of freedom associated with each penalty in a gam fit
---------------------------------------------------------------------------------------------
### Description
Finds the coefficients penalized by each penalty and adds up their effective degrees of freedom. Very useful for `<t2>` terms, but hard to interpret for terms where the penalties penalize overlapping sets of parameters (e.g. `<te>` terms).
### Usage
```
pen.edf(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an object inheriting from `gam` |
### Details
Useful for models containing `<t2>` terms, since it splits the EDF for the term up into parts due to different components of the smooth. This is useful for figuring out which interaction terms are actually needed in a model.
### Value
A vector of EDFs, named with labels identifying which penalty each EDF relates to.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<t2>`
### Examples
```
require(mgcv)
set.seed(20)
dat <- gamSim(1,n=400,scale=2) ## simulate data
## following `t2' smooth basically separates smooth
## of x0,x1 into main effects + interaction....
b <- gam(y~t2(x0,x1,bs="tp",m=1,k=7)+s(x2)+s(x3),
data=dat,method="ML")
pen.edf(b)
## label "rr" indicates interaction edf (range space times range space)
## label "nr" (null space for x0 times range space for x1) is main
## effect for x1.
## label "rn" is main effect for x0
## clearly interaction is negligible
## second example with higher order marginals.
b <- gam(y~t2(x0,x1,bs="tp",m=2,k=7,full=TRUE)
+s(x2)+s(x3),data=dat,method="ML")
pen.edf(b)
## In this case the EDF is negligible for all terms in the t2 smooth
## apart from the `main effects' (r2 and 2r). To understand the labels
## consider the following 2 examples....
## "r1" relates to the interaction of the range space of the first
## marginal smooth and the first basis function of the null
## space of the second marginal smooth
## "2r" relates to the interaction of the second basis function of
## the null space of the first marginal smooth with the range
## space of the second marginal smooth.
```
r None
`smooth.construct` Constructor functions for smooth terms in a GAM
-------------------------------------------------------------------
### Description
Smooth terms in a GAM formula are turned into smooth specification objects of class `xx.smooth.spec` during processing of the formula. Each of these objects is converted to a smooth object using an appropriate `smooth.construct` function. New smooth classes can be added by writing a new `smooth.construct` method function and a corresponding `[Predict.matrix](predict.matrix)` method function (see example code below).
In practice, `smooth.construct` is usually called via `smooth.construct2` and the wrapper function `[smoothCon](smoothcon)`, in order to handle `by` variables and centering constraints (see the `[smoothCon](smoothcon)` documentation if you need to handle these things directly, for a user defined smooth class).
### Usage
```
smooth.construct(object,data,knots)
smooth.construct2(object,data,knots)
```
### Arguments
| | |
| --- | --- |
| `object` | is a smooth specification object, generated by an `<s>` or `<te>` term in a GAM formula. Objects generated by `s` terms have class `xx.smooth.spec` where `xx` is given by the `bs` argument of `s` (this convention allows the user to add their own smoothers). If `object` is not class `tensor.smooth.spec` it will have the following elements: term
The names of the covariates for this smooth, in an array. bs.dim
Argument `k` of the `s` term generating the object. This is the dimension of the basis used to represent the term (or, arguably, 1 greater than the basis dimension for `cc` terms). `bs.dim<0` indicates that the constructor should set this to the default value. fixed
`TRUE` if the term is to be unpenalized, otherwise `FALSE`. dim
the number of covariates of which this smooth is a function. p.order
the order of the smoothness penalty or `NA` for autoselection of this. This is argument `m` of the `s` term that generated `object`. by
the name of any `by` variable to multiply this term as supplied as an argument to `s`. `"NA"` if there is no such term. label
A suitable label for use with this term. xt
An object containing information that may be needed for basis setup (used, e.g. by `"tp"` smooths to pass optional information on big dataset handling). id
Any identity associated with this term — used for linking bases and smoothing parameters. `NULL` by default, indicating no linkage. sp
Smoothing parameters for the term. Any negative are estimated, otherwise they are fixed at the supplied value. Unless `NULL` (default), over-rides `sp` argument to `<gam>`. If `object` is of class `tensor.smooth.spec` then it was generated by a `te` term in the GAM formula, and specifies a smooth of several variables with a basis generated as a tensor product of lower dimensional bases. In this case the object will be different and will have the following elements: margin
is a list of smooth specification objects of the type listed above, defining the bases which have their tensor product formed in order to construct this term. term
is the array of names of the covariates that are arguments of the smooth. by
is the name of any `by` variable, or `"NA"`. fx
is an array, the elements of which indicate whether (`TRUE`) any of the margins in the tensor product should be unpenalized. label
A suitable label for use with this term. dim
is the number of covariates of which this smooth is a function. mp
`TRUE` if multiple penalties are to be used. np
`TRUE` if 1-D marginal smooths are to be re-parameterized in terms of function values. id
Any identity associated with this term — used for linking bases and smoothing parameters. `NULL` by default, indicating no linkage. sp
Smoothing parameters for the term. Any negative are estimated, otherwise they are fixed at the supplied value. Unless `NULL` (default), over-rides `sp` argument to `<gam>`. |
| `data` | For `smooth.construct` a data frame or list containing the evaluation of the elements of `object$term`, with names given by `object$term`. The last entry will be the `by` variable, if `object$by` is not `"NA"`. For `smooth.construct2` `data` need only be an object within which `object$term` can be evaluated, the variables can be in any order, and there can be irrelevant variables present as well. |
| `knots` | an optional data frame or list containing the knots relating to `object$term`. If it is `NULL` then the knot locations are generated automatically. The structure of `knots` should be as for `data`, depending on whether `smooth.construct` or `smooth.construct2` is used. |
### Details
There are built in methods for objects with the following classes: `tp.smooth.spec` (thin plate regression splines: see `[tprs](smooth.construct.tp.smooth.spec)`); `ts.smooth.spec` (thin plate regression splines with shrinkage-to-zero); `cr.smooth.spec` (cubic regression splines: see `[cubic.regression.spline](smooth.construct.cr.smooth.spec)`); `cs.smooth.spec` (cubic regression splines with shrinkage-to-zero); `cc.smooth.spec` (cyclic cubic regression splines); `ps.smooth.spec` (Eilers and Marx (1996) style P-splines: see `[p.spline](smooth.construct.ps.smooth.spec)`); `cp.smooth.spec` (cyclic P-splines); `ad.smooth.spec` (adaptive smooths of 1 or 2 variables: see `[adaptive.smooth](smooth.construct.ad.smooth.spec)`); `re.smooth.spec` (simple random effect terms); `mrf.smooth.spec` (Markov random field smoothers for smoothing over discrete districts); `tensor.smooth.spec` (tensor product smooths).
There is an implicit assumption that the basis only depends on the knots and/or the set of unique covariate combinations; i.e. that the basis is the same whether generated from the full set of covariates, or just the unique combinations of covariates.
Plotting of smooths is handled by plot methods for smooth objects. A default `mgcv.smooth` method is used if there is no more specific method available. Plot methods can be added for specific smooth classes, see source code for `mgcv:::plot.sos.smooth`, `mgcv:::plot.random.effect`, `mgcv:::plot.mgcv.smooth` for example code.
### Value
The input argument `object`, assigned a new class to indicate what type of smooth it is and with at least the following items added:
| | |
| --- | --- |
| `X` | The model matrix from this term. This may have an `"offset"` attribute: a vector of length `nrow(X)` containing any contribution of the smooth to the model offset term. `by` variables do not need to be dealt with here, but if they are then an item `by.done` must be added to the `object`. |
| `S` | A list of positive semi-definite penalty matrices that apply to this term. The list will be empty if the term is to be left un-penalized. |
| `rank` | An array giving the ranks of the penalties. |
| `null.space.dim` | The dimension of the penalty null space (before centering). |
The following items may be added:
| | |
| --- | --- |
| `C` | The matrix defining any identifiability constraints on the term, for use when fitting. If this is `NULL` then `smoothCon` will add an identifiability constraint that each term should sum to zero over the covariate values. Set to a zero row matrix if no constraints are required. If a supplied `C` has an attribute `"always.apply"` then it is never ignored, even if any `by` variables of a smooth imply that no constraint is actually needed. Code for creating `C` should check whether the specification object already contains a zero row matrix, and leave this unchanged if it is (since this signifies no constraint should be produced). |
| `Cp` | An optional matrix supplying alternative identifiability constraints for use when predicting. By default the fitting constraints are used. This option is useful when some sort of simple sparse constraint is required for fitting, but the usual sum-to-zero constraint is required for prediction so that, e.g. the CIs for model components are as narrow as possible. |
| `no.rescale` | if this is non-NULL then the penalty coefficient matrix of the smooth will not be rescaled for enhanced numerical stability (rescaling is the default, because `<gamm>` requires it). Turning off rescaling is useful if the values of the smoothing parameters should be interpretable in a model, for example because they are inverse variance components. |
| `df` | the degrees of freedom associated with this term (when unpenalized and unconstrained). If this is null then `smoothCon` will set it to the basis dimension. `smoothCon` will reduce this by the number of constraints. |
| `te.ok` | `0` if this term should not be used as a tensor product marginal, `1` if it can be used and plotted, and `2` if it can be used but not plotted. Set to `1` if `NULL`. |
| `plot.me` | Set to `FALSE` if this smooth should not be plotted by `<plot.gam>`. Set to `TRUE` if `NULL`. |
| `side.constrain` | Set to `FALSE` to ensure that the smooth is never subject to side constraints as a result of nesting. |
| `L` | smooths may depend on fewer ‘underlying’ smoothing parameters than there are elements of `S`. In this case `L` is the matrix mapping the vector of underlying log smoothing parameters to the vector of logs of the smoothing parameters actually multiplying the `S[[i]]`. `L=NULL` signifies that there is one smoothing parameter per `S[[i]]`. |
Usually the returned object will also include extra information required to define the basis, and used by `[Predict.matrix](predict.matrix)` methods to make predictions using the basis. See the `Details` section for links to the information included for the built in smooth classes.
`tensor.smooth` returned objects will additionally have each element of the `margin` list updated in the same way. `tensor.smooths` also have a list, `XP`, containing re-parameterization matrices for any 1-D marginal terms re-parameterized in terms of function values. This list will have `NULL` entries for marginal smooths that are not re-parameterized, and is only long enough to reach the last re-parameterized marginal in the list.
### WARNING
User defined smooth objects should avoid having attribute names `"qrc"` or `"nCons"` as these are used internally to provide constraint free parameterizations.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
The code given in the example is based on the smooths advocated in:
Ruppert, D., M.P. Wand and R.J. Carroll (2003) Semiparametric Regression. Cambridge University Press.
However if you want p-splines, rather than splines with derivative based penalties, then the built in "ps" class is probably a marginally better bet. It's based on
Eilers, P.H.C. and B.D. Marx (1996) Flexible Smoothing with B-splines and Penalties. Statistical Science, 11(2):89-121
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<s>`,`<get.var>`, `<gamm>`, `<gam>`, `[Predict.matrix](predict.matrix)`, `[smoothCon](smoothcon)`, `[PredictMat](smoothcon)`
### Examples
```
## Adding a penalized truncated power basis class and methods
## as favoured by Ruppert, Wand and Carroll (2003)
## Semiparametric regression CUP. (No advantage to actually
## using this, since mgcv can happily handle non-identity
## penalties.)
smooth.construct.tr.smooth.spec<-function(object,data,knots) {
## a truncated power spline constructor method function
## object$p.order = null space dimension
m <- object$p.order[1]
if (is.na(m)) m <- 2 ## default
if (m<1) stop("silly m supplied")
if (object$bs.dim<0) object$bs.dim <- 10 ## default
nk<-object$bs.dim-m-1 ## number of knots
if (nk<=0) stop("k too small for m")
x <- data[[object$term]] ## the data
x.shift <- mean(x) # shift used to enhance stability
k <- knots[[object$term]] ## will be NULL if none supplied
if (is.null(k)) # space knots through data
{ n<-length(x)
k<-quantile(x[2:(n-1)],seq(0,1,length=nk+2))[2:(nk+1)]
}
if (length(k)!=nk) # right number of knots?
stop(paste("there should be ",nk," supplied knots"))
x <- x - x.shift # basis stabilizing shift
k <- k - x.shift # knots treated the same!
X<-matrix(0,length(x),object$bs.dim)
for (i in 1:(m+1)) X[,i] <- x^(i-1)
for (i in 1:nk) X[,i+m+1]<-(x-k[i])^m*as.numeric(x>k[i])
object$X<-X # the finished model matrix
if (!object$fixed) # create the penalty matrix
{ object$S[[1]]<-diag(c(rep(0,m+1),rep(1,nk)))
}
object$rank<-nk # penalty rank
object$null.space.dim <- m+1 # dim. of unpenalized space
## store "tr" specific stuff ...
object$knots<-k;object$m<-m;object$x.shift <- x.shift
object$df<-ncol(object$X) # maximum DoF (if unconstrained)
class(object)<-"tr.smooth" # Give object a class
object
}
Predict.matrix.tr.smooth<-function(object,data) {
## prediction method function for the `tr' smooth class
x <- data[[object$term]]
x <- x - object$x.shift # stabilizing shift
m <- object$m; # spline order (3=cubic)
k<-object$knots # knot locations
nk<-length(k) # number of knots
X<-matrix(0,length(x),object$bs.dim)
for (i in 1:(m+1)) X[,i] <- x^(i-1)
for (i in 1:nk) X[,i+m+1] <- (x-k[i])^m*as.numeric(x>k[i])
X # return the prediction matrix
}
# an example, using the new class....
require(mgcv)
set.seed(100)
dat <- gamSim(1,n=400,scale=2)
b<-gam(y~s(x0,bs="tr",m=2)+s(x1,bs="ps",m=c(1,3))+
s(x2,bs="tr",m=3)+s(x3,bs="tr",m=2),data=dat)
plot(b,pages=1)
b<-gamm(y~s(x0,bs="tr",m=2)+s(x1,bs="ps",m=c(1,3))+
s(x2,bs="tr",m=3)+s(x3,bs="tr",m=2),data=dat)
plot(b$gam,pages=1)
# another example using tensor products of the new class
dat <- gamSim(2,n=400,scale=.1)$data
b <- gam(y~te(x,z,bs=c("tr","tr"),m=c(2,2)),data=dat)
vis.gam(b)
```
r None
`one.se.rule` The one standard error rule for smoother models
--------------------------------------------------------------
### Description
The ‘one standard error rule’ (see e.g. Hastie, Tibshirani and Friedman, 2009) is a way of producing smoother models than those directly estimated by automatic smoothing parameter selection methods. In the single smoothing parameter case, we select the largest smoothing parameter within one standard error of the optimum of the smoothing parameter selection criterion. This approach can be generalized to multiple smoothing parameters estimated by REML or ML.
### Details
Under REML or ML smoothing parameter selection an asymptotic distributional approximation is available for the log smoothing parameters. Let *r* denote the log smoothing parameters that we want to increase to obtain a smoother model. The large sample distribution of the estimator of *r* is *N(r,V)* where *V* is the matrix returned by `<sp.vcov>`. Drop any elements of *r* that are already at ‘effective infinity’, along with the corresponding rows and columns of *V*. The standard errors of the log smoothing parameters can be obtained from the leading diagonal of *V*. Let the vector of these be *d*. Now suppose that we want to increase the estimated log smoothing parameters by an amount *a\*d*. We choose *a* so that *a d'V^{-1}d = (2p)^0.5*, where p is the dimension of d and 2p the variance of a chi-squared r.v. with p degrees of freedom.
The idea is that we increase the log smoothing parameters in proportion to their standard deviation, until the RE/ML is increased by 1 standard deviation according to its asymptotic distribution.
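The step computation can be sketched in a few lines of base R (the matrix `V` and the smoothing parameter values below are made up for illustration, not output from `sp.vcov`):

```r
## toy illustration of choosing the step length 'a' such that
## a * d'V^{-1}d = sqrt(2p), as described above...
V <- matrix(c(.5, .1, .1, .3), 2, 2) ## made-up cov matrix of log sp
d <- sqrt(diag(V))                   ## standard errors of log sp
p <- length(d)
a <- sqrt(2 * p) / drop(t(d) %*% solve(V) %*% d)
lsp <- log(c(2, 5))                  ## made-up log sp estimates
sp.new <- exp(lsp + a * d)           ## increased smoothing parameters
```

Since *a* and *d* are positive, each element of `sp.new` is larger than the corresponding original smoothing parameter, as required for a smoother model.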
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Hastie, T, R. Tibshirani and J. Friedman (2009) The Elements of Statistical Learning 2nd ed. Springer.
### See Also
`<gam>`
### Examples
```
require(mgcv)
set.seed(2) ## simulate some data...
dat <- gamSim(1,n=400,dist="normal",scale=2)
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="REML")
b
## only the first 3 smoothing parameters are candidates for
## increasing here...
V <- sp.vcov(b)[1:3,1:3] ## the approx cov matrix of sps
d <- diag(V)^.5 ## sp se.
## compute the log smoothing parameter step...
d <- sqrt(2*length(d))/d
sp <- b$sp ## extract original sp estimates
sp[1:3] <- sp[1:3]*exp(d) ## apply the step
## refit with the increased smoothing parameters...
b1 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="REML",sp=sp)
b;b1 ## compare fits
```
`fs.test` FELSPLINE test function
----------------------------------
### Description
Implements a finite area test function based on one proposed by Tim Ramsay (2002).
### Usage
```
fs.test(x,y,r0=.1,r=.5,l=3,b=1,exclude=TRUE)
fs.boundary(r0=.1,r=.5,l=3,n.theta=20)
```
### Arguments
| | |
| --- | --- |
| `x,y` | Points at which to evaluate the test function. |
| `r0` | The test domain is a sort of bent sausage. This is the radius of the inner bend |
| `r` | The radius of the curve at the centre of the sausage. |
| `l` | The length of an arm of the sausage. |
| `b` | The rate at which the function increases per unit increase in distance along the centre line of the sausage. |
| `exclude` | Should exterior points be set to `NA`? |
| `n.theta` | How many points to use in a piecewise linear representation of a quarter of a circle, when generating the boundary curve. |
### Details
The function details are not given in the source article: but this is pretty close. The function is modified from Ramsay (2002), in that it bulges, rather than being flat: this makes a better test of the smoother.
### Value
`fs.test` returns function evaluations, or `NA`s for points outside the boundary. `fs.boundary` returns a list of `x,y` points to be joined up in order to define/draw the boundary.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Tim Ramsay (2002) "Spline smoothing over difficult regions" J.R.Statist. Soc. B 64(2):307-319
### Examples
```
require(mgcv)
## plot the function, and its boundary...
fsb <- fs.boundary()
m<-300;n<-150
xm <- seq(-1,4,length=m);yn<-seq(-1,1,length=n)
xx <- rep(xm,n);yy<-rep(yn,rep(m,n))
tru <- matrix(fs.test(xx,yy),m,n) ## truth
image(xm,yn,tru,col=heat.colors(100),xlab="x",ylab="y")
lines(fsb$x,fsb$y,lwd=3)
contour(xm,yn,tru,levels=seq(-5,5,by=.25),add=TRUE)
```
`gam.outer` Minimize GCV or UBRE score of a GAM using ‘outer’ iteration
------------------------------------------------------------------------
### Description
Estimation of GAM smoothing parameters is most stable if optimization of the smoothness selection score (GCV, GACV, UBRE/AIC, REML, ML etc) is outer to the penalized iteratively re-weighted least squares scheme used to estimate the model given smoothing parameters.
This routine optimizes a smoothness selection score in this way. Basically the score is evaluated for each trial set of smoothing parameters by estimating the GAM for those smoothing parameters. The score is minimized w.r.t. the parameters numerically, using `newton` (default), `bfgs`, `optim` or `nlm`. Exact (first and second) derivatives of the score can be used by fitting with `<gam.fit3>`. This improves efficiency and reliability relative to relying on finite difference derivatives.
Not normally called directly, but rather a service routine for `<gam>`.
### Usage
```
gam.outer(lsp,fscale,family,control,method,optimizer,
criterion,scale,gamma,G,start=NULL,...)
```
### Arguments
| | |
| --- | --- |
| `lsp` | The log smoothing parameters. |
| `fscale` | Typical scale of the GCV or UBRE/AIC score. |
| `family` | the model family. |
| `control` | control argument to pass to `<gam.fit>` if pure finite differencing is being used. |
| `method` | method argument to `<gam>` defining the smoothness criterion to use (but depending on whether or not scale known). |
| `optimizer` | The argument to `<gam>` defining the numerical optimization method to use. |
| `criterion` | Which smoothness selection criterion to use. One of `"UBRE"`, `"GCV"`, `"GACV"`, `"REML"` or `"P-REML"`. |
| `scale` | Supplied scale parameter. Positive indicates known. |
| `gamma` | The degree of freedom inflation factor for the GCV/UBRE/AIC score. |
| `G` | List produced by `mgcv:::gam.setup`, containing most of what's needed to actually fit a GAM. |
| `start` | starting parameter values. |
| `...` | other arguments, typically for passing on to `gam.fit3` (ultimately). |
### Details
See Wood (2008) for full details on ‘outer iteration’.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam.fit3>`, `<gam>`, `<magic>`
`ziP` GAM zero-inflated (hurdle) Poisson regression family
-----------------------------------------------------------
### Description
Family for use with `<gam>` or `<bam>`, implementing regression for zero inflated Poisson data when the complementary log-log of the zero probability is linearly dependent on the log of the Poisson parameter. Use with great care, noting that simply having many zero response observations is not an indication of zero inflation: the question is whether you have too many zeroes given the specified model.
This sort of model is really only appropriate when none of your covariates help to explain the zeroes in your data. If your covariates predict which observations are likely to have zero mean then adding a zero inflated model on top of this is likely to lead to identifiability problems. Identifiability problems may lead to fit failures, or absurd values for the linear predictor or predicted values.
### Usage
```
ziP(theta = NULL, link = "identity",b=0)
```
### Arguments
| | |
| --- | --- |
| `theta` | the 2 parameters controlling the slope and intercept of the linear transform of the mean controlling the zero inflation rate. If supplied then treated as fixed parameters (*theta\_1* and *theta\_2*), otherwise estimated. |
| `link` | The link function: only the `"identity"` is currently supported. |
| `b` | a non-negative constant, specifying the minimum dependence of the zero inflation rate on the linear predictor. |
### Details
The probability of a zero count is given by *1-p*, whereas the probability of count *y>0* is given by the truncated Poisson probability function *p mu^y/((exp(mu)-1) y!)*. The linear predictor gives *log(mu)*, while *eta=log(-log(1-p))* and *eta = theta\_1 + (b+exp(theta\_2)) log(mu)*. The `theta` parameters are estimated alongside the smoothing parameters. Increasing the `b` parameter from zero can greatly reduce identifiability problems, particularly when there are very few non-zero data.
The fitted values for this model are the log of the Poisson parameter. Use the `predict` function with `type="response"` to get the predicted expected response. Note that the theta parameters reported in model summaries are *theta\_1* and *b + exp(theta\_2)*.
These models should be subject to very careful checking, especially if fitting has not converged. It is quite easy to set up models with identifiability problems, particularly if the data are not really zero inflated, but simply have many zeroes because the mean is very low in some parts of the covariate space. See example for some obvious checks. Take convergence warnings seriously.
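As a sanity check on the probability function, the zero and positive-count probabilities should sum to one; a base-R sketch with arbitrary example values of *p* and *mu*:

```r
## hurdle model probabilities: P(Y=0) = 1-p and, for y > 0,
## P(Y=y) = p*mu^y/((exp(mu)-1)*y!) (zero-truncated Poisson)...
p <- 0.7; mu <- 2.3               ## arbitrary example values
y <- 1:100
pr.pos <- p * mu^y / ((exp(mu) - 1) * factorial(y))
total <- (1 - p) + sum(pr.pos)    ## should be (numerically) 1
```

The positive-count probabilities sum to *p*, since *sum(mu^y/y!)* over *y>0* is *exp(mu)-1*.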
### Value
An object of class `extended.family`.
### WARNINGS
Zero inflated models are often over-used. Having lots of zeroes in the data does not in itself imply zero inflation. Having too many zeroes *given the model mean* may imply zero inflation.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`<ziplss>`
### Examples
```
rzip <- function(gamma,theta= c(-2,.3)) {
## generate zero inflated Poisson random variables, where
## lambda = exp(gamma), eta = theta[1] + exp(theta[2])*gamma
## and 1-p = exp(-exp(eta)).
y <- gamma; n <- length(y)
lambda <- exp(gamma)
eta <- theta[1] + exp(theta[2])*gamma
p <- 1- exp(-exp(eta))
ind <- p > runif(n)
y[!ind] <- 0
np <- sum(ind)
## generate from zero truncated Poisson, given presence...
y[ind] <- qpois(runif(np,dpois(0,lambda[ind]),1),lambda[ind])
y
}
library(mgcv)
## Simulate some ziP data...
set.seed(1);n<-400
dat <- gamSim(1,n=n)
dat$y <- rzip(dat$f/4-1)
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=ziP(),data=dat)
b$outer.info ## check convergence!!
b
plot(b,pages=1)
plot(b,pages=1,unconditional=TRUE) ## add s.p. uncertainty
gam.check(b)
## more checking...
## 1. If the zero inflation rate becomes decoupled from the linear predictor,
## it is possible for the linear predictor to be almost unbounded in regions
## containing many zeroes. So examine if the range of predicted values
## is sane for the zero cases?
range(predict(b,type="response")[b$y==0])
## 2. Further plots...
par(mfrow=c(2,2))
plot(predict(b,type="response"),residuals(b))
plot(predict(b,type="response"),b$y);abline(0,1,col=2)
plot(b$linear.predictors,b$y)
qq.gam(b,rep=20,level=1)
## 3. Refit fixing the theta parameters at their estimated values, to check we
## get essentially the same fit...
thb <- b$family$getTheta()
b0 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=ziP(theta=thb),data=dat)
b;b0
## Example fit forcing minimum linkage of prob present and
## linear predictor. Can fix some identifiability problems.
b2 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=ziP(b=.3),data=dat)
```
`polys.plot` Plot geographic regions defined as polygons
---------------------------------------------------------
### Description
Produces plots of geographic regions defined by polygons, optionally filling the polygons with a color or grey shade dependent on a covariate.
### Usage
```
polys.plot(pc,z=NULL,scheme="heat",lab="",...)
```
### Arguments
| | |
| --- | --- |
| `pc` | A named list of matrices. Each matrix has two columns. The matrix rows each define the vertex of a boundary polygon. If a boundary is defined by several polygons, then each of these must be separated by an `NA` row in the matrix. See `[mrf](smooth.construct.mrf.smooth.spec)` for an example. |
| `z` | A vector of values associated with each area (item) of `pc`. If the vector elements have names then these are used to match elements of `z` to areas defined in `pc`. Otherwise `pc` and `z` are assumed to be in the same order. If `z` is `NULL` then polygons are not filled. |
| `scheme` | One of `"heat"` or `"grey"`, indicating how to fill the polygons in accordance with the value of `z`. |
| `lab` | label for plot. |
| `...` | other arguments to pass to plot (currently only if `z` is `NULL`). |
### Details
Any polygon within another polygon counts as a hole in the area. Further nesting is dealt with by treating any point that is interior to an odd number of polygons as being within the area, and all other points as being exterior. The routine is provided to facilitate plotting with models containing `[mrf](smooth.construct.mrf.smooth.spec)` smooths.
### Value
Simply produces a plot.
### Author(s)
Simon Wood [[email protected]](mailto:[email protected])
### See Also
`[mrf](smooth.construct.mrf.smooth.spec)` and `[columb.polys](columb)`.
### Examples
```
## see also ?mrf for use of z
require(mgcv)
data(columb.polys)
polys.plot(columb.polys)
```
`fix.family.link` Modify families for use in GAM fitting and checking
----------------------------------------------------------------------
### Description
Generalized Additive Model fitting by ‘outer’ iteration requires extra derivatives of the variance and link functions to be added to family objects. The first 3 functions add what is needed. Model checking can be aided by adding quantile and random deviate generating functions to the family. The final two functions do this.
### Usage
```
fix.family.link(fam)
fix.family.var(fam)
fix.family.ls(fam)
fix.family.qf(fam)
fix.family.rd(fam)
```
### Arguments
| | |
| --- | --- |
| `fam` | A `family`. |
### Details
Consider the first 3 functions first.
Outer iteration GAM estimation requires derivatives of the GCV, UBRE/gAIC, GACV, REML or ML score, which are obtained by finding the derivatives of the model coefficients w.r.t. the log smoothing parameters, using the implicit function theorem. The expressions for the derivatives require the second and third derivatives of the link w.r.t. the mean (and the 4th derivatives if Fisher scoring is not used). Also required are the first and second derivatives of the variance function w.r.t. the mean (plus the third derivative if Fisher scoring is not used). Finally REML or ML estimation of smoothing parameters requires the log saturated likelihood and its first two derivatives w.r.t. the scale parameter. These functions add functions evaluating these quantities to a family.
If the family already has functions `dvar`, `d2var`, `d3var`, `d2link`, `d3link`, `d4link` and for RE/ML `ls`, then these functions simply return the family unmodified: this allows non-standard links to be used with `<gam>` when using outer iteration (performance iteration operates with unmodified families). Note that if you only need Fisher scoring then `d4link` and `d3var` can be dummy, as they are ignored. Similarly `ls` is only needed for RE/ML.
The `dvar` function is a function of a mean vector, `mu`, and returns a vector of corresponding first derivatives of the family variance function. The `d2link` function is also a function of a vector of mean values, `mu`: it returns a vector of second derivatives of the link, evaluated at `mu`. Higher derivatives are defined similarly.
If modifying your own family, note that you can often get away with supplying only a `dvar` and `d2var`, function if your family only requires links that occur in one of the standard families.
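For example, for the `poisson` family the variance function is *V(mu)=mu*, so the required derivative functions are trivial; a hand-rolled base-R sketch of what `fix.family.var` supplies in this case:

```r
## manually adding variance-function derivatives to a family,
## here poisson, for which V(mu) = mu...
fam <- poisson()
fam$dvar <- function(mu) rep.int(1, length(mu))  ## dV/dmu = 1
fam$d2var <- function(mu) rep.int(0, length(mu)) ## d2V/dmu2 = 0
fam$d3var <- function(mu) rep.int(0, length(mu)) ## only needed without Fisher scoring
```

Each function takes a vector of mean values and returns the corresponding vector of derivatives, matching the conventions described above.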
The second two functions are useful for investigating the distribution of residuals and are used by `<qq.gam>`. If possible the functions add quantile (`qf`) or random deviate (`rd`) generating functions to the family. If a family already has `qf` or `rd` functions then it is left unmodified. `qf` functions are only available for some families, and for quasi families neither type of function is available.
### Value
A family object with extra component functions `dvar`, `d2var`, `d2link`, `d3link`, `d4link`, `ls`, and possibly `qf` and `rd`, depending on which functions are called. `fix.family.var` also adds a variable `scale`, set to a negative value, to indicate that the family has a free scale parameter.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<gam.fit3>`, `<qq.gam>`
`place.knots` Automatically place a set of knots evenly through covariate values
---------------------------------------------------------------------------------
### Description
Given a univariate array of covariate values, places a set of knots for a regression spline evenly through the covariate values.
### Usage
```
place.knots(x,nk)
```
### Arguments
| | |
| --- | --- |
| `x` | array of covariate values (need not be sorted). |
| `nk` | integer indicating the required number of knots. |
### Details
Places knots evenly throughout a set of covariates. For example, if you had 11 covariate values and wanted 6 knots then a knot would be placed at the first (sorted) covariate value and every second (sorted) value thereafter. With less convenient numbers of data and knots the knots are placed within intervals between data in order to achieve even coverage, where even means having approximately the same number of data between each pair of knots.
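The 11-value, 6-knot case above can be sketched with base R quantiles (this illustrates the principle only; `place.knots` itself handles the less convenient cases):

```r
## evenly spaced empirical quantiles pick out the first sorted
## value and every second value thereafter...
x <- 1:11 ## 11 sorted covariate values
k <- quantile(x, seq(0, 1, length = 6), names = FALSE)
k ## 1 3 5 7 9 11
```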
### Value
An array of knot locations.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`[smooth.construct.cc.smooth.spec](smooth.construct.cr.smooth.spec)`
### Examples
```
require(mgcv)
x<-runif(30)
place.knots(x,7)
rm(x)
```
`gam.mh` Simple posterior simulation with gam fits
---------------------------------------------------
### Description
GAM coefficients can be simulated directly from the Gaussian approximation to the posterior for the coefficients, or using a simple Metropolis Hastings sampler. See also `<ginla>`.
### Usage
```
gam.mh(b,ns=10000,burn=1000,t.df=40,rw.scale=.25,thin=1)
```
### Arguments
| | |
| --- | --- |
| `b` | a fitted model object from `<gam>`. `<bam>` fits are not supported. |
| `ns` | the number of samples to generate. |
| `burn` | the length of any initial burn-in period to discard (in addition to `ns`). |
| `t.df` | degrees of freedom for static multivariate t proposal. Lower for heavier tailed proposals. |
| `rw.scale` | Factor by which to scale posterior covariance matrix when generating random walk proposals. Negative or non finite to skip the random walk step. |
| `thin` | retain only every `thin` samples. |
### Details
Posterior simulation is particularly useful for making inferences about non-linear functions of the model coefficients. Simulate random draws from the posterior, compute the function for each draw, and you have a draw from the posterior for the function. In many cases the Gaussian approximation to the posterior of the model coefficients is accurate, and samples generated from it can be treated as samples from the posterior for the coefficients. See example code below. This approach is computationally very efficient.
In other cases the Gaussian approximation can become poor. A typical example is in a spatial model with a log or logit link when there is a large area of observations containing only zeroes. In this case the linear predictor is poorly identified and the Gaussian approximation can become useless (an example is provided below). In that case it can sometimes be useful to simulate from the posterior using a Metropolis Hastings sampler. A simple approach alternates fixed proposals, based on the Gaussian approximation to the posterior, with random walk proposals, based on a shrunken version of the approximate posterior covariance matrix. `gam.mh` implements this. The fixed proposal often promotes rapid mixing, while the random walk component ensures that the chain does not become stuck in regions for which the fixed Gaussian proposal density is much lower than the posterior density.
The function reports the acceptance rate of the two types of step. If the random walk acceptance probability is higher than a quarter then `rw.scale` should probably be increased. Similarly if the acceptance rate is too low, it should be decreased. The random walk steps can be turned off altogether (see above), but it is important to check the chains for stuck sections if this is done.
### Value
A list containing the retained simulated coefficients in matrix `bs` and two entries for the acceptance probabilities.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2015) Core Statistics, Cambridge
### Examples
```
library(mgcv)
set.seed(3);n <- 400
############################################
## First example: simulated Tweedie model...
############################################
dat <- gamSim(1,n=n,dist="poisson",scale=.2)
dat$y <- rTweedie(exp(dat$f),p=1.3,phi=.5) ## Tweedie response
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=tw(),
data=dat,method="REML")
## simulate directly from Gaussian approximate posterior...
br <- rmvn(1000,coef(b),vcov(b))
## Alternatively use MH sampling...
br <- gam.mh(b,thin=2,ns=2000,rw.scale=.15)$bs
## If 'coda' installed, can check effective sample size
## require(coda);effectiveSize(as.mcmc(br))
## Now compare simulation results and Gaussian approximation for
## smooth term confidence intervals...
x <- seq(0,1,length=100)
pd <- data.frame(x0=x,x1=x,x2=x,x3=x)
X <- predict(b,newdata=pd,type="lpmatrix")
par(mfrow=c(2,2))
for(i in 1:4) {
plot(b,select=i,scale=0,scheme=1)
ii <- b$smooth[[i]]$first.para:b$smooth[[i]]$last.para
ff <- X[,ii]%*%t(br[,ii]) ## posterior curve sample
fq <- apply(ff,1,quantile,probs=c(.025,.16,.84,.975))
lines(x,fq[1,],col=2,lty=2);lines(x,fq[4,],col=2,lty=2)
lines(x,fq[2,],col=2);lines(x,fq[3,],col=2)
}
###############################################################
## Second example, where Gaussian approximation is a failure...
###############################################################
y <- c(rep(0, 89), 1, 0, 1, 0, 0, 1, rep(0, 13), 1, 0, 0, 1,
rep(0, 10), 1, 0, 0, 1, 1, 0, 1, rep(0,4), 1, rep(0,3),
1, rep(0, 3), 1, rep(0, 10), 1, rep(0, 4), 1, 0, 1, 0, 0,
rep(1, 4), 0, rep(1, 5), rep(0, 4), 1, 1, rep(0, 46))
set.seed(3);x <- sort(c(0:10*5,rnorm(length(y)-11)*20+100))
b <- gam(y ~ s(x, k = 15),method = 'REML', family = binomial)
br <- gam.mh(b,thin=2,ns=2000,rw.scale=.4)$bs
X <- model.matrix(b)
par(mfrow=c(1,1))
plot(x, y, col = rgb(0,0,0,0.25), ylim = c(0,1))
ff <- X%*%t(br) ## posterior curve sample
linv <- b$family$linkinv
## Get intervals for the curve on the response scale...
fq <- linv(apply(ff,1,quantile,probs=c(.025,.16,.5,.84,.975)))
lines(x,fq[1,],col=2,lty=2);lines(x,fq[5,],col=2,lty=2)
lines(x,fq[2,],col=2);lines(x,fq[4,],col=2)
lines(x,fq[3,],col=4)
## Compare to the Gaussian posterior approximation
fv <- predict(b,se=TRUE)
lines(x,linv(fv$fit))
lines(x,linv(fv$fit-2*fv$se.fit),lty=3)
lines(x,linv(fv$fit+2*fv$se.fit),lty=3)
## ... Notice the useless 95% CI (black dotted) based on the
## Gaussian approximation!
```
| programming_docs |
`negbin` GAM negative binomial families
----------------------------------------
### Description
The `gam` modelling function is designed to be able to use the `<negbin>` family (a modification of MASS library `negative.binomial` family by Venables and Ripley), or the `[nb](negbin)` function designed for integrated estimation of parameter `theta`. *θ* is the parameter such that *var(y) = μ + μ^2/θ*, where *μ = E(y)*.
Two approaches to estimating `theta` are available (with `<gam>` only):
* With `negbin` then if ‘performance iteration’ is used for smoothing parameter estimation (see `<gam>`), then smoothing parameters are chosen by GCV and `theta` is chosen in order to ensure that the Pearson estimate of the scale parameter is as close as possible to 1, the value that the scale parameter should have.
* If ‘outer iteration’ is used for smoothing parameter selection with the `nb` family then `theta` is estimated alongside the smoothing parameters by ML or REML.
To use the first option, set the `optimizer` argument of `<gam>` to `"perf"` (it can sometimes fail to converge).
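The variance relationship can be checked numerically with base R's `rnbinom`, whose `size` argument plays the role of *θ* in this parameterization:

```r
## check var(y) = mu + mu^2/theta empirically for the negative
## binomial with size = theta and mean mu...
set.seed(1)
theta <- 3; mu <- 5
y <- rnbinom(1e6, size = theta, mu = mu)
var(y)          ## empirical variance...
mu + mu^2/theta ## ...close to the theoretical 13.33
```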
### Usage
```
negbin(theta = stop("'theta' must be specified"), link = "log")
nb(theta = NULL, link = "log")
```
### Arguments
| | |
| --- | --- |
| `theta` | Either i) a single value known value of theta or ii) two values of theta specifying the endpoints of an interval over which to search for theta (this is an option only for `negbin`, and is deprecated). For `nb` then a positive supplied `theta` is treated as a fixed known parameter, otherwise it is estimated (the absolute value of a negative `theta` is taken as a starting value). |
| `link` | The link function: one of `"log"`, `"identity"` or `"sqrt"` |
### Details
`nb` allows estimation of the `theta` parameter alongside the model smoothing parameters, but is only usable with `<gam>` or `<bam>` (not `gamm`).
For `negbin`, if a single value of `theta` is supplied then it is always taken as the known fixed value and this is usable with `<bam>` and `<gamm>`. If `theta` is two numbers (`theta[2]>theta[1]`) then they are taken as specifying the range of values over which to search for the optimal theta. This option is deprecated and should only be used with performance iteration estimation (see `<gam>` argument `optimizer`), in which case the method of estimation is to choose *theta* so that the GCV (Pearson) estimate of the scale parameter is one (since the scale parameter is one for the negative binomial). In this case *theta* estimation is nested within the IRLS loop used for GAM fitting. After each call to fit an iteratively weighted additive model to the IRLS pseudodata, the *theta* estimate is updated. This is done by conditioning on all components of the current GCV/Pearson estimator of the scale parameter except *theta* and then searching for the *theta* which equates this conditional estimator to one. The search is a simple bisection search after an initial crude line search to bracket one. The search will terminate at the upper boundary of the search region if a Poisson fit would have yielded an estimated scale parameter <1.
### Value
For `negbin` an object inheriting from class `family`, with additional elements
| | |
| --- | --- |
| `dvar` | the function giving the first derivative of the variance function w.r.t. `mu`. |
| `d2var` | the function giving the second derivative of the variance function w.r.t. `mu`. |
| `getTheta` | A function for retrieving the value(s) of theta. This is also useful for retrieving the estimate of `theta` after fitting (see example). |
For `nb` an object inheriting from class `extended.family`.
### WARNINGS
`<gamm>` does not support `theta` estimation
The negative binomial functions from the MASS library are no longer supported.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) modified from Venables and Ripley's `negative.binomial` family.
### References
Venables, B. and B.R. Ripley (2002) Modern Applied Statistics in S, Springer.
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv)
set.seed(3)
n<-400
dat <- gamSim(1,n=n)
g <- exp(dat$f/5)
## negative binomial data...
dat$y <- rnbinom(g,size=3,mu=g)
## known theta fit ...
b0 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=negbin(3),data=dat)
plot(b0,pages=1)
print(b0)
## same with theta estimation...
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=nb(),data=dat)
plot(b,pages=1)
print(b)
b$family$getTheta(TRUE) ## extract final theta estimate
## another example...
set.seed(1)
f <- dat$f
f <- f - min(f)+5;g <- f^2/10
dat$y <- rnbinom(g,size=3,mu=g)
b2 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=nb(link="sqrt"),
data=dat,method="REML")
plot(b2,pages=1)
print(b2)
rm(dat)
```
`ls.size` Size of list elements
--------------------------------
### Description
Produces a named array giving the size, in bytes, of the elements of a list.
### Usage
```
ls.size(x)
```
### Arguments
| | |
| --- | --- |
| `x` | A list. |
### Value
A numeric vector giving the size in bytes of each element of the list `x`. The elements of the array have the same names as the elements of the list. If `x` is not a list then its size in bytes is returned, un-named.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### Examples
```
library(mgcv)
b <- list(M=matrix(runif(100),10,10),quote=
"The world is ruled by idiots because only an idiot would want to rule the world.",
fam=binomial())
ls.size(b)
```
`columb` Reduced version of Columbus OH crime data
---------------------------------------------------
### Description
By district crime data from Columbus OH, together with polygons describing district shape. Useful for illustrating use of simple Markov Random Field smoothers.
### Usage
```
data(columb)
data(columb.polys)
```
### Format
`columb` is a 49 row data frame with the following columns
area
land area of district
home.value
housing value in 1000USD.
income
household income in 1000USD.
crime
residential burglaries and auto thefts per 1000 households.
open.space
measure of open space in district.
district
code identifying district, and matching `names(columb.polys)`.
`columb.polys` contains the polygons defining the areas in the format described below.
### Details
The data frame `columb` relates to the districts whose boundaries are coded in `columb.polys`. `columb.polys[[i]]` is a 2 column matrix, containing the vertices of the polygons defining the boundary of the ith district. `columb.polys[[2]]` has an artificial hole inserted to illustrate how holes in districts can be specified. Different polygons defining the boundary of a district are separated by NA rows in `columb.polys[[1]]`, and a polygon enclosed within another is treated as a hole in that region (a hole should never come first). `names(columb.polys)` matches `columb$district` (order unimportant).
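A minimal sketch of this boundary format (made-up coordinates): a square district containing a square hole, the two boundary polygons separated by an `NA` row:

```r
## outer boundary polygon and enclosed hole, NA-separated...
bnd <- rbind(c(0, 0), c(4, 0), c(4, 4), c(0, 4), c(0, 0))  ## outer boundary
hole <- rbind(c(1, 1), c(1, 3), c(3, 3), c(3, 1), c(1, 1)) ## hole, enclosed in bnd
pc <- list(district1 = rbind(bnd, c(NA, NA), hole))
```

The hole polygon comes after the enclosing boundary, as the Details above require.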
### Source
The data are adapted from the `columbus` example in the `spdep` package, where the original source is given as:
Anselin, Luc. 1988. Spatial econometrics: methods and models. Dordrecht: Kluwer Academic, Table 12.1 p. 189.
### Examples
```
## see ?mrf help files
```
`smooth.construct.ps.smooth.spec` P-splines in GAMs
----------------------------------------------------
### Description
`<gam>` can use univariate P-splines as proposed by Eilers and Marx (1996), specified via terms like `s(x,bs="ps")`. These terms use B-spline bases penalized by discrete penalties applied directly to the basis coefficients. Cyclic P-splines are specified by model terms like `s(x,bs="cp",...)`. These bases can be used in tensor product smooths (see `<te>`).
The advantage of P-splines is the flexible way that penalty and basis order can be mixed (but see also `[d.spline](smooth.construct.bs.smooth.spec)`). This often provides a useful way of ‘taming’ an otherwise poorly behaved smooth. However, in regular use, splines with derivative based penalties (e.g. `"tp"` or `"cr"` bases) tend to result in slightly better MSE performance, presumably because the good approximation theoretic properties of splines are rather closely connected to the use of derivative penalties.
### Usage
```
## S3 method for class 'ps.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'cp.smooth.spec'
smooth.construct(object, data, knots)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(x,bs="ps",...)` or `s(x,bs="cp",...)` |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL`. See details for further information. |
### Details
A smooth term of the form `s(x,bs="ps",m=c(2,3))` specifies a 2nd order P-spline basis (cubic spline), with a third order difference penalty (0th order is a ridge penalty) on the coefficients. If `m` is a single number then it is taken as the basis order and penalty order. The default is the ‘cubic spline like’ `m=c(2,2)`.
The default basis dimension, `k`, is the larger of 10 and `m[1]+1` for a `"ps"` term and the larger of 10 and `m[1]` for a `"cp"` term. `m[1]+1` and `m[1]` are the lower limits on basis dimension for the two types.
If knots are supplied, then the number of knots should be one more than the basis dimension (i.e. `k+1`) for a `"cp"` smooth. For the `"ps"` basis the number of supplied knots should be `k + m[1] + 2`, and the range of the middle `k-m[1]` knots should include all the covariate values. See example.
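These knot counts can be checked with a little arithmetic (a sketch; the helper names here are made up, not part of `mgcv`):

```r
## required number of supplied knots for each basis type
ps.knots <- function(k, m = 2) k + m[1] + 2  ## "ps" basis
cp.knots <- function(k)        k + 1         ## "cp" basis
ps.knots(13)  ## 17, matching z = (-3):13/10 in the example below
cp.knots(12)  ## 13, matching x = seq(0,10,length=13)
```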
Alternatively, for both types of smooth, 2 knots can be supplied, denoting the lower and upper limits between which the spline can be evaluated (Don't make this range too wide, however, or you can end up with no information about some basis coefficients, because the corresponding basis functions have a span that includes no data!). Note that P-splines don't make much sense with uneven knot spacing.
Linear extrapolation is used for prediction that requires extrapolation (i.e. prediction outside the range of the interior `k-m[1]` knots). Such extrapolation is not allowed during basis construction, but is allowed when predicting.
For the `"ps"` basis it is possible to set flags in the smooth specification object, requesting setup according to the SCOP-spline monotonic smoother construction of Pya and Wood (2015). As yet this is not supported by any modelling functions in `mgcv` (see package `scam`). Similarly it is possible to set a `deriv` flag in a smooth specification or smooth object, so that a model or prediction matrix produces the requested derivative of the spline, rather than evaluating it. See examples below.
### Value
An object of class `"pspline.smooth"` or `"cp.smooth"`. See `<smooth.construct>`, for the elements that this object will contain.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Eilers, P.H.C. and B.D. Marx (1996) Flexible Smoothing with B-splines and Penalties. Statistical Science, 11(2):89-121
Pya, N., and Wood, S.N. (2015). Shape constrained additive models. Statistics and Computing, 25(3), 543-559.
### See Also
`[cSplineDes](csplinedes)`, `[adaptive.smooth](smooth.construct.ad.smooth.spec)`, `[d.spline](smooth.construct.bs.smooth.spec)`
### Examples
```
## see ?gam
## cyclic example ...
require(mgcv)
set.seed(6)
x <- sort(runif(200)*10)
z <- runif(200)
f <- sin(x*2*pi/10)+.5
y <- rpois(exp(f),exp(f))
## finished simulating data, now fit model...
b <- gam(y ~ s(x,bs="cp") + s(z,bs="ps"),family=poisson)
## example with supplied knot ranges for x and z (can do just one)
b <- gam(y ~ s(x,bs="cp") + s(z,bs="ps"),family=poisson,
knots=list(x=c(0,10),z=c(0,1)))
## example with supplied knots...
bk <- gam(y ~ s(x,bs="cp",k=12) + s(z,bs="ps",k=13),family=poisson,
knots=list(x=seq(0,10,length=13),z=(-3):13/10))
## plot results...
par(mfrow=c(2,2))
plot(b,select=1,shade=TRUE);lines(x,f-mean(f),col=2)
plot(b,select=2,shade=TRUE);lines(z,0*z,col=2)
plot(bk,select=1,shade=TRUE);lines(x,f-mean(f),col=2)
plot(bk,select=2,shade=TRUE);lines(z,0*z,col=2)
## Example using monotonic constraints via the SCOP-spline
## construction, and of computing derivatives...
x <- seq(0,1,length=100); dat <- data.frame(x)
sspec <- s(x,bs="ps")
sspec$mono <- 1
sm <- smoothCon(sspec,dat)[[1]]
sm$deriv <- 1
Xd <- PredictMat(sm,dat)
## generate random coefficients in the unconstrained
## parameterization...
b <- runif(10)*3-2.5
## exponentiate those parameters indicated by sm$g.index
## to obtain coefficients meeting the constraints...
b[sm$g.index] <- exp(b[sm$g.index])
## plot monotonic spline and its derivative
par(mfrow=c(2,2))
plot(x,sm$X%*%b,type="l",ylab="f(x)")
plot(x,Xd%*%b,type="l",ylab="f'(x)")
## repeat for decrease...
sspec$mono <- -1
sm1 <- smoothCon(sspec,dat)[[1]]
sm1$deriv <- 1
Xd1 <- PredictMat(sm1,dat)
plot(x,sm1$X%*%b,type="l",ylab="f(x)")
plot(x,Xd1%*%b,type="l",ylab="f'(x)")
## Now with sum to zero constraints as well...
sspec$mono <- 1
sm <- smoothCon(sspec,dat,absorb.cons=TRUE)[[1]]
sm$deriv <- 1
Xd <- PredictMat(sm,dat)
b <- b[-1] ## dropping first param
plot(x,sm$X%*%b,type="l",ylab="f(x)")
plot(x,Xd%*%b,type="l",ylab="f'(x)")
sspec$mono <- -1
sm1 <- smoothCon(sspec,dat,absorb.cons=TRUE)[[1]]
sm1$deriv <- 1
Xd1 <- PredictMat(sm1,dat)
plot(x,sm1$X%*%b,type="l",ylab="f(x)")
plot(x,Xd1%*%b,type="l",ylab="f'(x)")
```
r None
`gam.reparam` Finding stable orthogonal re-parameterization of the square root penalty.
----------------------------------------------------------------------------------------
### Description
INTERNAL function for finding an orthogonal re-parameterization which avoids "dominant machine zero leakage" between components of the square root penalty.
### Usage
```
gam.reparam(rS, lsp, deriv)
```
### Arguments
| | |
| --- | --- |
| `rS` | list of the square root penalties: last entry is root of fixed penalty, if `fixed.penalty==TRUE` (i.e. `length(rS)>length(sp)`). The assumption here is that `rS[[i]]` are in a null space of total penalty already; see e.g. `totalPenaltySpace` and `mini.roots`. |
| `lsp` | vector of log smoothing parameters. |
| `deriv` | if `deriv==1` also the first derivative of the log-determinant of the penalty matrix is returned, if `deriv>1` also the second derivative is returned. |
### Value
A list containing
* `S`: the total penalty matrix similarity transformed for stability.
* `rS`: the component square roots, transformed in the same way.
* `Qs`: the orthogonal transformation matrix `S = t(Qs)%*%S0%*%Qs`, where `S0` is the untransformed total penalty implied by `sp` and `rS` on input.
* `det`: log|S|.
* `det1`: dlog|S|/dlog(sp) if `deriv >0`.
* `det2`: hessian of log|S| wrt log(sp) if `deriv>1`.
### Author(s)
Simon N. Wood <[email protected]>.
r None
`ldetS` Getting log generalized determinant of penalty matrices
----------------------------------------------------------------
### Description
INTERNAL function calculating the log generalized determinant of penalty matrix S stored blockwise in an Sl list (which is the output of `Sl.setup`).
### Usage
```
ldetS(Sl, rho, fixed, np, root = FALSE, repara = TRUE,
nt = 1,deriv=2,sparse=FALSE)
```
### Arguments
| | |
| --- | --- |
| `Sl` | the output of `Sl.setup`. |
| `rho` | the log smoothing parameters. |
| `fixed` | an array indicating whether the smoothing parameters are fixed (or free). |
| `np` | number of coefficients. |
| `root` | indicates whether or not to return the matrix square root, `E`, of the total penalty S\_tot. |
| `repara` | if TRUE multi-term blocks will be re-parameterized using `gam.reparam`, and a re-parameterization object supplied in the returned object. |
| `nt` | number of parallel threads to use. |
| `deriv` | order of derivative to use |
| `sparse` | should `E` be sparse? |
### Value
A list containing:
* `ldetS`: the log-determinant of S.
* `ldetS1`: the gradient of the log-determinant of S.
* `ldetS2`: the Hessian of the log-determinant of S.
* `Sl`: the input `Sl` list with modified `rS` terms, if needed, and `rho` added to each block.
* `rp`: a re-parameterization list.
* `E`: a total penalty square root such that `t(E)%*%E = S_tot` (returned only if `root==TRUE`).
### Author(s)
Simon N. Wood <[email protected]>.
r None
`linear.functional.terms` Linear functionals of a smooth in GAMs
-----------------------------------------------------------------
### Description
`<gam>` allows the response variable to depend on linear functionals of smooth terms. Specifically dependencies of the form
*g(mu\_i) = ... + sum\_j L\_ij f(x\_ij) +...*
are allowed, where the *x\_ij* are covariate values and the *L\_ij* are fixed weights. i.e. the response can depend on the weighted sum of the same smooth evaluated at different covariate values. This allows, for example, for the response to depend on the derivatives or integrals of a smooth (approximated by finite differencing or quadrature, respectively). It also allows dependence on predictor functions (sometimes called ‘signal regression’).
The mechanism by which this is achieved is to supply matrices of covariate values to the model smooth terms specified by `<s>` or `<te>` terms in the model formula. Each column of the covariate matrix gives rise to a corresponding column of predictions from the smooth. Let the resulting matrix of evaluated smooth values be F (F will have the same dimension as the covariate matrices). In the absence of a `by` variable these columns are simply summed and added to the linear predictor, i.e. the contribution of the term to the linear predictor is `rowSums(F)`. If a `by` variable is present then it must be a matrix, L, say, of the same dimension as F (and the covariate matrices), and it contains the weights *L\_ij* in the summation given above. So in this case the contribution to the linear predictor is `rowSums(L*F)`.
Note that if *L1* (i.e. `rowSums(L)`) is a constant vector, or there is no `by` variable, then the smooth will automatically be centred in order to ensure identifiability. Otherwise it will not be. Note also that for centred smooths it can be worth replacing the constant term in the model with `rowSums(L)` in order to ensure that predictions are automatically on the right scale.
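Without involving `mgcv` at all, the `rowSums(L*F)` contribution can be sketched with an ordinary function standing in for the fitted smooth:

```r
## linear functional contribution rowSums(L*F), sketched with a known
## function f standing in for the smooth that mgcv would estimate
f <- function(x) sin(2*pi*x)
n <- 5
X <- matrix(runif(n*3), n, 3)  ## each row holds 3 evaluation points
L <- matrix(1/3, n, 3)         ## weights L_ij: here a simple average
F <- f(X)                      ## mgcv builds F by evaluating the smooth at X
contrib <- rowSums(L * F)      ## the term's contribution to the linear predictor
```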
`<predict.gam>` can accept matrix predictors for prediction with such terms, in which case its `newdata` argument will need to be a list. However, when predicting from the model it is not necessary to provide matrix covariate and `by` variable values. For example, to simply examine the underlying smooth function one would use vectors of covariate values and vector `by` variables, with the `by` variable (the equivalent of `L1`, above) set to a vector of ones.
The mechanism is usable with random effect smooths which take factor arguments, by using a trick to create a 2D array of factors. Simply create a factor vector containing the columns of the factor matrix stacked end to end (column major order). Then reset the dimensions of this vector to create the appropriate 2D array: the first dimension should be the number of response data and the second the number of columns of the required factor matrix. You cannot use `matrix` or `data.matrix` to set up the required matrix of factor levels. See example below.
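A minimal sketch of the factor-matrix trick (the key point being that `dim<-` preserves the `factor` class, while `matrix` would strip it):

```r
## build a 2-column 'factor matrix' by resetting dimensions
n <- 6
fac <- factor(sample(letters[1:3], n*2, replace = TRUE))
dim(fac) <- c(n, 2)  ## now a 2D array of factor levels
is.factor(fac)       ## TRUE: still a factor
is.factor(matrix(fac, n, 2))  ## FALSE: matrix() loses the factor class
```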
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### Examples
```
### matrix argument `linear operator' smoothing
library(mgcv)
set.seed(0)
###############################
## simple summation example...#
###############################
n<-400
sig<-2
x <- runif(n, 0, .9)
f2 <- function(x) 0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
x1 <- x + .1
f <- f2(x) + f2(x1) ## response is sum of f at two adjacent x values
y <- f + rnorm(n)*sig
X <- matrix(c(x,x1),n,2) ## matrix covariate contains both x values
b <- gam(y~s(X))
plot(b) ## reconstruction of f
plot(f,fitted(b))
## example of prediction with summation convention...
predict(b,list(X=X[1:3,]))
## example of prediction that simply evaluates smooth (no summation)...
predict(b,data.frame(X=c(.2,.3,.7)))
######################################################################
## Simple random effect model example.
## model: y[i] = f(x[i]) + b[k[i]] - b[j[i]] + e[i]
## k[i] and j[i] index levels of i.i.d. random effects, b.
######################################################################
set.seed(7)
n <- 200
x <- runif(n) ## a continuous covariate
## set up a `factor matrix'...
fac <- factor(sample(letters,n*2,replace=TRUE))
dim(fac) <- c(n,2)
## simulate data from such a model...
nb <- length(levels(fac))
b <- rnorm(nb)
y <- 20*(x-.3)^4 + b[fac[,1]] - b[fac[,2]] + rnorm(n)*.5
L <- matrix(-1,n,2);L[,1] <- 1 ## the differencing 'by' variable
mod <- gam(y ~ s(x) + s(fac,by=L,bs="re"),method="REML")
gam.vcomp(mod)
plot(mod,page=1)
## example of prediction using matrices...
dat <- list(L=L[1:20,],fac=fac[1:20,],x=x[1:20],y=y[1:20])
predict(mod,newdata=dat)
######################################################################
## multivariate integral example. Function `test1' will be integrated#
## (by midpoint quadrature) over 100 equal area sub-squares covering #
## the unit square. Noise is added to the resulting simulated data. #
## `test1' is estimated from the resulting data using two alternative#
## smooths. #
######################################################################
test1 <- function(x,z,sx=0.3,sz=0.4)
{ (pi**sx*sz)*(1.2*exp(-(x-0.2)^2/sx^2-(z-0.3)^2/sz^2)+
0.8*exp(-(x-0.7)^2/sx^2-(z-0.8)^2/sz^2))
}
## create quadrature (integration) grid, in useful order
ig <- 5 ## integration grid within square
mx <- mz <- (1:ig-.5)/ig
ix <- rep(mx,ig);iz <- rep(mz,rep(ig,ig))
og <- 10 ## observation grid
mx <- mz <- (1:og-1)/og
ox <- rep(mx,og);ox <- rep(ox,rep(ig^2,og^2))
oz <- rep(mz,rep(og,og));oz <- rep(oz,rep(ig^2,og^2))
x <- ox + ix/og;z <- oz + iz/og ## full grid, subsquare by subsquare
## create matrix covariates...
X <- matrix(x,og^2,ig^2,byrow=TRUE)
Z <- matrix(z,og^2,ig^2,byrow=TRUE)
## create simulated test data...
dA <- 1/(og*ig)^2 ## quadrature square area
F <- test1(X,Z) ## evaluate on grid
f <- rowSums(F)*dA ## integrate by midpoint quadrature
y <- f + rnorm(og^2)*5e-4 ## add noise
## ... so each y is a noisy observation of the integral of `test1'
## over a 0.1 by 0.1 sub-square from the unit square
## Now fit model to simulated data...
L <- X*0 + dA
## ... let F be the matrix of the smooth evaluated at the x,z values
## in matrices X and Z. rowSums(L*F) gives the model predicted
## integrals of `test1' corresponding to the observed `y'
L1 <- rowSums(L) ## smooths are centred --- need to add in L%*%1
## fit models to reconstruct `test1'....
b <- gam(y~s(X,Z,by=L)+L1-1) ## (L1 and const are confounded here)
b1 <- gam(y~te(X,Z,by=L)+L1-1) ## tensor product alternative
## plot results...
old.par<-par(mfrow=c(2,2))
x<-runif(n);z<-runif(n);
xs<-seq(0,1,length=30);zs<-seq(0,1,length=30)
pr<-data.frame(x=rep(xs,30),z=rep(zs,rep(30,30)))
truth<-matrix(test1(pr$x,pr$z),30,30)
contour(xs,zs,truth)
plot(b)
vis.gam(b,view=c("X","Z"),cond=list(L1=1,L=1),plot.type="contour")
vis.gam(b1,view=c("X","Z"),cond=list(L1=1,L=1),plot.type="contour")
####################################
## A "signal" regression example...#
####################################
rf <- function(x=seq(0,1,length=100)) {
## generates random functions...
m <- ceiling(runif(1)*5) ## number of components
f <- x*0;
mu <- runif(m,min(x),max(x));sig <- (runif(m)+.5)*(max(x)-min(x))/10
for (i in 1:m) f <- f+ dnorm(x,mu[i],sig[i])
f
}
x <- seq(0,1,length=100) ## evaluation points
## example functional predictors...
par(mfrow=c(3,3));for (i in 1:9) plot(x,rf(x),type="l",xlab="x")
## simulate 200 functions and store in rows of L...
L <- matrix(NA,200,100)
for (i in 1:200) L[i,] <- rf() ## simulate the functional predictors
f2 <- function(x) { ## the coefficient function
(0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10)/10
}
f <- f2(x) ## the true coefficient function
y <- L%*%f + rnorm(200)*20 ## simulated response data
## Now fit the model E(y) = L%*%f(x) where f is a smooth function.
## The summation convention is used to evaluate smooth at each value
## in matrix X to get matrix F, say. Then rowSums(L*F) gives E(y).
## create matrix of eval points for each function. Note that
## `smoothCon' is smart and will recognize the duplication...
X <- matrix(x,200,100,byrow=TRUE)
b <- gam(y~s(X,by=L,k=20))
par(mfrow=c(1,1))
plot(b,shade=TRUE);lines(x,f,col=2)
```
r None
`gam.fit` GAM P-IRLS estimation with GCV/UBRE smoothness estimation
--------------------------------------------------------------------
### Description
This is an internal function of package `mgcv`. It is a modification of the function `glm.fit`, designed to be called from `gam` when performance iteration is selected (not the default). The major modification is that rather than solving a weighted least squares problem at each IRLS step, a weighted, penalized least squares problem is solved at each IRLS step with smoothing parameters associated with each penalty chosen by GCV or UBRE, using routine `<magic>`. For further information on usage see code for `gam`. Some regularization of the IRLS weights is also permitted as a way of addressing identifiability related problems (see `<gam.control>`). Negative binomial parameter estimation is supported.
The basic idea of estimating smoothing parameters at each step of the P-IRLS is due to Gu (1992), and is termed ‘performance iteration’ or 'performance oriented iteration'.
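A heavily stripped-down sketch of one penalized IRLS scheme (Poisson response, log link, a single *fixed* ridge-type smoothing parameter; the real `gam.fit` instead re-estimates the smoothing parameters by GCV/UBRE via `magic` at each step, and uses a proper spline basis and penalty):

```r
## minimal penalized IRLS sketch: solve a weighted, penalized least
## squares problem at each step (fixed smoothing parameter lambda)
set.seed(1)
n <- 100
x <- seq(0, 1, length = n)
X <- cbind(1, poly(x, 5))            ## polynomial basis standing in for a spline
y <- rpois(n, exp(sin(2*pi*x)))
S <- diag(c(0, rep(1, 5)))           ## ridge penalty, intercept unpenalized
lambda <- 0.1
beta <- rep(0, ncol(X))
for (it in 1:50) {
  eta <- as.vector(X %*% beta); mu <- exp(eta)
  z <- eta + (y - mu)/mu             ## working response for log link
  ## penalized weighted least squares step (IRLS weights = mu here)
  beta.new <- solve(t(X) %*% (mu * X) + lambda * S, t(X) %*% (mu * z))
  if (max(abs(beta.new - beta)) < 1e-8) { beta <- beta.new; break }
  beta <- beta.new
}
```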
### Usage
```
gam.fit(G, start = NULL, etastart = NULL,
mustart = NULL, family = gaussian(),
control = gam.control(),gamma=1,
fixedSteps=(control$maxit+1),...)
```
### Arguments
| | |
| --- | --- |
| `G` | An object of the type returned by `<gam>` when `fit=FALSE`. |
| `start` | Initial values for the model coefficients. |
| `etastart` | Initial values for the linear predictor. |
| `mustart` | Initial values for the expected response. |
| `family` | The family object, specifying the distribution and link to use. |
| `control` | Control option list as returned by `<gam.control>`. |
| `gamma` | Parameter which can be increased to up the cost of each effective degree of freedom in the GCV or AIC/UBRE objective. |
| `fixedSteps` | How many steps to take: useful when only using this routine to get rough starting values for other methods. |
| `...` | Other arguments: ignored. |
### Value
A list of fit information.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Gu (1992) Cross-validating non-Gaussian data. J. Comput. Graph. Statist. 1:169-179
Gu and Wahba (1991) Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method. SIAM J. Sci. Statist. Comput. 12:383-398
Wood, S.N. (2000) Modelling and Smoothing Parameter Estimation with Multiple Quadratic Penalties. J.R.Statist.Soc.B 62(2):413-428
Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Amer. Statist. Ass. 99:637-686
### See Also
`<gam.fit3>`, `<gam>`, `<magic>`
r None
`shash` Sinh-arcsinh location scale and shape model family
-----------------------------------------------------------
### Description
The `shash` family implements the four-parameter sinh-arcsinh (shash) distribution of Jones and Pewsey (2009). The location, scale, skewness and kurtosis of the density can depend on additive smooth predictors. Usable only with `gam`, the linear predictors are specified via a list of formulae. It is worth carefully considering whether the data are sufficient to support estimation of such a flexible model before using it.
### Usage
```
shash(link = list("identity", "logeb", "identity", "identity"),
b = 1e-2, phiPen = 1e-3)
```
### Arguments
| | |
| --- | --- |
| `link` | vector of four characters indicating the link function for location, scale, skewness and kurtosis parameters. |
| `b` | positive parameter of the logeb link function, see Details. |
| `phiPen` | positive multiplier of a ridge penalty on kurtosis parameter. Do not touch it unless you know what you are doing, see Details. |
### Details
The density function of the shash family is
*p(y|μ,σ,ε,δ)=C(z) exp{-S(z)^2/2} / σ{2π(1+z^2)}^1/2,*
where *C(z)={1+S(z)^2}^1/2* , *S(z)=sinh{δ sinh^(-1)(z)-ε}* and *z=(y-μ)/(σδ)*. Here *μ* and *σ > 0* control, respectively, location and scale, *ε* determines skewness, while *δ > 0* controls tailweight. `shash` can model skewness to either side, depending on the sign of *ε*. Also, shash can have tails that are lighter (*δ>1*) or heavier (*0<δ<1*) than a normal. For fitting purposes, here we are using *τ = log(σ)* and *φ = log(δ)*.
The link function used for *τ* is logeb, which is *η = log{exp(τ)-b}*, so that the inverse link is *τ = log(σ) = log{exp(η)+b}*. The point is that we don't allow *σ* to become smaller than the small constant b. The likelihood includes a ridge penalty *- phiPen \* φ^2*, which shrinks *φ* toward zero. When sufficient data are available the ridge penalty does not change the fit much, but it is useful to include it when fitting the model to small data sets, to avoid *φ* diverging to +infinity (a problem already identified by Jones and Pewsey (2009)).
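Both the density formula and the logeb link are easy to sanity-check in a few lines (illustrative code, not part of the family's implementation); with *ε=0* and *δ=1* the density reduces to the standard normal:

```r
## shash density written out from the formula above
dshash <- function(y, mu, sig, eps, del) {
  z <- (y - mu)/(sig*del)
  S <- sinh(del*asinh(z) - eps)  ## S(z)
  C <- sqrt(1 + S^2)             ## C(z)
  C * exp(-S^2/2) / (sig * sqrt(2*pi*(1 + z^2)))
}
y <- seq(-3, 3, length = 7)
max(abs(dshash(y, 0, 1, 0, 1) - dnorm(y)))  ## ~0: reduces to N(0,1)

## logeb link and its inverse
b <- 1e-2
logeb     <- function(tau) log(exp(tau) - b)  ## eta = log{exp(tau)-b}
logeb.inv <- function(eta) log(exp(eta) + b)  ## tau = log{exp(eta)+b}
logeb.inv(logeb(0.5))  ## recovers tau = 0.5
```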
### Value
An object inheriting from class `general.family`.
### Author(s)
Matteo Fasiolo <[email protected]> and Simon N. Wood.
### References
Jones, M. and A. Pewsey (2009). Sinh-arcsinh distributions. Biometrika 96 (4), 761-780.
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
###############
# Shash dataset
###############
## Simulate some data from shash
set.seed(847)
n <- 1000
x <- seq(-4, 4, length.out = n)
X <- cbind(1, x, x^2)
beta <- c(4, 1, 1)
mu <- X %*% beta
sigma = .5+0.4*(x+4)*.5 # Scale
eps = 2*sin(x) # Skewness
del = 1 + 0.2*cos(3*x) # Kurtosis
dat <- mu + (del*sigma)*sinh((1/del)*asinh(qnorm(runif(n))) + (eps/del))
dataf <- data.frame(cbind(dat, x))
names(dataf) <- c("y", "x")
plot(x, dat, xlab = "x", ylab = "y")
## Fit model
fit <- gam(list(y ~ s(x), # <- model for location
~ s(x), # <- model for log-scale
~ s(x), # <- model for skewness
~ s(x, k = 20)), # <- model for log-kurtosis
data = dataf,
family = shash, # <- new family
optimizer = "efs")
## Plotting truth and estimates for each parameters of the density
muE <- fit$fitted[ , 1]
sigE <- exp(fit$fitted[ , 2])
epsE <- fit$fitted[ , 3]
delE <- exp(fit$fitted[ , 4])
par(mfrow = c(2, 2))
plot(x, muE, type = 'l', ylab = expression(mu(x)), lwd = 2)
lines(x, mu, col = 2, lty = 2, lwd = 2)
legend("top", c("estimated", "truth"), col = 1:2, lty = 1:2, lwd = 2)
plot(x, sigE, type = 'l', ylab = expression(sigma(x)), lwd = 2)
lines(x, sigma, col = 2, lty = 2, lwd = 2)
plot(x, epsE, type = 'l', ylab = expression(epsilon(x)), lwd = 2)
lines(x, eps, col = 2, lty = 2, lwd = 2)
plot(x, delE, type = 'l', ylab = expression(delta(x)), lwd = 2)
lines(x, del, col = 2, lty = 2, lwd = 2)
## Plotting true and estimated conditional density
par(mfrow = c(1, 1))
plot(x, dat, pch = '.', col = "grey", ylab = "y", ylim = c(-35, 70))
for(qq in c(0.001, 0.01, 0.1, 0.5, 0.9, 0.99, 0.999)){
est <- fit$family$qf(p=qq, mu = fit$fitted)
true <- mu + (del * sigma) * sinh((1/del) * asinh(qnorm(qq)) + (eps/del))
lines(x, est, type = 'l', col = 1, lwd = 2)
lines(x, true, type = 'l', col = 2, lwd = 2, lty = 2)
}
legend("topleft", c("estimated", "truth"), col = 1:2, lty = 1:2, lwd = 2)
#####################
## Motorcycle example
#####################
# Here shash is overkill, in fact the fit is not good, relative
# to what we would get with mgcv::gaulss
library(MASS)
b <- gam(list(accel~s(times, k=20, bs = "ad"), ~s(times, k = 10), ~1, ~1),
data=mcycle, family=shash)
par(mfrow = c(1, 1))
xSeq <- data.frame(cbind("accel" = rep(0, 1e3),
"times" = seq(2, 58, length.out = 1e3)))
pred <- predict(b, newdata = xSeq)
plot(mcycle$times, mcycle$accel, ylim = c(-180, 100))
for(qq in c(0.1, 0.3, 0.5, 0.7, 0.9)){
est <- b$family$qf(p=qq, mu = pred)
lines(xSeq$times, est, type = 'l', col = 2)
}
plot(b, pages = 1, scale = FALSE)
```
r None
`s` Defining smooths in GAM formulae
-------------------------------------
### Description
Function used in definition of smooth terms within `gam` model formulae. The function does not evaluate a (spline) smooth - it exists purely to help set up a model using spline based smooths.
### Usage
```
s(..., k=-1,fx=FALSE,bs="tp",m=NA,by=NA,xt=NULL,id=NULL,sp=NULL,pc=NULL)
```
### Arguments
| | |
| --- | --- |
| `...` | a list of variables that are the covariates that this smooth is a function of. Transformations whose form depends on the values of the data are best avoided here: e.g. `s(log(x))` is fine, but `s(I(x/sd(x)))` is not (see `<predict.gam>`). |
| `k` | the dimension of the basis used to represent the smooth term. The default depends on the number of variables that the smooth is a function of. `k` should not be less than the dimension of the null space of the penalty for the term (see `<null.space.dimension>`), but will be reset if it is. See `<choose.k>` for further information. |
| `fx` | indicates whether the term is a fixed d.f. regression spline (`TRUE`) or a penalized regression spline (`FALSE`). |
| `bs` | a two letter character string indicating the (penalized) smoothing basis to use (e.g. `"tp"` for thin plate regression spline, `"cr"` for cubic regression spline). See `<smooth.terms>` for an overview of what is available. |
| `m` | The order of the penalty for this term (e.g. 2 for normal cubic spline penalty with 2nd derivatives when using default t.p.r.s basis). `NA` signals autoinitialization. Only some smooth classes use this. The `"ps"` class can use a 2 item array giving the basis and penalty order separately. |
| `by` | a numeric or factor variable of the same dimension as each covariate. In the numeric vector case the elements multiply the smooth, evaluated at the corresponding covariate values (a ‘varying coefficient model’ results). For the numeric `by` variable case the resulting smooth is not usually subject to a centering constraint (so the `by variable` should not be added as an additional main effect). In the factor `by` variable case a replicate of the smooth is produced for each factor level (these smooths will be centered, so the factor usually needs to be added as a main effect as well). See `<gam.models>` for further details. A `by` variable may also be a matrix if covariates are matrices: in this case implements linear functional of a smooth (see `<gam.models>` and `<linear.functional.terms>` for details). |
| `xt` | Any extra information required to set up a particular basis. Used e.g. to set large data set handling behaviour for `"tp"` basis. If `xt$sumConv` exists and is `FALSE` then the summation convention for matrix arguments is turned off. |
| `id` | A label or integer identifying this term in order to link its smoothing parameters to others of the same type. If two or more terms have the same `id` then they will have the same smoothing parameters, and, by default, the same bases (first occurrence defines basis type, but data from all terms are used in basis construction). An `id` with a factor `by` variable causes the smooths at each factor level to have the same smoothing parameter. |
| `sp` | any supplied smoothing parameters for this term. Must be an array of the same length as the number of penalties for this smooth. Positive or zero elements are taken as fixed smoothing parameters. Negative elements signal auto-initialization. Overrides values supplied in the `sp` argument to `<gam>`. Ignored by `gamm`. |
| `pc` | If not `NULL`, signals a point constraint: the smooth should pass through zero at the point given here (as a vector or list with names corresponding to the smooth names). Never ignored if supplied. See `<identifiability>`. |
### Details
The function does not evaluate the variable arguments. To use this function to specify use of your own smooths, note the relationships between the inputs and the output object and see the example in `<smooth.construct>`.
### Value
A class `xx.smooth.spec` object, where `xx` is a basis identifying code given by the `bs` argument of `s`. These `smooth.spec` objects define smooths and are turned into bases and penalties by `smooth.construct` method functions.
The returned object contains the following items:
| | |
| --- | --- |
| `term` | An array of text strings giving the names of the covariates that the term is a function of. |
| `bs.dim` | The dimension of the basis used to represent the smooth. |
| `fixed` | TRUE if the term is to be treated as a pure regression spline (with fixed degrees of freedom); FALSE if it is to be treated as a penalized regression spline |
| `dim` | The dimension of the smoother - i.e. the number of covariates that it is a function of. |
| `p.order` | The order of the t.p.r.s. penalty, or 0 for auto-selection of the penalty order. |
| `by` | is the name of any `by` variable as text (`"NA"` for none). |
| `label` | A suitable text label for this smooth term. |
| `xt` | The object passed in as argument `xt`. |
| `id` | An identifying label or number for the smooth, linking it to other smooths. Defaults to `NULL` for no linkage. |
| `sp` | array of smoothing parameters for the term (negative for auto-estimation). Defaults to `NULL`. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<te>`, `<gam>`, `<gamm>`
### Examples
```
# example utilising `by' variables
library(mgcv)
set.seed(0)
n<-200;sig2<-4
x1 <- runif(n, 0, 1);x2 <- runif(n, 0, 1);x3 <- runif(n, 0, 1)
fac<-c(rep(1,n/2),rep(2,n/2)) # create factor
fac.1<-rep(0,n)+(fac==1);fac.2<-1-fac.1 # and dummy variables
fac<-as.factor(fac)
f1 <- exp(2 * x1) - 3.75887
f2 <- 0.2 * x1^11 * (10 * (1 - x1))^6 + 10 * (10 * x1)^3 * (1 - x1)^10
f<-f1*fac.1+f2*fac.2+x2
e <- rnorm(n, 0, sqrt(abs(sig2)))
y <- f + e
# NOTE: smooths will be centered, so need to include fac in model....
b<-gam(y~fac+s(x1,by=fac)+x2)
plot(b,pages=1)
```
r None
`vcov.gam` Extract parameter (estimator) covariance matrix from GAM fit
------------------------------------------------------------------------
### Description
Extracts the Bayesian posterior covariance matrix of the parameters or frequentist covariance matrix of the parameter estimators from a fitted `gam` object.
### Usage
```
## S3 method for class 'gam'
vcov(object, freq = FALSE, dispersion = NULL,unconditional=FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | fitted model object of class `gam` as produced by `gam()`. |
| `freq` | `TRUE` to return the frequentist covariance matrix of the parameter estimators, `FALSE` to return the Bayesian posterior covariance matrix of the parameters. |
| `dispersion` | a value for the dispersion parameter: not normally used. |
| `unconditional` | if `TRUE` (and `freq==FALSE`) then the Bayesian smoothing parameter uncertainty corrected covariance matrix is returned, if available. |
| `...` | other arguments, currently ignored. |
### Details
Basically, just extracts `object$Ve` or `object$Vp` from a `[gamObject](gamobject)`.
### Value
A matrix corresponding to the estimated frequentist covariance matrix of the model parameter estimators/coefficients, or the estimated posterior covariance matrix of the parameters, depending on the argument `freq`.
### Author(s)
Henric Nilsson. Maintained by Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2006) On confidence intervals for generalized additive models based on penalized regression splines. Australian and New Zealand Journal of Statistics. 48(4): 445-464.
### See Also
`<gam>`
### Examples
```
require(mgcv)
n <- 100
x <- runif(n)
y <- sin(x*2*pi) + rnorm(n)*.2
mod <- gam(y~s(x,bs="cc",k=10),knots=list(x=seq(0,1,length=10)))
diag(vcov(mod))
```
r None
`get.var` Get named variable or evaluate expression from list or data.frame
----------------------------------------------------------------------------
### Description
This routine takes a text string and a data frame or list. It first sees if the string is the name of a variable in the data frame/list. If it is then the value of this variable is returned. Otherwise the routine tries to evaluate the expression within the data.frame/list (but nowhere else) and if successful returns the result. If neither step works then `NULL` is returned. The routine is useful for processing gam formulae. If the variable is a matrix then it is coerced to a numeric vector, by default.
### Usage
```
get.var(txt,data,vecMat=TRUE)
```
### Arguments
| | |
| --- | --- |
| `txt` | a text string which is either the name of a variable in `data` or when parsed is an expression that can be evaluated in `data`. It can also be neither in which case the function returns `NULL`. |
| `data` | A data frame or list. |
| `vecMat` | Should matrices be coerced to numeric vectors? |
### Value
The evaluated variable or `NULL`. May be coerced to a numeric vector if it's a matrix.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam>`
### Examples
```
require(mgcv)
y <- 1:4;dat<-data.frame(x=5:10)
get.var("x",dat)
get.var("y",dat)
get.var("x==6",dat)
dat <- list(X=matrix(1:6,3,2))
get.var("X",dat)
```
r None
`Sl.repara` Applying re-parameterization from log-determinant of penalty matrix to model matrix.
-------------------------------------------------------------------------------------------------
### Description
INTERNAL routine to apply re-parameterization from log-determinant of penalty matrix, `ldetS` to model matrix, `X`, blockwise.
### Usage
```
Sl.repara(rp, X, inverse = FALSE, both.sides = TRUE)
```
### Arguments
| | |
| --- | --- |
| `rp` | reparametrization. |
| `X` | if `X` is a matrix it is assumed to be a model matrix whereas if `X` is a vector it is assumed to be a parameter vector. |
| `inverse` | if `TRUE` an inverse re-parametrization is performed. |
| `both.sides` | if `inverse==TRUE` and `both.sides==FALSE` then the re-parametrization is only applied to the rhs, as appropriate for a Cholesky factor. If `both.sides==FALSE`, `X` is a vector and `inverse==FALSE` then `X` is taken as a coefficient vector (so re-parametrization is inverse of that for the model matrix). |
### Value
A re-parametrized version of `X`.
### Author(s)
Simon N. Wood <[email protected]>.
r None
`gumbls` Gumbel location-scale model family
--------------------------------------------
### Description
The `gumbls` family implements Gumbel location scale additive models in which the location and scale parameters (see details) can depend on additive smooth predictors. Useable only with `<gam>`, the linear predictors are specified via a list of formulae.
### Usage
```
gumbls(link=list("identity","log"),b=-7)
```
### Arguments
| | |
| --- | --- |
| `link` | two item list specifying the link for the location *m* and log scale parameter *B*. See details for meaning, which may not be intuitive. |
| `b` | The minimum log scale parameter. |
### Details
Let *z = (y - m)exp(-B)*, then the log Gumbel density is *l = -B - z - exp(-z)*. The expected value of a Gumbel r.v. is *m + g exp(B)* where *g* is Euler's constant (about 0.57721566). The corresponding variance is *pi^2 exp(2B)/6*.
`gumbls` is used with `<gam>` to fit Gumbel location - scale models parameterized in terms of location parameter *m* and the log scale parameter *B*. Note that `identity` link for the scale parameter means that the corresponding linear predictor gives *B* directly. By default the `log` link for the scale parameter simply forces the log scale parameter to have a lower limit given by argument `b`: if *l* is the linear predictor for the log scale parameter, *B*, then *B = b + log(1+e^l)*.
`gam` is called with a list containing 2 formulae, the first specifies the response on the left hand side and the structure of the linear predictor for location parameter, *m*, on the right hand side. The second is one sided, specifying the linear predictor for the log scale, *B*, on the right hand side.
The fitted values for this family will be a two column matrix. The first column is the mean, and the second column is the log scale parameter, *B*. Predictions using `<predict.gam>` will also produce 2 column matrices for `type` `"link"` and `"response"`. The first column is on the original data scale when `type="response"` and on the log mean scale of the linear predictor when `type="link"`. The second column when `type="response"` is again the log scale parameter, but is on the linear predictor when `type="link"`.
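The moment formulae above are easy to check by simulation, using the Gumbel quantile transform *m - exp(B) log(-log(u))* (the same construction used in the Examples section; the parameter values chosen here are arbitrary illustrations):

```r
## Simulate Gumbel deviates for fixed location m and log scale B, and
## compare sample moments with the stated formulae.
set.seed(1)
m <- 2; B <- 0.5
g <- -digamma(1)                 ## Euler's constant, 0.57721566...
u <- runif(1e6)
y <- m - exp(B) * log(-log(u))   ## Gumbel quantile transform
c(mean(y), m + g * exp(B))       ## both close to 2.95
c(var(y), pi^2 * exp(2 * B) / 6) ## both close to 4.47
```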
### Value
An object inheriting from class `general.family`.
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv)
## simulate some data
f0 <- function(x) 2 * sin(pi * x)
f1 <- function(x) exp(2 * x)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
n <- 400;set.seed(9)
x0 <- runif(n);x1 <- runif(n);
x2 <- runif(n);x3 <- runif(n);
mu <- f0(x0)+f1(x1)
beta <- exp(f2(x2)/5)
y <- mu - beta*log(-log(runif(n))) ## Gumbel quantile function
b <- gam(list(y~s(x0)+s(x1),~s(x2)+s(x3)),family=gumbls)
plot(b,pages=1,scale=0)
summary(b)
gam.check(b)
```
r None
`mvn` Multivariate normal additive models
------------------------------------------
### Description
Family for use with `<gam>` implementing smooth multivariate Gaussian regression. The means for each dimension are given by a separate linear predictor, which may contain smooth components. Extra linear predictors may also be specified giving terms which are shared between components (see `<formula.gam>`). The Cholesky factor of the response precision matrix is estimated as part of fitting.
### Usage
```
mvn(d=2)
```
### Arguments
| | |
| --- | --- |
| `d` | The dimension of the response (>1). |
### Details
The response is `d` dimensional multivariate normal, where the covariance matrix is estimated, and the means for each dimension have separate linear predictors. Model specification is via a list of gam like formulae - one for each dimension. See example.
Currently the family ignores any prior weights, and is implemented using first derivative information sufficient for BFGS estimation of smoothing parameters. `"response"` residuals give raw residuals, while `"deviance"` residuals are standardized to be approximately independent standard normal if all is well.
### Value
An object of class `general.family`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`[gaussian](../../stats/html/family)`
### Examples
```
library(mgcv)
## simulate some data...
V <- matrix(c(2,1,1,2),2,2)
f0 <- function(x) 2 * sin(pi * x)
f1 <- function(x) exp(2 * x)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
n <- 300
x0 <- runif(n);x1 <- runif(n);
x2 <- runif(n);x3 <- runif(n)
y <- matrix(0,n,2)
for (i in 1:n) {
mu <- c(f0(x0[i])+f1(x1[i]),f2(x2[i]))
y[i,] <- rmvn(1,mu,V)
}
dat <- data.frame(y0=y[,1],y1=y[,2],x0=x0,x1=x1,x2=x2,x3=x3)
## fit model...
b <- gam(list(y0~s(x0)+s(x1),y1~s(x2)+s(x3)),family=mvn(d=2),data=dat)
b
summary(b)
plot(b,pages=1)
solve(crossprod(b$family$data$R)) ## estimated cov matrix
```
r None
`rTweedie` Generate Tweedie random deviates
--------------------------------------------
### Description
Generates Tweedie random deviates, for powers between 1 and 2.
### Usage
```
rTweedie(mu,p=1.5,phi=1)
```
### Arguments
| | |
| --- | --- |
| `mu` | vector of expected values for the deviates to be generated. One deviate generated for each element of `mu`. |
| `p` | the variance of a deviate is proportional to its mean, `mu`, raised to the power `p`. `p` must be between 1 and 2: 1 is Poisson-like (exactly Poisson if `phi=1`), 2 is gamma. |
| `phi` | The scale parameter. The variance of the deviates is given by `phi*mu^p`. |
### Details
A Tweedie random variable with 1<p<2 is a sum of `N` gamma random variables where `N` has a Poisson distribution, with mean `mu^(2-p)/((2-p)*phi)`. The Gamma random variables that are summed have shape parameter `(2-p)/(p-1)` and scale parameter `phi*(p-1)*mu^(p-1)` (note that this scale parameter is different from the scale parameter for a GLM with Gamma errors).
This is a restricted, but faster, version of `rtweedie` from the `tweedie` package.
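The compound Poisson-gamma construction just described can be sketched directly (the function name `rtw1` is invented here for illustration and is not the package implementation):

```r
## Tweedie deviates via the compound Poisson-gamma representation:
## N ~ Poisson(mu^(2-p)/((2-p)*phi)), then sum N gamma deviates with
## shape (2-p)/(p-1) and scale phi*(p-1)*mu^(p-1). A sum of N such
## gammas is itself gamma with shape N*(2-p)/(p-1), by additivity.
rtw1 <- function(mu, p = 1.5, phi = 1) {
  N <- rpois(length(mu), mu^(2 - p) / ((2 - p) * phi))
  rgamma(length(mu), shape = N * (2 - p) / (p - 1),
         scale = phi * (p - 1) * mu^(p - 1)) ## shape 0 gives exactly 0
}
set.seed(2)
y <- rtw1(rep(4, 1e5), p = 1.5, phi = 1)
c(mean(y), var(y))  ## close to mu = 4 and phi*mu^p = 8
```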
### Value
A vector of random deviates from a Tweedie distribution, expected value vector `mu`, variance vector `phi*mu^p`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Peter K Dunn (2009). tweedie: Tweedie exponential family models. R package version 2.0.2. <https://cran.r-project.org/package=tweedie>
### See Also
`[ldTweedie](ldtweedie)`, `[Tweedie](tweedie)`
### Examples
```
library(mgcv)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
n <- 300
x <- runif(n)
mu <- exp(f2(x)/3+.1);x <- x*10 - 4
y <- rTweedie(mu,p=1.5,phi=1.3)
b <- gam(y~s(x,k=20),family=Tweedie(p=1.5))
b
plot(b)
```
r None
`Sl.setup` Setting up a list representing a block diagonal penalty matrix
--------------------------------------------------------------------------
### Description
INTERNAL function for setting up a list representing a block diagonal penalty matrix from the object produced by `gam.setup`.
### Usage
```
Sl.setup(G,cholesky=FALSE,no.repara=FALSE,sparse=FALSE)
```
### Arguments
| | |
| --- | --- |
| `G` | the output of `gam.setup`. |
| `cholesky` | re-parameterize using Cholesky only. |
| `no.repara` | set to `TRUE` to turn off all initial reparameterization. |
| `sparse` | sparse setup? |
### Value
A list with an element for each block. For block, b, `Sl[[b]]` is a list with the following elements
* `repara`: should re-parameterization be applied to model matrix, etc? Usually `FALSE` if non-linear in coefficients.
* `start, stop`: such that `start:stop` are the indexes of the parameters of this block.
* `S`: a list of penalty matrices for the block (`dim = stop-start+1`). If `length(S)==1` then this will be an identity penalty. Otherwise it is a multiple penalty, and an `rS` list of square root penalty matrices will be added. `S` (if `repara==TRUE`) and `rS` (always) will be projected into range space of total penalty matrix.
* `rS`: square root of penalty matrices if multiple penalties are used.
* `D`: a reparameterization matrix for the block. Applies to cols/params in `start:stop`. If numeric then `X[,start:stop]%*%diag(D)` is re-parametrization of `X[,start:stop]`, and `b.orig = D*b.repara` (where `b.orig` is the original parameter vector). If matrix then `X[,start:stop]%*%D` is re-parametrization of `X[,start:stop]`, and `b.orig = D%*%b.repara` (where `b.orig` is the original parameter vector).
### Author(s)
Simon N. Wood <[email protected]>.
r None
`single.index` Single index models with mgcv
---------------------------------------------
### Description
Single index models contain smooth terms with arguments that are linear combinations of other covariates. e.g. *s(Xa)* where *a* has to be estimated. For identifiability, assume *||a||=1* with positive first element. One simple way to fit such models is to use `<gam>` to profile out the smooth model coefficients and smoothing parameters, leaving only the *a* to be estimated by a general purpose optimizer.
Example code is provided below, which can be easily adapted to include multiple single index terms, parametric terms and further smooths. Note the initialization strategy. First estimate *a* without penalization to get starting values and then do the full fit. Otherwise it is easy to get trapped in a local optimum in which the smooth is linear. An alternative is to initialize using fixed penalization (via the `sp` argument to `<gam>`).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### Examples
```
require(mgcv)
si <- function(theta,y,x,z,opt=TRUE,k=10,fx=FALSE) {
## Fit single index model using gam call, given theta (defines alpha).
## Return ML if opt==TRUE and fitted gam with theta added otherwise.
## Suitable for calling from 'optim' to find optimal theta/alpha.
alpha <- c(1,theta) ## constrained alpha defined using free theta
kk <- sqrt(sum(alpha^2))
alpha <- alpha/kk ## so now ||alpha||=1
a <- x%*%alpha ## argument of smooth
b <- gam(y~s(a,fx=fx,k=k)+s(z),family=poisson,method="ML") ## fit model
if (opt) return(b$gcv.ubre) else {
b$alpha <- alpha ## add alpha
J <- outer(alpha,-theta/kk^2) ## compute Jacobian
for (j in 1:length(theta)) J[j+1,j] <- J[j+1,j] + 1/kk
b$J <- J ## dalpha_i/dtheta_j
return(b)
}
} ## si
## simulate some data from a single index model...
set.seed(1)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
n <- 200;m <- 3
x <- matrix(runif(n*m),n,m) ## the covariates for the single index part
z <- runif(n) ## another covariate
alpha <- c(1,-1,.5); alpha <- alpha/sqrt(sum(alpha^2))
eta <- as.numeric(f2((x%*%alpha+.41)/1.4)+1+z^2*2)/4
mu <- exp(eta)
y <- rpois(n,mu) ## Poi response
## now fit to the simulated data...
th0 <- c(-.8,.4) ## close to truth for speed
## get initial theta, using no penalization...
f0 <- nlm(si,th0,y=y,x=x,z=z,fx=TRUE,k=5)
## now get theta/alpha with smoothing parameter selection...
f1 <- nlm(si,f0$estimate,y=y,x=x,z=z,hessian=TRUE,k=10)
theta.est <-f1$estimate
## Alternative using 'optim'...
th0 <- rep(0,m-1)
## get initial theta, using no penalization...
f0 <- optim(th0,si,y=y,x=x,z=z,fx=TRUE,k=5)
## now get theta/alpha with smoothing parameter selection...
f1 <- optim(f0$par,si,y=y,x=x,z=z,hessian=TRUE,k=10)
theta.est <-f1$par
## extract and examine fitted model...
b <- si(theta.est,y,x,z,opt=FALSE) ## extract best fit model
plot(b,pages=1)
b
b$alpha
## get sd for alpha...
Vt <- b$J%*%solve(f1$hessian,t(b$J))
diag(Vt)^.5
```
r None
`smoothCon` Prediction/Construction wrapper functions for GAM smooth terms
---------------------------------------------------------------------------
### Description
Wrapper functions for construction of and prediction from smooth terms in a GAM. The purpose of the wrappers is to allow user-transparent re-parameterization of smooth terms, in order to allow identifiability constraints to be absorbed into the parameterization of each term, if required. The routine also handles ‘by’ variables and construction of identifiability constraints automatically, although this behaviour can be over-ridden.
### Usage
```
smoothCon(object,data,knots=NULL,absorb.cons=FALSE,
scale.penalty=TRUE,n=nrow(data),dataX=NULL,
null.space.penalty=FALSE,sparse.cons=0,
diagonal.penalty=FALSE,apply.by=TRUE,modCon=0)
PredictMat(object,data,n=nrow(data))
```
### Arguments
| | |
| --- | --- |
| `object` | is a smooth specification object or a smooth object. |
| `data` | A data frame, model frame or list containing the values of the (named) covariates at which the smooth term is to be evaluated. If it's a list then `n` must be supplied. |
| `knots` | An optional data frame supplying any knot locations to be supplied for basis construction. |
| `absorb.cons` | Set to `TRUE` in order to have identifiability constraints absorbed into the basis. |
| `scale.penalty` | should the penalty coefficient matrix be scaled to have approximately the same ‘size’ as the inner product of the terms model matrix with itself? This can improve the performance of `<gamm>` fitting. |
| `n` | number of values for each covariate, or if a covariate is a matrix, the number of rows in that matrix: must be supplied explicitly if `data` is a list. |
| `dataX` | Sometimes the basis should be set up using data in `data`, but the model matrix should be constructed with another set of data provided in `dataX` — `n` is assumed to be the same for both. Facilitates smooth id's. |
| `null.space.penalty` | Should an extra penalty be added to the smooth which will penalize the components of the smooth in the penalty null space: provides a way of penalizing terms out of the model altogether. |
| `apply.by` | set to `FALSE` to have the basis set up exactly as in the default case, but with an additional matrix `X0` returned in the smooth object, containing the model matrix without the `by` variable, if a `by` variable is present. Useful for `bam` discrete method setup. |
| `sparse.cons` | If `0` then default sum to zero constraints are used. If `-1` then sweep and drop sum to zero constraints are used (default with `<bam>`). If `1` then one coefficient is set to zero as constraint for sparse smooths. If `2` then sparse coefficient sum to zero constraints are used for sparse smooths. None of these options has an effect if the smooth supplies its own constraint. |
| `diagonal.penalty` | If `TRUE` then the smooth is reparameterized to turn the penalty into an identity matrix, with the final diagonal elements zeroed (corresponding to the penalty nullspace). May result in a matrix `diagRP` in the returned object for use by `PredictMat`. |
| `modCon` | force modification of any smooth supplied constraints. 0 - do nothing. 1 - delete supplied constraints, replacing with automatically generated ones. 2 - set fit and predict constraint to predict constraint. 3 - set fit and predict constraint to fit constraint. |
### Details
These wrapper functions exist to allow smooths specified using `<smooth.construct>` and `[Predict.matrix](predict.matrix)` method functions to be re-parameterized so that identifiability constraints are no longer required in fitting. This is done in a user transparent manner, but is typically of no importance in use of GAMs. The routines also handle `by` variables and will create default identifiability constraints.
If a user defined smooth constructor handles `by` variables itself, then its returned smooth object should contain an object `by.done`. If this does not exist then `smoothCon` will use the default code. Similarly if a user defined `Predict.matrix` method handles `by` variables internally then the returned matrix should have a `"by.done"` attribute.
Default centering constraints, that terms should sum to zero over the covariates, are produced unless the smooth constructor includes a matrix `C` of constraints. To have no constraints (in which case you had better have a full rank penalty!) the matrix `C` should have no rows. There is an option to use centering constraints that generate no, or limited, infill if the smoother has a sparse model matrix.
`smoothCon` returns a list of smooths because factor `by` variables result in multiple copies of a smooth, each multiplied by the dummy variable associated with one factor level. `smoothCon` modifies the smooth object labels in the presence of `by` variables, to ensure that they are unique. It also stores the level of a `by` variable factor associated with each smooth, for later use by `PredictMat`.
The parameterization used by `<gam>` can be controlled via `<gam.control>`.
### Value
From `smoothCon` a list of `smooth` objects returned by the appropriate `<smooth.construct>` method function. If constraints are to be absorbed then the objects will have attributes `"qrc"` and `"nCons"`. `"nCons"` is the number of constraints. `"qrc"` is usually the qr decomposition of the constraint matrix (returned by `[qr](../../matrix/html/qr-methods)`), but if it is a single positive integer it is the index of the coefficient to set to zero, and if it is a negative number then this indicates that the parameters are to sum to zero.
For `PredictMat` a matrix which will map the parameters associated with the smooth to the vector of values of the smooth evaluated at the covariate values given in `data`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam.control>`, `<smooth.construct>`, `[Predict.matrix](predict.matrix)`
### Examples
```
## example of using smoothCon and PredictMat to set up a basis
## to use for regression and make predictions using the result
library(MASS) ## load for mcycle data.
## set up a smoother...
sm <- smoothCon(s(times,k=10),data=mcycle,knots=NULL)[[1]]
## use it to fit a regression spline model...
beta <- coef(lm(mcycle$accel~sm$X-1))
with(mcycle,plot(times,accel)) ## plot data
times <- seq(0,60,length=200) ## create prediction times
## Get matrix mapping beta to spline prediction at 'times'
Xp <- PredictMat(sm,data.frame(times=times))
lines(times,Xp%*%beta) ## add smooth to plot
## Same again but using a penalized regression spline of
## rank 30....
sm <- smoothCon(s(times,k=30),data=mcycle,knots=NULL)[[1]]
E <- t(mroot(sm$S[[1]])) ## square root penalty
X <- rbind(sm$X,0.1*E) ## augmented model matrix
y <- c(mcycle$accel,rep(0,nrow(E))) ## augmented data
beta <- coef(lm(y~X-1)) ## fit penalized regression spline
Xp <- PredictMat(sm,data.frame(times=times)) ## prediction matrix
with(mcycle,plot(times,accel)) ## plot data
lines(times,Xp%*%beta) ## overlay smooth
```
r None
`trind.generator` Generates index arrays for upper triangular storage
----------------------------------------------------------------------
### Description
Generates index arrays for upper triangular storage up to order four. Useful when working with higher order derivatives, which generate symmetric arrays. Mainly intended for internal use.
### Usage
```
trind.generator(K = 2)
```
### Arguments
| | |
| --- | --- |
| `K` | positive integer determining the size of the array. |
### Details
Suppose that `m=1` and you fill an array using code like `for(i in 1:K) for(j in i:K) for(k in j:K) for(l in k:K)
{a[,m] <- something; m <- m+1 }` and do this because actually the same "something" would be stored for any permutation of the indices i,j,k,l. Clearly in storage we have the restriction l>=k>=j>=i, but for access we want no restriction on the indices. `i4[i,j,k,l]` produces the appropriate `m` for unrestricted indices. `i3` and `i2` do the same for 3- and 2-dimensional arrays.
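As a concrete sketch of this scheme (3d case, with illustrative string contents standing in for "something"), the restricted fill loop and the unrestricted access look like:

```r
library(mgcv)
K <- 3
tri <- trind.generator(K)
## fill storage exactly as in the restricted loop described above...
a <- character(0); m <- 1
for (i in 1:K) for (j in i:K) for (k in j:K) {
  a[m] <- paste(i, j, k); m <- m + 1
}
## ...then access with unrestricted indices via i3: any permutation
## of (1,2,3) retrieves the same stored element
a[tri$i3[1, 2, 3]]  ## "1 2 3"
a[tri$i3[3, 1, 2]]  ## "1 2 3"
```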
### Value
A list where the entries `i1` to `i4` are arrays in up to four dimensions, containing K indexes along each dimension.
### Author(s)
Simon N. Wood <[email protected]>.
### Examples
```
library(mgcv)
A <- trind.generator(3)
# All permutations of c(1, 2, 3) point to the same index (5)
A$i3[1, 2, 3]
A$i3[2, 1, 3]
A$i3[2, 3, 1]
A$i3[3, 1, 2]
A$i3[1, 3, 2]
```
r None
`mgcv-FAQ` Frequently Asked Questions for package mgcv
-------------------------------------------------------
### Description
This page provides answers to some of the questions that get asked most often about mgcv
### FAQ list
1. **How can I compare gamm models?** In the identity link normal errors case, AIC and hypothesis-testing based methods are fine. Otherwise it is best to work out a strategy based on `<summary.gam>`. Alternatively, simple random effects can be fitted with `<gam>`, which makes comparison straightforward. Package `gamm4` is an alternative, which allows AIC type model selection for generalized models.
2. **How do I get the equation of an estimated smooth?** This slightly misses the point of semi-parametric modelling: the idea is that we estimate the form of the function from data without assuming that it has a particular simple functional form. Of course for practical computation the functions do have underlying mathematical representations, but they are not very helpful, when written down. If you do need the functional forms then see chapter 5 of Wood (2017). However for most purposes it is better to use `<predict.gam>` to evaluate the function for whatever argument values you need. If derivatives are required then the simplest approach is to use finite differencing (which also allows SEs etc to be calculated).
3. **Some of my smooths are estimated to be straight lines and their confidence intervals vanish at some point in the middle. What is wrong?** Nothing. Smooths are subject to sum-to-zero identifiability constraints. If a smooth is estimated to be a straight line then it consequently has one degree of freedom, and there is no choice about where it passes through zero — so the CI must vanish at that point.
4. **How do I test whether a smooth is significantly different from a straight line**. See `[tprs](smooth.construct.tp.smooth.spec)` and the example therein.
5. **An example from an mgcv helpfile gives an error - is this a bug?** It might be, but first please check that the version of mgcv you have loaded into R corresponds to the version from which the helpfile came. Many such problems are caused by trying to run code only supported in a later mgcv version in an earlier version. Another possibility is that you have an object loaded whose name clashes with an mgcv function (for example you are trying to use the mgcv `multinom` function, but have another object called `multinom` loaded.)
6. **Some code from Wood (2006) causes an error: why?** The book was written using mgcv version 1.3. To allow for REML estimation of smoothing parameters in versions 1.5, some changes had to be made to the syntax. In particular the function `gam.method` no longer exists. The smoothness selection method (GCV, REML etc) is now controlled by the `method` argument to `gam` while the optimizer is selected using the `optimizer` argument. See `<gam>` for details.
7. **Why is a model object saved under a previous mgcv version not usable with the current mgcv version?** I'm sorry about this issue, I know it's really annoying. Here's my defence. Each mgcv version is run through an extensive test suite before release, to ensure that it gives the same results as before, unless there are good statistical reasons why not (e.g. improvements to p-value approximation, fixing of an error). However it is sometimes necessary to modify the internal structure of model objects in a way that makes an old style object unusable with a newer version. For example, bug fixes or new R features sometimes require changes in the way that things are computed which in turn require modification of the object structure. Similarly improvements, such as the ability to compute smoothing parameters by RE/ML require object level changes. The only fix to this problem is to access the old object using the original mgcv version (available on CRAN), or to recompute the fit using the current mgcv version.
8. **When using `gamm` or `gamm4`, the reported AIC is different for the `gam` object and the `lme` or `lmer` object. Why is this?** There are several reasons for this. The most important is that the models being used are actually different in the two representations. When treating the GAM as a mixed model, you are implicitly assuming that if you gathered a replicate dataset, the smooths in your model would look completely different to the smooths from the original model, except for having the same degree of smoothness. Technically you would expect the smooths to be drawn afresh from their distribution under the random effects model. When viewing the gam from the usual penalized regression perspective, you would expect smooths to look broadly similar under replication of the data. i.e. you are really using a Bayesian model for the smooths, rather than a random effects model (it's just that the frequentist random effects and Bayesian computations happen to coincide for computing the estimates). As a result of the different assumptions about the data generating process, AIC model comparisons can give rather different answers depending on the model adopted. Which you use should depend on which model you really think is appropriate. In addition the computations of the AICs are different. The mixed model AIC uses the marginal likelihood and the corresponding number of model parameters. The gam model uses the penalized likelihood and the effective degrees of freedom.
9. **What does 'mgcv' stand for?** '**M**ixed **G**AM **C**omputation **V**ehicle', is my current best effort (let me know if you can do better). Originally it stood for ‘Multiple GCV’, which has long since ceased to be usefully descriptive, (and I can't really change 'mgcv' now without causing disruption). On a bad inbox day '**M**ad **G**AM **C**omputing **V**ulture'.
10. **My new method is failing to beat mgcv, what can I do?** If speed is the problem, then make sure that you use the slowest basis possible (`"tp"`) with a large sample size, and experiment with different optimizers to find one that is slow for your problem. For prediction error/MSE, then leaving the smoothing basis dimensions at their arbitrary defaults, when these are inappropriate for the problem setting, is a good way of reducing performance. Similarly, using p-splines in place of derivative penalty based splines will often shave a little more from the performance here. Unlike REML/ML, prediction error based smoothness selection criteria such as Mallows Cp and GCV often produce a small proportion of severe overfits, so careful choice of smoothness selection method can help further. In particular GCV etc. usually result in worse confidence interval and p-value performance than ML or REML. If all this fails, try using a really odd simulation setup for which mgcv is clearly not suited: for example, poor performance is almost guaranteed for small noisy datasets with large numbers of predictors.
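As a sketch of the finite differencing suggested in item 2 (the simulation setup below is invented purely for illustration), derivatives of a fitted smooth can be approximated via `<predict.gam>`:

```r
## Approximate df/dx for a fitted smooth by finite differencing
## predictions at x0 and x0 + eps.
library(mgcv)
set.seed(3)
n <- 200; x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n) * 0.2
b <- gam(y ~ s(x))
eps <- 1e-5                               ## finite difference interval
x0 <- seq(0.1, 0.9, length = 50)
f0 <- predict(b, data.frame(x = x0))
f1 <- predict(b, data.frame(x = x0 + eps))
fd <- (f1 - f0) / eps                     ## approximate derivative
## compare with the true derivative, 2*pi*cos(2*pi*x)...
plot(x0, fd, type = "l")
lines(x0, 2 * pi * cos(2 * pi * x0), col = 2)
```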
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2006) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC Press.
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
| programming_docs |
r None
`notExp` Functions for better-than-log positive parameterization
-----------------------------------------------------------------
### Description
It is common practice in statistical optimization to use log-parameterizations when a parameter ought to be positive. i.e. if an optimization parameter `a` should be non-negative then we use `a=exp(b)` and optimize with respect to the unconstrained parameter `b`. This often works well, but it does imply a rather limited working range for `b`: using 8 byte doubles, for example, if `b`'s magnitude gets much above 700 then `a` overflows or underflows. This can cause problems for numerical optimization methods.
`notExp` is a monotonic function for mapping the real line into the positive real line with much less extreme underflow and overflow behaviour than `exp`. It is a piece-wise function, but is continuous to second derivative: see the source code for the exact definition, and the example below to see what it looks like.
`notLog` is the inverse function of `notExp`.
The major use of these functions was originally to provide more robust `pdMat` classes for `lme` for use by `<gamm>`. Currently the `[notExp2](notexp2)` and `[notLog2](notexp2)` functions are used in their place, as a result of changes to the nlme optimization routines.
### Usage
```
notExp(x)
notLog(x)
```
### Arguments
| | |
| --- | --- |
| `x` | Argument array of real numbers (`notExp`) or positive real numbers (`notLog`). |
### Value
An array of function values evaluated at the supplied argument values.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`[pdTens](pdtens)`, `[pdIdnot](pdidnot)`, `<gamm>`
### Examples
```
## Illustrate the notExp function:
## less steep than exp, but still monotonic.
require(mgcv)
x <- -100:100/10
op <- par(mfrow=c(2,2))
plot(x,notExp(x),type="l")
lines(x,exp(x),col=2)
plot(x,log(notExp(x)),type="l")
lines(x,log(exp(x)),col=2) # redundancy intended
x <- x/4
plot(x,notExp(x),type="l")
lines(x,exp(x),col=2)
plot(x,log(notExp(x)),type="l")
lines(x,log(exp(x)),col=2) # redundancy intended
par(op)
range(notLog(notExp(x))-x) # show that inverse works!
```
`family.mgcv` Distribution families in mgcv
--------------------------------------------
### Description
As well as the standard families documented in `[family](../../stats/html/family)` (see also `[glm](../../stats/html/glm)`) which can be used with functions `<gam>`, `<bam>` and `<gamm>`, `mgcv` also supplies some extra families, most of which are currently only usable with `<gam>`, although some can also be used with `<bam>`. These are described here.
### Details
The following families are in the exponential family given the value of a single parameter. They are usable with all modelling functions.
* `[Tweedie](tweedie)` An exponential family distribution for which the variance of the response is given by the mean response to the power `p`. `p` is in (1,2) and must be supplied. Alternatively, see `[tw](tweedie)` to estimate `p` (`gam` only).
* `<negbin>` The negative binomial. Alternatively see `[nb](negbin)` to estimate the `theta` parameter of the negative binomial (`gam` only).
The following families are for regression type models dependent on a single linear predictor, and with a log likelihood which is a sum of independent terms, each corresponding to a single response observation. Usable with `<gam>`, with smoothing parameter estimation by `"REML"` or `"ML"` (the latter does not integrate the unpenalized and parametric effects out of the marginal likelihood optimized for the smoothing parameters). Also usable with `<bam>`.
* `<ocat>` for ordered categorical data.
* `[tw](tweedie)` for Tweedie distributed data, when the power parameter relating the variance to the mean is to be estimated.
* `[nb](negbin)` for negative binomial data when the `theta` parameter is to be estimated.
* `[betar](beta)` for proportions data on (0,1) when the binomial is not appropriate.
* `<scat>` scaled t for heavy tailed data that would otherwise be modelled as Gaussian.
* `[ziP](zip)` for zero inflated Poisson data, when the zero inflation rate depends simply on the Poisson mean.
The following families implement more general model classes. Usable only with `<gam>` and only with REML smoothing parameter estimation.
* `[cox.ph](coxph)` the Cox Proportional Hazards model for survival data.
* `<gammals>` a gamma location-scale model, where the mean and standard deviation are modelled with separate linear predictors.
* `<gaulss>` a Gaussian location-scale model where the mean and the standard deviation are both modelled using smooth linear predictors.
* `<gevlss>` a generalized extreme value (GEV) model where the location, scale and shape parameters are each modelled using a linear predictor.
* `<gumbls>` a Gumbel location-scale model (2 linear predictors).
* `<shash>` Sinh-arcsinh location scale and shape model family (4 linear predictors).
* `<ziplss>` a ‘two-stage’ zero inflated Poisson model, in which 'potential-presence' is modelled with one linear predictor, and Poisson mean abundance given potential presence is modelled with a second linear predictor.
* `<mvn>`: multivariate normal additive models.
* `<multinom>`: multinomial logistic regression, for unordered categorical responses.
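As an informal illustration of using one of the extra families, the sketch below fits a negative binomial model with `theta` estimated via `nb()`. The simulated data and seed are arbitrary choices for illustration, not taken from this help page.

```r
## hedged sketch: an extra mgcv family in a gam() call.
require(mgcv)
set.seed(1)
## simulate some overdispersion-free Poisson-type data for illustration...
dat <- gamSim(1, n = 400, dist = "poisson", scale = .2)
## nb() estimates the negative binomial theta as part of REML fitting
b <- gam(y ~ s(x0) + s(x2), family = nb(), data = dat, method = "REML")
summary(b)
```

Families such as `tw()` or `scat()` can be substituted in the `family` argument in the same way.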
### Author(s)
Simon N. Wood ([email protected]) & Natalya Pya
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
`ginla` GAM Integrated Nested Laplace Approximation Newton Enhanced
--------------------------------------------------------------------
### Description
Apply Integrated Nested Laplace Approximation (INLA, Rue et al. 2009) to models estimable by `<gam>` or `<bam>`, using the INLA variant described in Wood (2019). Produces marginal posterior densities for each coefficient, selected coefficients or linear transformations of the coefficient vector.
### Usage
```
ginla(G,A=NULL,nk=16,nb=100,J=1,interactive=FALSE,int=0,approx=0)
```
### Arguments
| | |
| --- | --- |
| `G` | A pre-fit gam object, as produced by `gam(...,fit=FALSE)` or `bam(...,discrete=TRUE,fit=FALSE)`. |
| `A` | Either a matrix of transforms of the coefficients that are of interest, or an array of indices of the parameters of interest. If `NULL` then distributions are produced for all coefficients. |
| `nk` | Number of values of each coefficient at which to evaluate its log marginal posterior density. These points are then spline interpolated. |
| `nb` | Number of points at which to evaluate posterior density of coefficients for returning as a gridded function. |
| `J` | How many determinant updating steps to take in the log determinant approximation step. Not recommended to increase this. |
| `interactive` | If this is `>0` or `TRUE` then every approximate posterior is plotted in red, overlaid on the simple Gaussian approximate posterior. If `2` then waits for user to press return between each plot. Useful for judging whether anything is gained by using INLA approach. |
| `int` | 0 to skip integration and just use the posterior modal smoothing parameter. >0 for integration using the CCD approach proposed in Rue et al. (2009). |
| `approx` | 0 for full approximation; 1 to update Hessian, but use approximate modes; 2 as 1 and assume constant Hessian. See details. |
### Details
Let *b*, *h* and *y* denote the model coefficients, hyperparameters/smoothing parameters and response data, respectively. In principle, INLA employs Laplace approximations for *p(b\_i|h,y)* and *p(h|y)* and then obtains the marginal posterior distribution *p(b\_i|y)* by integrating the approximations to *p(b\_i|h,y)p(h|y)* w.r.t. *h* (marginals for the hyperparameters can also be produced). In practice the Laplace approximation for *p(b\_i|h,y)* is too expensive to compute for each *b\_i* and must itself be approximated. To this end, there are two quantities that have to be computed: the posterior mode *b\*|b\_i* and the determinant of the Hessian of the joint log density *log p(b,h,y)* w.r.t. *b* at the mode. Rue et al. (2009) originally approximated the posterior conditional mode by the conditional mode implied by a simple Gaussian approximation to the posterior *p(b|y)*. They then approximated the log determinant of the Hessian as a function of *b\_i* using a first order Taylor expansion, which is cheap to compute for the sparse model representation that they use, but not when using the dense low rank basis expansions used by `<gam>`. They also offer a more expensive alternative approximation based on computing the log determinant with respect only to those elements of *b* with sufficiently high correlation with *b\_i* according to the simple Gaussian posterior approximation: efficiency again seems to rest on sparsity. Wood (2019) suggests computing the required posterior modes exactly, and basing the log determinant approximation on a BFGS update of the Hessian at the unconditional mode. The latter is efficient with or without sparsity, whereas the former is a ‘for free’ improvement. Both steps are efficient because it is cheap to obtain the Cholesky factor of *H[-i,-i]* from that of *H* - see `[choldrop](chol.down)`. This is the approach taken by this routine.
The `approx` argument allows two further approximations to speed up computations. For `approx==1` the exact posterior conditional modes are not used, but instead the conditional modes implied by the simple Gaussian posterior approximation. For `approx==2` the same approximation is used for the modes and the Hessian is assumed constant. The latter is quite fast as no log joint density gradient evaluations are required.
Note that for many models the INLA estimates are very close to the usual Gaussian approximation to the posterior; the `interactive` argument is useful for investigating this issue.
`<bam>` models are only supported with the `discrete=TRUE` option. The `discrete=FALSE` approach would be too inefficient. AR1 models are not supported (related arguments are simply ignored).
### Value
A list with elements `beta` and `density`, both of which are matrices. Each row relates to one coefficient (or linear coefficient combination) of interest. Both matrices have `nb` columns. If `int!=0` then a further element `reml` gives the integration weights used in the CCD integration, with the central point weight given first.
### WARNINGS
This routine is still somewhat experimental, so details are liable to change. Also currently not all steps are optimally efficient.
The routine is written for relatively expert users.
`ginla` is not designed to deal with rank deficient models.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Rue, H, Martino, S. & Chopin, N. (2009) Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations (with discussion). Journal of the Royal Statistical Society, Series B. 71: 319-392.
Wood (2019) Simplified Integrated Laplace Approximation. In press Biometrika.
### Examples
```
require(mgcv); require(MASS)
## example using a scale location model for the motorcycle data. A simple plotting
## routine is produced first...
plot.inla <- function(x,inla,k=1,levels=c(.025,.1,.5,.9,.975),
lcol = c(2,4,4,4,2),lwd = c(1,1,2,1,1),lty=c(1,1,1,1,1),
xlab="x",ylab="y",cex.lab=1.5) {
## a simple effect plotter, when distributions of function values of
## 1D smooths have been computed
require(splines)
p <- length(x)
betaq <- matrix(0,length(levels),p) ## storage for beta quantiles
for (i in 1:p) { ## work through x and betas
j <- i + k - 1
p <- cumsum(inla$density[j,])*(inla$beta[j,2]-inla$beta[j,1])
## getting quantiles of function values...
betaq[,i] <- approx(p,y=inla$beta[j,],levels)$y
}
xg <- seq(min(x),max(x),length=200)
ylim <- range(betaq)
ylim <- 1.1*(ylim-mean(ylim))+mean(ylim)
for (j in 1:length(levels)) { ## plot the quantiles
din <- interpSpline(x,betaq[j,])
if (j==1) {
plot(xg,predict(din,xg)$y,ylim=ylim,type="l",col=lcol[j],
xlab=xlab,ylab=ylab,lwd=lwd[j],cex.lab=1.5,lty=lty[j])
} else lines(xg,predict(din,xg)$y,col=lcol[j],lwd=lwd[j],lty=lty[j])
}
} ## plot.inla
## set up the model with a `gam' call...
G <- gam(list(accel~s(times,k=20,bs="ad"),~s(times)),
data=mcycle,family=gaulss(),fit=FALSE)
b <- gam(G=G,method="REML") ## regular GAM fit for comparison
## Now use ginla to get posteriors of estimated effect values
## at evenly spaced times. Create A matrix for this...
rat <- range(mcycle$times)
pd0 <- data.frame(times=seq(rat[1],rat[2],length=20))
X0 <- predict(b,newdata=pd0,type="lpmatrix")
X0[,21:30] <- 0
pd1 <- data.frame(times=seq(rat[1],rat[2],length=10))
X1 <- predict(b,newdata=pd1,type="lpmatrix")
X1[,1:20] <- 0
A <- rbind(X0,X1) ## A maps coefs to required function values
## call ginla. Set int to 1 for integrated version.
## Set interactive = 1 or 2 to plot marginal posterior distributions
## (red) and simple Gaussian approximation (black).
inla <- ginla(G,A,int=0)
par(mfrow=c(1,2),mar=c(5,5,1,1))
fv <- predict(b,se=TRUE) ## usual Gaussian approximation, for comparison
## plot inla mean smooth effect...
plot.inla(pd0$times,inla,k=1,xlab="time",ylab=expression(f[1](time)))
## overlay simple Gaussian equivalent (in grey) ...
points(mcycle$times,mcycle$accel,col="grey")
lines(mcycle$times,fv$fit[,1],col="grey",lwd=2)
lines(mcycle$times,fv$fit[,1]+2*fv$se.fit[,1],lty=2,col="grey",lwd=2)
lines(mcycle$times,fv$fit[,1]-2*fv$se.fit[,1],lty=2,col="grey",lwd=2)
## same for log sd smooth...
plot.inla(pd1$times,inla,k=21,xlab="time",ylab=expression(f[2](time)))
lines(mcycle$times,fv$fit[,2],col="grey",lwd=2)
lines(mcycle$times,fv$fit[,2]+2*fv$se.fit[,2],col="grey",lty=2,lwd=2)
lines(mcycle$times,fv$fit[,2]-2*fv$se.fit[,2],col="grey",lty=2,lwd=2)
## ... notice some real differences for the log sd smooth, especially
## at the lower and upper ends of the time interval.
```
`smooth.construct.gp.smooth.spec` Low rank Gaussian process smooths
--------------------------------------------------------------------
### Description
Gaussian process/kriging models based on simple covariance functions can be written in a very similar form to thin plate and Duchon spline models (e.g. Handcock, Meier, Nychka, 1994), and low rank versions produced by the eigen approximation method of Wood (2003). Kammann and Wand (2003) suggest a particularly simple form of the Matern covariance function with only a single smoothing parameter to estimate, and this class implements this and other similar models.
Usually invoked by an `s(...,bs="gp")` term in a `gam` formula. Argument `m` selects the covariance function, sets the range parameter and any power parameter. If `m` is not supplied then it defaults to `NA` and the covariance function suggested by Kammann and Wand (2003) along with their suggested range parameter is used. Otherwise `abs(m[1])` between 1 and 5 selects the correlation function from, respectively, spherical, power exponential, and Matern with kappa = 1.5, 2.5 or 3.5. The sign of `m[1]` determines whether a linear trend in the covariates is added to the Gaussian process (positive), or not (negative). The latter ensures stationarity. `m[2]`, if present, specifies the range parameter, with non-positive or absent indicating that the Kammann and Wand estimate should be used. `m[3]` can be used to specify the power for the power exponential which otherwise defaults to 1.
### Usage
```
## S3 method for class 'gp.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'gp.smooth'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(...,bs="gp",...)`. |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL` |
### Details
Let *r>0* be the range parameter, *0<k<=2* and *d* denote the distance between two points. Then the correlation functions indexed by `m[1]` are:
1. *1-1.5d/r+0.5(d/r)^3* if *d<=r* and 0 otherwise.
2. *exp(-(d/r)^k)*.
3. *exp(-d/r)(1+d/r)*.
4. *exp(-d/r)(1+d/r + (d/r)^2/3)*.
5. *exp(-d/r)(1+d/r+2(d/r)^2/5+(d/r)^3/15)*.
See Fahrmeir et al. (2013) section 8.1.6, for example. Note that setting `r` to too small a value will lead to unpleasant results, as most points become all but independent, especially for the spherical model. (Note: Wood 2017, Figure 5.20 right, is based on a buggy implementation.)
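The shapes of the correlation functions listed above can be visualised with a small helper. This function is not part of mgcv; it is just a sketch transcribing the five formulae above, with `type` playing the role of `abs(m[1])`:

```r
## sketch of the correlation functions indexed by m[1] (not part of mgcv)
gp.corr <- function(d, r = 1, type = 1, k = 1) {
  u <- d / r
  switch(type,
    ifelse(u <= 1, 1 - 1.5 * u + 0.5 * u^3, 0), ## 1: spherical
    exp(-u^k),                                  ## 2: power exponential
    exp(-u) * (1 + u),                          ## 3: Matern kappa = 1.5
    exp(-u) * (1 + u + u^2 / 3),                ## 4: Matern kappa = 2.5
    exp(-u) * (1 + u + 2 * u^2 / 5 + u^3 / 15)) ## 5: Matern kappa = 3.5
}
d <- seq(0, 3, length = 200)
plot(d, gp.corr(d, r = 1, type = 1), type = "l", ylab = "correlation")
for (ty in 2:5) lines(d, gp.corr(d, r = 1, type = ty), col = ty)
```

Plotting with a much smaller `r` shows the rapid decay responsible for the near-independence problem mentioned above.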
The default basis dimension for this class is `k=M+k.def` where `M` is the null space dimension (dimension of unpenalized function space) and `k.def` is 10 for dimension 1, 30 for dimension 2 and 100 for higher dimensions. This is essentially arbitrary, and should be checked, but as with all penalized regression smoothers, results are statistically insensitive to the exact choice, provided it is not so small that it forces oversmoothing (the smoother's degrees of freedom are controlled primarily by its smoothing parameter).
The constructor is not normally called directly, but is rather used internally by `<gam>`. To use for basis setup it is recommended to use `[smooth.construct2](smooth.construct)`.
For these classes the specification `object` will contain information on how to handle large datasets in their `xt` field. The default is to randomly subsample 2000 ‘knots’ from which to produce a reduced rank eigen approximation to the full basis, if the number of unique predictor variable combinations is in excess of 2000. The default can be modified via the `xt` argument to `<s>`. This is supplied as a list with elements `max.knots` and `seed` containing a number to use in place of 2000, and the random number seed to use (either can be missing). Note that the random sampling will not affect the state of R's RNG.
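A hedged sketch of overriding the subsampling defaults via `xt` (the simulated data, `k` value and the particular `max.knots`/`seed` numbers are illustrative choices only):

```r
## sketch: limiting the knots used for "gp" basis setup on larger data
require(mgcv)
set.seed(6)
eg <- gamSim(2, n = 3000, scale = .05)   ## simulated 2D example data
b <- gam(y ~ s(x, z, bs = "gp", k = 50,
               xt = list(max.knots = 1000, seed = 13)),
         data = eg$data)
```

Here basis construction uses at most 1000 randomly chosen unique covariate combinations, rather than the default 2000.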
For these bases `knots` has two uses. Firstly, as mentioned already, for large datasets the calculation of the `tp` basis can be time-consuming. The user can retain most of the advantages of the approach by supplying a reduced set of covariate values from which to obtain the basis - typically the number of covariate values used will be substantially smaller than the number of data, and substantially larger than the basis dimension, `k`. This approach is the one taken automatically if the number of unique covariate values (combinations) exceeds `max.knots`. The second possibility is to avoid the eigen-decomposition used to find the spline basis altogether and simply use the basis implied by the chosen knots: this will happen if the number of knots supplied matches the basis dimension, `k`. For a given basis dimension the second option is faster, but gives poorer results (and the user must be quite careful in choosing knot locations).
### Value
An object of class `"gp.smooth"`. In addition to the usual elements of a smooth class documented under `<smooth.construct>`, this object will contain:
| | |
| --- | --- |
| `shift` | A record of the shift applied to each covariate in order to center it around zero and avoid any co-linearity problems that might otherwise occur in the penalty null space basis of the term. |
| `Xu` | A matrix of the unique covariate combinations for this smooth (the basis is constructed by first stripping out duplicate locations). |
| `UZ` | The matrix mapping the smoother parameters back to the parameters of a full GP smooth. |
| `null.space.dimension` | The dimension of the space of functions that have zero wiggliness according to the wiggliness penalty for this term. |
| `gp.defn` | the type, range parameter and power parameter defining the correlation function. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Fahrmeir, L., T. Kneib, S. Lang and B. Marx (2013) Regression, Springer.
Handcock, M. S., K. Meier and D. Nychka (1994) Journal of the American Statistical Association, 89: 401-403
Kammann, E. E. and M.P. Wand (2003) Geoadditive Models. Applied Statistics 52(1):1-18.
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood, S.N. (2017) Generalized Additive Models: an introduction with R (2nd ed). CRC/Taylor and Francis
### See Also
`[tprs](smooth.construct.tp.smooth.spec)`
### Examples
```
require(mgcv)
eg <- gamSim(2,n=200,scale=.05)
attach(eg)
op <- par(mfrow=c(2,2),mar=c(4,4,1,1))
b0 <- gam(y~s(x,z,k=50),data=data) ## tps
b <- gam(y~s(x,z,bs="gp",k=50),data=data) ## Matern spline default range
b1 <- gam(y~s(x,z,bs="gp",k=50,m=c(1,.5)),data=data) ## spherical
persp(truth$x,truth$z,truth$f,theta=30) ## truth
vis.gam(b0,theta=30)
vis.gam(b,theta=30)
vis.gam(b1,theta=30)
## compare non-stationary (b1) and stationary (b2)
b2 <- gam(y~s(x,z,bs="gp",k=50,m=c(-1,.5)),data=data) ## sph stationary
vis.gam(b1,theta=30);vis.gam(b2,theta=30)
x <- seq(-1,2,length=200); z <- rep(.5,200)
pd <- data.frame(x=x,z=z)
plot(x,predict(b1,pd),type="l");lines(x,predict(b2,pd),col=2)
abline(v=c(0,1))
plot(predict(b1),predict(b2))
detach(eg)
```
| programming_docs |
`k.check` Checking smooth basis dimension
------------------------------------------
### Description
Takes a fitted `gam` object produced by `gam()` and runs diagnostic tests of whether the basis dimension choices are adequate.
### Usage
```
k.check(b, subsample=5000, n.rep=400)
```
### Arguments
| | |
| --- | --- |
| `b` | a fitted `gam` object as produced by `<gam>()`. |
| `subsample` | above this number of data, testing uses a random sub-sample of data of this size. |
| `n.rep` | how many re-shuffles to do to get p-value for k testing. |
### Details
The test of whether the basis dimension for a smooth is adequate (Wood, 2017, section 5.9) is based on computing an estimate of the residual variance based on differencing residuals that are near neighbours according to the (numeric) covariates of the smooth. This estimate divided by the residual variance is the `k-index` reported. The further below 1 this is, the more likely it is that there is missed pattern left in the residuals. The `p-value` is computed by simulation: the residuals are randomly re-shuffled `n.rep` times to obtain the null distribution of the differencing variance estimator, if there is no pattern in the residuals. For models fitted to more than `subsample` data, the tests are based on `subsample` randomly sampled data. Low p-values may indicate that the basis dimension, `k`, has been set too low, especially if the reported `edf` is close to `k'`, the maximum possible EDF for the term. Note the disconcerting fact that if the test statistic itself is based on random resampling and the null is true, then the associated p-values will of course vary widely from one replicate to the next. Currently smooths of factor variables are not supported and will give an `NA` p-value.
Doubling a suspect `k` and re-fitting is sensible: if the reported `edf` increases substantially then you may have been missing something in the first fit. Of course p-values can be low for reasons other than a too low `k`. See `<choose.k>` for fuller discussion.
### Value
A matrix containing the output of the tests described above.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<choose.k>`, `<gam>`, `<gam.check>`
### Examples
```
library(mgcv)
set.seed(0)
dat <- gamSim(1,n=200)
b<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
plot(b,pages=1)
k.check(b)
```
`formula.gam` GAM formula
--------------------------
### Description
Description of `<gam>` formula (see Details), and how to extract it from a fitted `gam` object.
### Usage
```
## S3 method for class 'gam'
formula(x,...)
```
### Arguments
| | |
| --- | --- |
| `x` | fitted model objects of class `gam` (see `[gamObject](gamobject)`) as produced by `gam()`. |
| `...` | un-used in this case |
### Details
`<gam>` will accept a formula or, with some families, a list of formulae. Other `mgcv` modelling functions will not accept a list. The list form provides a mechanism for specifying several linear predictors, and allows these to share terms: see below.
The formulae supplied to `<gam>` are exactly like those supplied to `[glm](../../stats/html/glm)` except that smooth terms, `<s>`, `<te>`, `[ti](te)` and `<t2>` can be added to the right hand side (and `.` is not supported in `gam` formulae).
Smooth terms are specified by expressions of the form:
`s(x1,x2,...,k=12,fx=FALSE,bs="tp",by=z,id=1)`
where `x1`, `x2`, etc. are the covariates which the smooth is a function of, and `k` is the dimension of the basis used to represent the smooth term. If `k` is not specified then basis specific defaults are used. Note that these defaults are essentially arbitrary, and it is important to check that they are not so small that they cause oversmoothing (too large just slows down computation). Sometimes the modelling context suggests sensible values for `k`, but if not informal checking is easy: see `<choose.k>` and `<gam.check>`.
`fx` is used to indicate whether or not this term should be unpenalized, and therefore have a fixed number of degrees of freedom set by `k` (almost always `k-1`). `bs` indicates the basis to use for the smooth: the built in options are described in `<smooth.terms>`, and user defined smooths can be added (see `[user.defined.smooth](smooth.construct)`). If `bs` is not supplied then the default `"tp"` (`[tprs](smooth.construct.tp.smooth.spec)`) basis is used. `by` can be used to specify a variable by which the smooth should be multiplied. For example `gam(y~s(x,by=z))` would specify a model *E(y)=f(x)z* where *f(.)* is a smooth function. The `by` option is particularly useful for models in which different functions of the same variable are required for each level of a factor and for ‘varying coefficient models’: see `<gam.models>`. `id` is used to give smooths identities: smooths with the same identity have the same basis, penalty and smoothing parameter (but different coefficients, so they are different functions).
An alternative for specifying smooths of more than one covariate is e.g.:
`te(x,z,bs=c("tp","tp"),m=c(2,3),k=c(5,10))`
which would specify a tensor product smooth of the two covariates `x` and `z` constructed from marginal t.p.r.s. bases of dimension 5 and 10 with marginal penalties of order 2 and 3. Any combination of basis types is possible, as is any number of covariates. `<te>` provides further information. `[ti](te)` terms are a variant designed to be used as interaction terms when the main effects (and any lower order interactions) are present. `<t2>` produces tensor product smooths that are the natural low rank analogue of smoothing spline anova models.
`s`, `te`, `ti` and `t2` terms accept an `sp` argument of supplied smoothing parameters: positive values are taken as fixed values to be used, negative to indicate that the parameter should be estimated. If `sp` is supplied then it over-rides whatever is in the `sp` argument to `gam`, if it is not supplied then it defaults to all negative, but does not over-ride the `sp` argument to `gam`.
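A hedged sketch of mixing fixed and estimated smoothing parameters via the `sp` argument to `s` (the simulated data and the particular `sp` value are illustrative choices only):

```r
## sketch: sp fixed for the first smooth, estimated for the second
require(mgcv)
set.seed(0)
dat <- gamSim(1, n = 200)                       ## simulated example data
b <- gam(y ~ s(x0, sp = 0.01) + s(x1), data = dat)
b$sp   ## first element is the fixed value, second was estimated
```

Supplying `sp = -1` instead would explicitly request estimation for that term.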
Formulae can involve nested or “overlapping” terms such as
`y~s(x)+s(z)+s(x,z)` or `y~s(x,z)+s(z,v)`
but nested models should really be set up using `[ti](te)` terms: see `<gam.side>` for further details and examples.
Smooth terms in a `gam` formula will accept matrix arguments as covariates (and corresponding `by` variable), in which case a ‘summation convention’ is invoked. Consider the example of `s(X,Z,by=L)` where `X`, `Z` and `L` are n by m matrices. Let `F` be the n by m matrix that results from evaluating the smooth at the values in `X` and `Z`. Then the contribution to the linear predictor from the term will be `rowSums(F*L)` (note the element-wise multiplication). This convention allows the linear predictor of the GAM to depend on (a discrete approximation to) any linear functional of a smooth: see `<linear.functional.terms>` for more information and examples (including functional linear models/signal regression).
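The summation convention can be sketched on simulated data as follows. The data-generating function and weight matrix here are arbitrary illustrative choices:

```r
## sketch: matrix covariates invoke the summation convention
## true model: y_i = sum_j f(X[i,j]) * L[i,j] + noise
require(mgcv)
set.seed(2)
n <- 200; m <- 40
X <- matrix(runif(n * m), n, m)     ## matrix covariate
L <- matrix(1 / m, n, m)            ## weights: here a simple average
f <- function(x) sin(2 * pi * x)    ## 'true' function to recover
y <- rowSums(f(X) * L) + rnorm(n) * .05
b <- gam(y ~ s(X, by = L))          ## contribution is rowSums(F * L)
plot(b)                             ## estimate of f, up to centring
```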
Note that `gam` allows any term in the model formula to be penalized (possibly by multiple penalties), via the `paraPen` argument. See `<gam.models>` for details and example code.
When several formulae are provided in a list, then they can be used to specify multiple linear predictors for families for which this makes sense (e.g. `<mvn>`). The first formula in the list must include a response variable, but later formulae need not (depending on the requirements of the family). Let the linear predictors be indexed, 1 to d where d is the number of linear predictors, and the indexing is in the order in which the formulae appear in the list. It is possible to supply extra formulae specifying that several linear predictors should share some terms. To do this a formula is supplied in which the response is replaced by numbers specifying the indices of the linear predictors which will share the terms specified on the r.h.s. For example `1+3~s(x)+z-1` specifies that linear predictors 1 and 3 will share the terms `s(x)` and `z` (but we don't want an extra intercept, as this would usually be unidentifiable). Note that it is possible that a linear predictor only includes shared terms: it must still have its own formula, but the r.h.s. would simply be `-1` (e.g. `y ~ -1` or `~ -1`).
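A hedged sketch of the basic list-formula mechanism, using the `gaulss` family (two linear predictors: mean and log standard deviation); the simulated data are illustrative only:

```r
## sketch: a list of formulae, one per linear predictor
require(mgcv)
set.seed(4)
n <- 400; x0 <- runif(n); x1 <- runif(n)
y <- sin(2 * pi * x0) + rnorm(n, sd = exp(-1 + x1))  ## heteroscedastic
## first formula: mean; second: log sd (no response needed)
b <- gam(list(y ~ s(x0), ~ s(x1)), family = gaulss())
plot(b, pages = 1)
```

Shared terms would be requested by appending an extra formula such as `1+2~s(x2)-1` to the list, as described above.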
### Value
Returns the model formula, `x$formula`. Provided so that `anova` methods print an appropriate description of the model.
### WARNING
A `gam` formula should not refer to variables using e.g. `dat[["x"]]`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`<gam>`
`jagam` Just Another Gibbs Additive Modeller: JAGS support for mgcv.
---------------------------------------------------------------------
### Description
Facilities to auto-generate model specification code and associated data to simulate with GAMs in JAGS (or BUGS). This is useful for inference about models with complex random effects structure best coded in JAGS. It is a very inefficient approach to making inferences about standard GAMs. The idea is that `jagam` generates template JAGS code, and associated data, for the smooth part of the model. This template is then directly edited to include other stochastic components. After simulation with the resulting model, facilities are provided for plotting and prediction with the model smooth components.
### Usage
```
jagam(formula,family=gaussian,data=list(),file,weights=NULL,na.action,
offset=NULL,knots=NULL,sp=NULL,drop.unused.levels=TRUE,
control=gam.control(),centred=TRUE,sp.prior = "gamma",diagonalize=FALSE)
sim2jam(sam,pregam,edf.type=2,burnin=0)
```
### Arguments
| | |
| --- | --- |
| `formula` | A GAM formula (see `<formula.gam>` and also `<gam.models>`). This is exactly like the formula for a GLM except that smooth terms, `<s>`, `<te>`, `[ti](te)` and `<t2>` can be added to the right hand side to specify that the linear predictor depends on smooth functions of predictors (or linear functionals of these). |
| `family` | This is a family object specifying the distribution and link function to use. See `[glm](../../stats/html/glm)` and `[family](../../stats/html/family)` for more details. Currently only gaussian, poisson, binomial and Gamma families are supported, but the user can easily modify the assumed distribution in the JAGS code. |
| `data` | A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from `environment(formula)`: typically the environment from which `jagam` is called. |
| `file` | Name of the file to which JAGS model specification code should be written. See `[setwd](../../base/html/getwd)` for setting and querying the current working directory. |
| `weights` | prior weights on the data. |
| `na.action` | a function which indicates what should happen when the data contain ‘NA’s. The default is set by the ‘na.action’ setting of ‘options’, and is ‘na.fail’ if that is unset. The “factory-fresh” default is ‘na.omit’. |
| `offset` | Can be used to supply a model offset for use in fitting. Note that this offset will always be completely ignored when predicting, unlike an offset included in `formula`: this conforms to the behaviour of `lm` and `glm`. |
| `control` | A list of fit control parameters to replace defaults returned by `<gam.control>`. Any control parameters not supplied stay at their default values. These have little effect on `jagam`. |
| `knots` | this is an optional list containing user specified knot values to be used for basis construction. For most bases the user simply supplies the knots to be used, which must match up with the `k` value supplied (note that the number of knots is not always just `k`). See `[tprs](smooth.construct.tp.smooth.spec)` for what happens in the `"tp"/"ts"` case. Different terms can use different numbers of knots, unless they share a covariate. |
| `sp` | A vector of smoothing parameters can be provided here. Smoothing parameters must be supplied in the order that the smooth terms appear in the model formula (without forgetting null space penalties). Negative elements indicate that the parameter should be estimated, and hence a mixture of fixed and estimated parameters is possible. If smooths share smoothing parameters then `length(sp)` must correspond to the number of underlying smoothing parameters. |
| `drop.unused.levels` | by default unused levels are dropped from factors before fitting. For some smooths involving factor variables you might want to turn this off. Only do so if you know what you are doing. |
| `centred` | Should centring constraints be applied to the smooths, as is usual with GAMS? Only set this to `FALSE` if you know exactly what you are doing. If `FALSE` there is a (usually global) intercept for each smooth. |
| `sp.prior` | `"gamma"` or `"log.uniform"` prior for the smoothing parameters? Do check that the default parameters are appropriate for your model in the JAGS code. |
| `diagonalize` | Should smooths be re-parameterized to have i.i.d. Gaussian priors (where possible)? For Gaussian data this allows efficient conjugate samplers to be used, and it can also work well with GLMs if the JAGS `"glm"` module is loaded, but otherwise it is often better to update smoothers blockwise, and not do this. |
| `sam` | jags sample object, containing at least fields `b` (coefficients) and `rho` (log smoothing parameters). May also contain field `mu` containing monitored expected response. |
| `pregam` | standard `mgcv` GAM setup data, as returned in `jagam` return list. |
| `edf.type` | Since EDF is not uniquely defined and may be affected by the stochastic structure added to the JAGS model file, 3 options are offered. See details. |
| `burnin` | the amount of burn in to discard from the simulation chains. Limited to .9 of the chain length. |
### Details
Smooths are easily incorporated into JAGS models using multivariate normal priors on the smooth coefficients. The smoothing parameters and smoothing penalty matrices directly specify the prior multivariate normal precision matrix. Normally a smoothing penalty does not correspond to a full rank precision matrix, implying an improper prior inappropriate for Gibbs sampling. To rectify this problem the null space penalties suggested in Marra and Wood (2011) are added to the usual penalties.
In an additive modelling context it is usual to centre the smooths, to avoid the identifiability issues associated with having an intercept for each smooth term (in addition to a global intercept). Under Gibbs sampling with JAGS it is technically possible to omit this centring, since we anyway force propriety on the priors, and this propriety implies formal model identifiability. However, in most situations this formal identifiability is rather artificial and does not imply statistically meaningful identifiability. Rather it serves only to massively inflate confidence intervals, since the multiple intercept terms are not identifiable from the data, but only from the prior. By default then, `jagam` imposes standard GAM identifiability constraints on all smooths. The `centred` argument does allow you to turn this off, but it is not recommended. If you do set `centred=FALSE` then chain convergence and mixing checks should be particularly stringent.
The final technical issue for model setup is the setting of initial conditions for the coefficients and smoothing parameters. The approach taken is to take the default initial smoothing parameter values used elsewhere by `mgcv`, and to take a single PIRLS fitting step with these smoothing parameters in order to obtain starting values for the smooth coefficients. In the setting of fully conjugate updating the initial values of the coefficients are not critical, and good results are obtained without supplying them. But in the usual setting in which slice sampling is required for at least some of the updates then very poor results can sometimes be obtained without initial values, as the sampler simply fails to find the region of the posterior mode.
The `sim2jam` function takes the partial `gam` object (`pregam`) from `jagam` along with simulation output in standard `rjags` form and creates a reduced version of a `gam` object, suitable for plotting and prediction of the model's smooth components. `sim2jam` computes effective degrees of freedom for each smooth, but it should be noted that there are several possibilities for doing this in the context of a model with a complex random effects structure. The simplest approach (`edf.type=0`) is to compute the degrees of freedom that the smooth would have had if it had been part of an unweighted Gaussian additive model. One might choose to use this option if the model has been modified so that the response distribution and/or link are not those that were specified to `jagam`. The second option (`edf.type=1`) uses the edf that would have been computed by `<gam>` had it produced these estimates - in the context in which the JAGS model modifications have all been about modifying the random effects structure, this is equivalent to simply setting all the random effects to zero for the effective degrees of freedom calculation. The default option (`edf.type=2`) is to base the EDF on the sample covariance matrix, `Vp`, of the model coefficients. If the simulation output (`sim`) includes a `mu` field, then this will be used to form the weight matrix `W` in `XWX = t(X)%*%W%*%X`, where the EDF is computed from `rowSums(Vp*XWX)*scale`. If `mu` is not supplied then it is estimated from the model matrix `X` and the mean of the simulated coefficients, but the resulting `W` may not be strictly compatible with the `Vp` matrix in this case. In the situation in which the fitted model is very different in structure from the regression model of the template produced by `jagam` then the default option may make no sense, and indeed it may be best to use option 0.
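The default `edf.type=2` calculation described above can be sketched in base R. This is an illustrative toy, not `sim2jam`'s internal code: `X`, `w`, `S` and `Vp` below are stand-ins for the model matrix, working weights, penalty and coefficient covariance matrix that would come from a real fit.

```r
## Toy sketch of the edf.type=2 EDF computation (Gaussian case, scale=1).
set.seed(1)
n <- 50; p <- 4
X <- cbind(1, matrix(runif(n*(p-1)), n, p-1)) ## stand-in model matrix
w <- rep(1, n)                                ## unit working weights
XWX <- t(X) %*% (w * X)                       ## X'WX
S <- diag(c(0, rep(.5, p-1)))                 ## stand-in penalty matrix
Vp <- solve(XWX + S)                          ## Bayesian covariance matrix
edf <- rowSums(Vp * XWX)                      ## per-coefficient EDF, diag(Vp %*% XWX)
sum(edf)                                      ## total EDF, below p because of penalization
```

Note that `rowSums(Vp*XWX)` is just an efficient way of forming `diag(Vp %*% XWX)` when `XWX` is symmetric.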
### Value
For `jagam` a three item list containing
| | |
| --- | --- |
| `pregam` | standard `mgcv` GAM setup data. |
| `jags.data` | list of arguments to be supplied to JAGS containing information referenced in model specification. |
| `jags.ini` | initialization data for smooth coefficients and smoothing parameters. |
For `sim2jam` an object of class `"jam"`: a partial version of an `mgcv` `[gamObject](gamobject)`, suitable for plotting and predicting.
### WARNINGS
Gibbs sampling is a very slow inferential method for standard GAMs. It is only likely to be worthwhile when complex random effects structures are required above what is possible with direct GAMM methods.
Check that the parameters of the priors on the parameters are fit for your purpose.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2016) Just Another Gibbs Additive Modeller: Interfacing JAGS and mgcv. Journal of Statistical Software 75(7):1-15 doi:10.18637/jss.v075.i07
Marra, G. and S.N. Wood (2011) Practical variable selection for generalized additive models. Computational Statistics & Data Analysis 55(7): 2372-2387
Here is a key early reference to smoothing using BUGS (although the approach and smooths used are a bit different to jagam)
Crainiceanu, C. M. D Ruppert, & M.P. Wand (2005) Bayesian Analysis for Penalized Spline Regression Using WinBUGS Journal of Statistical Software 14.
### See Also
`<gam>`, `<gamm>`, `<bam>`
### Examples
```
## the following illustrates a typical workflow. To run the
## 'Not run' code you need rjags (and JAGS) to be installed.
require(mgcv)
set.seed(2) ## simulate some data...
n <- 400
dat <- gamSim(1,n=n,dist="normal",scale=2)
## regular gam fit for comparison...
b0 <- gam(y~s(x0)+s(x1) + s(x2)+s(x3),data=dat,method="REML")
## Set directory and file name for file containing jags code.
## In real use you would *never* use tempdir() for this. It is
## only done here to keep CRAN happy, and avoid any chance of
## an accidental overwrite. Instead you would use
## setwd() to set an appropriate working directory in which
## to write the file, and just set the file name to what you
## want to call it (e.g. "test.jags" here).
jags.file <- paste(tempdir(),"/test.jags",sep="")
## Set up JAGS code and data. In this one might want to diagonalize
## to use conjugate samplers. Usually call 'setwd' first, to set
## directory in which model file ("test.jags") will be written.
jd <- jagam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,file=jags.file,
sp.prior="gamma",diagonalize=TRUE)
## In normal use the model in "test.jags" would now be edited to add
## the non-standard stochastic elements that require use of JAGS....
## Not run:
require(rjags)
load.module("glm") ## improved samplers for GLMs often worth loading
jm <-jags.model(jags.file,data=jd$jags.data,inits=jd$jags.ini,n.chains=1)
list.samplers(jm)
sam <- jags.samples(jm,c("b","rho","scale"),n.iter=10000,thin=10)
jam <- sim2jam(sam,jd$pregam)
plot(jam,pages=1)
jam
pd <- data.frame(x0=c(.5,.6),x1=c(.4,.2),x2=c(.8,.4),x3=c(.1,.1))
fv <- predict(jam,newdata=pd)
## and some minimal checking...
require(coda)
effectiveSize(as.mcmc.list(sam$b))
## End(Not run)
## a gamma example...
set.seed(1); n <- 400
dat <- gamSim(1,n=n,dist="normal",scale=2)
scale <- .5; Ey <- exp(dat$f/2)
dat$y <- rgamma(n,shape=1/scale,scale=Ey*scale)
jd <- jagam(y~s(x0)+te(x1,x2)+s(x3),data=dat,family=Gamma(link=log),
file=jags.file,sp.prior="log.uniform")
## In normal use the model in "test.jags" would now be edited to add
## the non-standard stochastic elements that require use of JAGS....
## Not run:
require(rjags)
## following sets random seed, but note that under JAGS 3.4 many
## models are still not fully repeatable (JAGS 4 should fix this)
jd$jags.ini$.RNG.name <- "base::Mersenne-Twister" ## setting RNG
jd$jags.ini$.RNG.seed <- 6 ## how to set RNG seed
jm <-jags.model(jags.file,data=jd$jags.data,inits=jd$jags.ini,n.chains=1)
list.samplers(jm)
sam <- jags.samples(jm,c("b","rho","scale","mu"),n.iter=10000,thin=10)
jam <- sim2jam(sam,jd$pregam)
plot(jam,pages=1)
jam
pd <- data.frame(x0=c(.5,.6),x1=c(.4,.2),x2=c(.8,.4),x3=c(.1,.1))
fv <- predict(jam,newdata=pd)
## End(Not run)
```
r None
`uniquecombs` find the unique rows in a matrix
-----------------------------------------------
### Description
This routine returns a matrix or data frame containing all the unique rows of the matrix or data frame supplied as its argument. That is, all the duplicate rows are stripped out. Note that the ordering of the rows on exit need not be the same as on entry. It also returns an index attribute for relating the result back to the original matrix.
### Usage
```
uniquecombs(x,ordered=FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | is an **R** matrix (numeric), or data frame. |
| `ordered` | set to `TRUE` to have the rows of the returned object in the same order regardless of input ordering. |
### Details
Models with more parameters than unique combinations of covariates are not identifiable. This routine provides a means of evaluating the number of unique combinations of covariates in a model.
When `x` has only one column then the routine uses `[unique](../../base/html/unique)` and `[match](../../base/html/match)` to get the index. When there are multiple columns then it uses `[paste0](../../base/html/paste)` to produce labels for each row, which should be unique if the row is unique. Then `unique` and `match` can be used as in the single column case. Obviously the pasting is inefficient, but still quicker for large n than the C based code that used to be called by this routine, which had O(nlog(n)) cost. In principle a hash table based solution in C would be only O(n) and much quicker in the multicolumn case.
`[unique](../../base/html/unique)` and `[duplicated](../../base/html/duplicated)`, can be used in place of this, if the full index is not needed. Relative performance is variable.
If `x` is not a matrix or data frame on entry then an attempt is made to coerce it to a data frame.
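The multi-column strategy described above can be sketched directly: build one label per row with `paste0`, then apply `unique` and `match` to the labels. This is only an illustration of the mechanism, not `uniquecombs` itself (which also handles data frames and the edge cases noted in the WARNINGS section).

```r
## Sketch of the paste0-based unique-row strategy for a toy matrix.
X <- matrix(c(1,2, 3,4, 1,2), 3, 2, byrow=TRUE)
key <- apply(X, 1, paste0, collapse="*")  ## one label per row
Xu <- X[!duplicated(key), , drop=FALSE]   ## unique rows
ind <- match(key, key[!duplicated(key)])  ## index back into Xu
Xu[ind[3], ]                              ## recovers row 3 of X
```

The `"*"` separator is why a `*` in a variable's character representation can in principle defeat the method, as the WARNINGS note.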
### Value
A matrix or data frame consisting of the unique rows of `x` (in arbitrary order).
The matrix or data frame has an `"index"` attribute. `index[i]` gives the row of the returned matrix that contains row i of the original matrix.
### WARNINGS
If a dataframe contains variables of a type other than numeric, logical, factor or character, which either have no `as.character` method, or whose `as.character` method is a many to one mapping, then the routine is likely to fail.
If the character representation of a dataframe variable (other than of class factor or character) contains `*` then in principle the method could fail (but with a warning).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) with thanks to Jonathan Rougier
### See Also
`[unique](../../base/html/unique)`, `[duplicated](../../base/html/duplicated)`, `[match](../../base/html/match)`.
### Examples
```
require(mgcv)
## matrix example...
X <- matrix(c(1,2,3,1,2,3,4,5,6,1,3,2,4,5,6,1,1,1),6,3,byrow=TRUE)
print(X)
Xu <- uniquecombs(X);Xu
ind <- attr(Xu,"index")
## find the value for row 3 of the original from Xu
Xu[ind[3],];X[3,]
## same with fixed output ordering
Xu <- uniquecombs(X,TRUE);Xu
ind <- attr(Xu,"index")
## find the value for row 3 of the original from Xu
Xu[ind[3],];X[3,]
## data frame example...
df <- data.frame(f=factor(c("er",3,"b","er",3,3,1,2,"b")),
x=c(.5,1,1.4,.5,1,.6,4,3,1.7),
bb = c(rep(TRUE,5),rep(FALSE,4)),
fred = c("foo","a","b","foo","a","vf","er","r","g"),
stringsAsFactors=FALSE)
uniquecombs(df)
```
r None
`mini.roots` Obtain square roots of penalty matrices
-----------------------------------------------------
### Description
INTERNAL function to obtain square roots, `B[[i]]`, of the penalty matrices `S[[i]]`'s having as few columns as possible.
### Usage
```
mini.roots(S, off, np, rank = NULL)
```
### Arguments
| | |
| --- | --- |
| `S` | a list of penalty matrices, in packed form. |
| `off` | a vector where the i-th element is the offset for the i-th matrix. The elements in columns `1:off[i]` of `B[[i]]` will be equal to zero. |
| `np` | total number of parameters. |
| `rank` | here `rank[i]` is optional supplied rank of `S[[i]]`. Set `rank[i] < 1`, or `rank=NULL` to estimate. |
### Value
A list of matrix square roots such that `S[[i]]=B[[i]]%*%t(B[[i]])`.
### Author(s)
Simon N. Wood <[email protected]>.
r None
`FFdes` Level 5 fractional factorial designs
---------------------------------------------
### Description
Computes level 5 fractional factorial designs for up to 120 factors using the algorithm of Sanchez and Sanchez (2005), and optionally central composite designs.
### Usage
```
FFdes(size=5,ccd=FALSE)
```
### Arguments
| | |
| --- | --- |
| `size` | number of factors up to 120. |
| `ccd` | if `TRUE`, adds points along each axis at the same distance from the origin as the points in the fractional factorial design, to create the outer points of a central composite design. Add central points to complete. |
### Details
Basically a translation of the code provided in the appendix of Sanchez and Sanchez (2005).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Sanchez, S. M. & Sanchez, P. J. (2005) Very large fractional factorial and central composite designs. ACM Transactions on Modeling and Computer Simulation. 15: 362-377
### Examples
```
require(mgcv)
plot(rbind(0,FFdes(2,TRUE)),xlab="x",ylab="y",
col=c(2,1,1,1,1,4,4,4,4),pch=19,main="CCD")
FFdes(5)
FFdes(5,TRUE)
```
r None
`Beta` GAM beta regression family
----------------------------------
### Description
Family for use with `<gam>` or `<bam>`, implementing regression for beta distributed data on (0,1). A linear predictor controls the mean, *mu* of the beta distribution, while the variance is then *mu(1-mu)/(1+phi)*, with parameter *phi* being estimated during fitting, alongside the smoothing parameters.
### Usage
```
betar(theta = NULL, link = "logit",eps=.Machine$double.eps*100)
```
### Arguments
| | |
| --- | --- |
| `theta` | the extra parameter (*phi* above). |
| `link` | The link function: one of `"logit"`, `"probit"`, `"cloglog"` and `"cauchit"`. |
| `eps` | the response variable will be truncated to the interval `[eps,1-eps]` if there are values outside this range. This truncation is not entirely benign, but too small a value of `eps` will cause stability problems if there are zeroes or ones in the response. |
### Details
These models are useful for proportions data which can not be modelled as binomial. Note the assumption that data are in (0,1), despite the fact that for some parameter values 0 and 1 are perfectly legitimate observations. The restriction is needed to keep the log likelihood bounded for all parameter values. Any data exactly at 0 or 1 are reset to be just above 0 or just below 1 using the `eps` argument (in fact any observation `<eps` is reset to `eps` and any observation `>1-eps` is reset to `1-eps`). Note the effect of this resetting. If *mu\*phi>1* then impossible 0s are replaced with highly improbable `eps` values. If the inequality is reversed then 0s with infinite probability density are replaced with `eps` values having high finite probability density. The equivalent condition for 1s is *(1-mu)\*phi>1*. Clearly all types of resetting are somewhat unsatisfactory, and care is needed if data contain 0s or 1s (often it makes sense to manually reset the 0s and 1s in a manner that somehow reflects the sampling setup).
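The `eps` resetting described above amounts to clipping the response into `[eps, 1-eps]` before fitting; a one-line base-R sketch of that step (not the family's internal code):

```r
## Clip boundary observations just inside (0,1), as betar's eps argument does.
eps <- .Machine$double.eps*100
y  <- c(0, .2, .97, 1)               ## toy proportions including exact 0 and 1
y2 <- pmin(pmax(y, eps), 1 - eps)    ## reset to eps and 1-eps respectively
range(y2)                            ## now strictly inside (0,1)
```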
### Value
An object of class `extended.family`.
### WARNINGS
Do read the details section if your data contain 0s and or 1s.
### Author(s)
Natalya Pya ([email protected]) and Simon Wood ([email protected])
### Examples
```
library(mgcv)
## Simulate some beta data...
set.seed(3);n<-400
dat <- gamSim(1,n=n)
mu <- binomial()$linkinv(dat$f/4-2)
phi <- .5
a <- mu*phi;b <- phi - a;
dat$y <- rbeta(n,a,b)
bm <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=betar(link="logit"),data=dat)
bm
plot(bm,pages=1)
```
r None
`gam.fit5.post.proc` Post-processing output of gam.fit5
--------------------------------------------------------
### Description
INTERNAL function for post-processing the output of `gam.fit5`.
### Usage
```
gam.fit5.post.proc(object, Sl, L, lsp0, S, off)
```
### Arguments
| | |
| --- | --- |
| `object` | output of `gam.fit5`. |
| `Sl` | penalty object, output of `Sl.setup`. |
| `L` | matrix mapping the working smoothing parameters. |
| `lsp0` | log smoothing parameters. |
| `S` | penalty matrix. |
| `off` | vector of offsets. |
### Value
A list containing:
* `R`: unpivoted Choleski of estimated expected hessian of log-likelihood.
* `Vb`: the Bayesian covariance matrix of the model parameters.
* `Ve`: "frequentist" alternative to `Vb`.
* `Vc`: corrected covariance matrix.
* `F`: matrix of effective degrees of freedom (EDF).
* `edf`: `diag(F)`.
* `edf2`: `diag(2F-FF)`.
### Author(s)
Simon N. Wood <[email protected]>.
r None
`chol.down` Deletion and rank one Cholesky factor update
---------------------------------------------------------
### Description
Given a Cholesky factor, `R`, of a matrix, `A`, `choldrop` finds the Cholesky factor of `A[-k,-k]`, where `k` is an integer. `cholup` finds the factor of *A+uu'* (update) or *A-uu'* (downdate).
### Usage
```
choldrop(R,k)
cholup(R,u,up)
```
### Arguments
| | |
| --- | --- |
| `R` | Cholesky factor of a matrix, `A`. |
| `k` | row and column of `A` to drop. |
| `u` | vector defining rank one update. |
| `up` | if `TRUE` compute update, otherwise downdate. |
### Details
First consider `choldrop`. If `R` is upper triangular then `t(R[,-k])%*%R[,-k] == A[-k,-k]`, but `R[,-k]` has elements on the first sub-diagonal, from its kth column onwards. To get from this to a triangular Cholesky factor of `A[-k,-k]` we can apply a sequence of Givens rotations from the left to eliminate the sub-diagonal elements. The routine does this. If `R` is a lower triangular factor then Givens rotations from the right are needed to remove the extra elements. If `n` is the dimension of `R` then the update has *O(n^2)* computational cost.
`cholup` (which assumes `R` is upper triangular) updates based on the observation that *R'R + uu' = [u,R'][u,R']' = [u,R']Q'Q[u,R']'*, and therefore we can construct *Q* so that *Q[u,R']'=[0,R1']'*, where *R1* is the modified factor. *Q* is constructed from a sequence of Givens rotations in order to zero the elements of *u*. Downdating is similar except that hyperbolic rotations have to be used in place of Givens rotations — see Golub and van Loan (2013, section 6.5.4) for details. Downdating only works if *A-uu'* is positive definite. Again the computational cost is *O(n^2)*.
Note that the updates are vector oriented, and are hence not susceptible to speed up by use of an optimized BLAS. The updates are set up to be relatively Cache friendly, in that in the upper triangular case successive Givens rotations are stored for sequential application column-wise, rather than being applied row-wise as soon as they are computed. Even so, the upper triangular update is slightly slower than the lower triangular update.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Golub GH and CF Van Loan (2013) Matrix Computations (4th edition) Johns Hopkins
### Examples
```
require(mgcv)
set.seed(0)
n <- 6
A <- crossprod(matrix(runif(n*n),n,n))
R0 <- chol(A)
k <- 3
Rd <- choldrop(R0,k)
range(Rd-chol(A[-k,-k]))
Rd;chol(A[-k,-k])
## same but using lower triangular factor A = LL'
L <- t(R0)
Ld <- choldrop(L,k)
range(Ld-t(chol(A[-k,-k])))
Ld;t(chol(A[-k,-k]))
## Rank one update example
u <- runif(n)
R <- cholup(R0,u,TRUE)
Ru <- chol(A+u %*% t(u)) ## direct for comparison
R;Ru
range(R-Ru)
## Downdate - just going back from R to R0
Rd <- cholup(R,u,FALSE)
R0;Rd
range(Rd-R0)
```
r None
`cSplineDes` Evaluate cyclic B spline basis
--------------------------------------------
### Description
Uses `splineDesign` to set up the model matrix for a cyclic B-spline basis.
### Usage
```
cSplineDes(x, knots, ord = 4, derivs=0)
```
### Arguments
| | |
| --- | --- |
| `x` | covariate values for smooth. |
| `knots` | The knot locations: the range of these must include all the data. |
| `ord` | order of the basis. 4 is a cubic spline basis. Must be >1. |
| `derivs` | order of derivative of the spline to evaluate, between 0 and `ord`-1. Recycled to length of `x`. |
### Details
The routine is a wrapper that sets up a B-spline basis, where the basis functions wrap at the first and last knot locations.
### Value
A matrix with `length(x)` rows and `length(knots)-1` columns.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`[cyclic.p.spline](smooth.construct.ps.smooth.spec)`
### Examples
```
require(mgcv)
## create some x's and knots...
n <- 200
x <- 0:(n-1)/(n-1);k<- 0:5/5
X <- cSplineDes(x,k) ## cyclic spline design matrix
## plot evaluated basis functions...
plot(x,X[,1],type="l"); for (i in 2:5) lines(x,X[,i],col=i)
## check that the ends match up....
ee <- X[1,]-X[n,];ee
tol <- .Machine$double.eps^.75
if (all.equal(ee,ee*0,tolerance=tol)!=TRUE)
stop("cyclic spline ends don't match!")
## similar with uneven data spacing...
x <- sort(runif(n)) + 1 ## sorting just makes end checking easy
k <- seq(min(x),max(x),length=8) ## create knots
X <- cSplineDes(x,k) ## get cyclic spline model matrix
plot(x,X[,1],type="l"); for (i in 2:ncol(X)) lines(x,X[,i],col=i)
ee <- X[1,]-X[n,];ee ## do ends match??
tol <- .Machine$double.eps^.75
if (all.equal(ee,ee*0,tolerance=tol)!=TRUE)
stop("cyclic spline ends don't match!")
```
r None
`gam.selection` Generalized Additive Model Selection
-----------------------------------------------------
### Description
This page is intended to provide some more information on how to select GAMs. In particular, it gives a brief overview of smoothness selection, and then discusses how this can be extended to select inclusion/exclusion of terms. Hypothesis testing approaches to the latter problem are also discussed.
### Smoothness selection criteria
Given a model structure specified by a gam model formula, `gam()` attempts to find the appropriate smoothness for each applicable model term using prediction error criteria or likelihood based methods. The prediction error criteria used are Generalized (Approximate) Cross Validation (GCV or GACV) when the scale parameter is unknown or an Un-Biased Risk Estimator (UBRE) when it is known. UBRE is essentially scaled AIC (Generalized case) or Mallows' Cp (additive model case). GCV and UBRE are covered in Craven and Wahba (1979) and Wahba (1990). Alternatively REML or maximum likelihood (ML) may be used for smoothness selection, by viewing the smooth components as random effects (in this case the variance component for each smooth random effect will be given by the scale parameter divided by the smoothing parameter — for smooths with multiple penalties, there will be multiple variance components). The `method` argument to `<gam>` selects the smoothness selection criterion.
Automatic smoothness selection is unlikely to be successful with few data, particularly with multiple terms to be selected. In addition GCV and UBRE/AIC score can occasionally display local minima that can trap the minimisation algorithms. GCV/UBRE/AIC scores become constant with changing smoothing parameters at very low or very high smoothing parameters, and on occasion these ‘flat’ regions can be separated from regions of lower score by a small ‘lip’. This seems to be the most common form of local minimum, but is usually avoidable by avoiding extreme smoothing parameters as starting values in optimization, and by avoiding big jumps in smoothing parameters while optimizing. Never the less, if you are suspicious of smoothing parameter estimates, try changing fit method (see `<gam>` arguments `method` and `optimizer`) and see if the estimates change, or try changing some or all of the smoothing parameters ‘manually’ (argument `sp` of `<gam>`, or `sp` arguments to `<s>` or `<te>`).
REML and ML are less prone to local minima than the other criteria, and may therefore be preferable.
### Automatic term selection
Unmodified smoothness selection by GCV, AIC, REML etc. will not usually remove a smooth from a model. This is because most smoothing penalties view some space of (non-zero) functions as ‘completely smooth’ and once a term is penalized heavily enough that it is in this space, further penalization does not change it.
However it is straightforward to modify smooths so that under heavy penalization they are penalized to the zero function and thereby ‘selected out’ of the model. There are two approaches.
The first approach is to modify the smoothing penalty with an additional shrinkage term. Smooth classes `cs.smooth` and `tprs.smooth` (specified by `"cs"` and `"ts"` respectively) have smoothness penalties which include a small shrinkage component, so that for large enough smoothing parameters the smooth becomes identically zero. This allows automatic smoothing parameter selection methods to effectively remove the term from the model altogether. The shrinkage component of the penalty is set at a level that usually makes negligible contribution to the penalization of the model, only becoming effective when the term is effectively ‘completely smooth’ according to the conventional penalty.
The second approach leaves the original smoothing penalty unchanged, but constructs an additional penalty for each smooth, which penalizes only functions in the null space of the original penalty (the ‘completely smooth’ functions). Hence, if all the smoothing parameters for a term tend to infinity, the term will be selected out of the model. This latter approach is more expensive computationally, but has the advantage that it can be applied automatically to any smooth term. The `select` argument to `<gam>` turns on this method.
In fact, as implemented, both approaches operate by eigen-decomposing the original penalty matrix. A new penalty is created on the null space: it is the matrix with the same eigenvectors as the original penalty, but with the originally positive eigenvalues set to zero, and the originally zero eigenvalues set to something positive. The first approach just adds a multiple of this penalty to the original penalty, where the multiple is chosen so that the new penalty cannot dominate the original. The second approach treats the new penalty as an extra penalty, with its own smoothing parameter.
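The eigen-based construction just described can be sketched for a simple second-difference penalty. This is only an illustration of the idea, not mgcv's internal implementation:

```r
## Build a null-space penalty from an original penalty S via eigen().
S <- crossprod(diff(diag(5), differences = 2)) ## 2nd-difference penalty, rank 3
es <- eigen(S, symmetric = TRUE)
tol <- max(es$values)*1e-10
null.dim <- sum(es$values < tol)          ## dimension of the null space (here 2:
                                          ## constant and linear functions)
U0 <- es$vectors[, es$values < tol, drop=FALSE]
S.null <- U0 %*% t(U0)                    ## penalizes only 'completely smooth' part
range(S %*% S.null)                       ## ~0: the penalties act on orthogonal spaces
```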
Of course, as with all model selection methods, some care must be taken to ensure that the automatic selection is sensible, and a decision about the effective degrees of freedom at which to declare a term ‘negligible’ has to be made.
### Interactive term selection
In general the most logically consistent method to use for deciding which terms to include in the model is to compare GCV/UBRE/ML scores for models with and without the term (REML scores should not be used to compare models with different fixed effects structures). When UBRE is the smoothness selection method this will give the same result as comparing by `[AIC](../../stats/html/aic)` (the AIC in this case uses the model EDF in place of the usual model DF). Similarly, comparison via GCV score and via AIC seldom yields different answers. Note that the negative binomial with estimated `theta` parameter is a special case: the GCV score is not informative, because of the `theta` estimation scheme used. More generally the score for the model with a smooth term can be compared to the score for the model with the smooth term replaced by appropriate parametric terms. Candidates for replacement by parametric terms are smooth terms with estimated degrees of freedom close to their minimum possible.
Candidates for removal can also be identified by reference to the approximate p-values provided by `summary.gam`, and by looking at the extent to which the confidence band for an estimated term includes the zero function. It is perfectly possible to perform backwards selection using p-values in the usual way: that is by sequentially dropping the single term with the highest non-significant p-value from the model and re-fitting, until all terms are significant. This suffers from the same problems as stepwise procedures for any GLM/LM, with the additional caveat that the p-values are only approximate. If adopting this approach, it is probably best to use ML smoothness selection.
Note that GCV and UBRE are not appropriate for comparing models using different families: in that case AIC should be used.
### Caveats/platitudes
Formal model selection methods are only appropriate for selecting between reasonable models. If formal model selection is attempted starting from a model that simply doesn't fit the data, then it is unlikely to provide meaningful results.
The more thought is given to appropriate model structure up front, the more successful model selection is likely to be. Simply starting with a hugely flexible model with ‘everything in’ and hoping that automatic selection will find the right structure is not often successful.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Marra, G. and S.N. Wood (2011) Practical variable selection for generalized additive models. Computational Statistics and Data Analysis 55,2372-2387.
Craven and Wahba (1979) Smoothing Noisy Data with Spline Functions. Numer. Math. 31:377-403
Venables and Ripley (1999) Modern Applied Statistics with S-PLUS
Wahba (1990) Spline Models of Observational Data. SIAM.
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood, S.N. (2008) Fast stable direct fitting and smoothness selection for generalized additive models. J.R.Statist. Soc. B 70(3):495-518
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam>`, `<step.gam>`
### Examples
```
## an example of automatic model selection via null space penalization
library(mgcv)
set.seed(3);n<-200
dat <- gamSim(1,n=n,scale=.15,dist="poisson") ## simulate data
dat$x4 <- runif(n, 0, 1);dat$x5 <- runif(n, 0, 1) ## spurious
b<-gam(y~s(x0)+s(x1)+s(x2)+s(x3)+s(x4)+s(x5),data=dat,
family=poisson,select=TRUE,method="REML")
summary(b)
plot(b,pages=1)
```
r None
`ziplss` Zero inflated (hurdle) Poisson location-scale model family
--------------------------------------------------------------------
### Description
The `ziplss` family implements a zero inflated (hurdle) Poisson model in which one linear predictor controls the probability of presence and the other controls the mean given presence. Usable only with `<gam>`, the linear predictors are specified via a list of formulae. Should be used with care: simply having a large number of zeroes is not an indication of zero inflation.
Requires integer count data.
### Usage
```
ziplss(link=list("identity","identity"))
```
### Arguments
| | |
| --- | --- |
| `link` | two item list specifying the link - currently only identity links are possible, as parameterization is directly in terms of log of Poisson response and logit of probability of presence. |
### Details
Used with `<gam>` to fit 2 stage zero inflated Poisson models. `gam` is called with a list containing 2 formulae, the first specifies the response on the left hand side and the structure of the linear predictor for the Poisson parameter on the right hand side. The second is one sided, specifying the linear predictor for the probability of presence on the right hand side.
The fitted values for this family will be a two column matrix. The first column is the log of the Poisson parameter, and the second column is the complementary log-log of the probability of presence. Predictions using `<predict.gam>` will also produce 2 column matrices for `type` `"link"` and `"response"`.
The null deviance computed for this model assumes that a single probability of presence and a single Poisson parameter are estimated.
For data with large areas of covariate space over which the response is zero it may be advisable to use low order penalties to avoid problems. For 1D smooths use e.g. `s(x,m=1)` and for isotropic smooths use `[Duchon.spline](smooth.construct.ds.smooth.spec)`s in place of thin plate terms with order 1 penalties, e.g. `s(x,z,m=c(1,.5))` — such smooths penalize towards constants, thereby avoiding extreme estimates when the data are uninformative.
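As an illustrative sketch (assuming the data simulated in the Examples section below are in scope), the presence linear predictor of such a model might use order 1 penalties:

```r
## Sketch: order 1 penalties for the presence linear predictor, so that
## uninformative regions are shrunk towards a constant, not a plane.
b <- gam(list(y ~ s(x2) + s(x3),
              ~ s(x0, m = 1) + s(x1, m = 1)), family = ziplss())
```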
### Value
An object inheriting from class `general.family`.
### WARNINGS
Zero inflated models are often over-used. Having lots of zeroes in the data does not in itself imply zero inflation. Having too many zeroes \*given the model mean\* may imply zero inflation.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv)
## simulate some data...
f0 <- function(x) 2 * sin(pi * x); f1 <- function(x) exp(2 * x)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
n <- 500;set.seed(5)
x0 <- runif(n); x1 <- runif(n)
x2 <- runif(n); x3 <- runif(n)
## Simulate probability of potential presence...
eta1 <- f0(x0) + f1(x1) - 3
p <- binomial()$linkinv(eta1)
y <- as.numeric(runif(n)<p) ## 1 for presence, 0 for absence
## Simulate y given potentially present (not exactly model fitted!)...
ind <- y>0
eta2 <- f2(x2[ind])/3
y[ind] <- rpois(exp(eta2),exp(eta2))
## Fit ZIP model...
b <- gam(list(y~s(x2)+s(x3),~s(x0)+s(x1)),family=ziplss())
b$outer.info ## check convergence
summary(b)
plot(b,pages=1)
```
r None
`scat` GAM scaled t family for heavy tailed data
-------------------------------------------------
### Description
Family for use with `<gam>` or `<bam>`, implementing regression for the heavy tailed response variables, y, using a scaled t model. The idea is that *(y - mu)/sig ~ t\_nu* where *mu* is determined by a linear predictor, while *sig* and *nu* are parameters to be estimated alongside the smoothing parameters.
### Usage
```
scat(theta = NULL, link = "identity",min.df=3)
```
### Arguments
| | |
| --- | --- |
| `theta` | the parameters to be estimated *nu = b + exp(theta\_1)* (where ‘b’ is `min.df`) and *sig = exp(theta\_2)*. If supplied and both positive, then taken to be fixed values of *nu* and *sig*. If any negative, then absolute values taken as starting values. |
| `link` | The link function: one of `"identity"`, `"log"` or `"inverse"`. |
| `min.df` | minimum degrees of freedom. Should not be set to 2 or less as this implies infinite response variance. |
### Details
Useful in place of Gaussian, when data are heavy tailed. `min.df` can be modified, but lower values can occasionally lead to convergence problems in smoothing parameter estimation. In any case `min.df` should be >2, since only then does a t random variable have finite variance.
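A minimal sketch of fixing rather than estimating the parameters (data assumed simulated as in the Examples section below):

```r
## Sketch: theta supplied with both entries positive, so nu = 4 and
## sig = 2 are treated as fixed rather than estimated.
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3),
         family = scat(theta = c(4, 2)), data = dat)
## theta = c(-4, -2) would instead use |4| and |2| as starting values.
```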
### Value
An object of class `extended.family`.
### Author(s)
Natalya Pya ([email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv)
## Simulate some t data...
set.seed(3);n<-400
dat <- gamSim(1,n=n)
dat$y <- dat$f + rt(n,df=4)*2
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=scat(link="identity"),data=dat)
b
plot(b,pages=1)
```
r None
`mgcv-package` Mixed GAM Computation Vehicle with GCV/AIC/REML smoothness estimation and GAMMs by REML/PQL
-----------------------------------------------------------------------------------------------------------
### Description
`mgcv` provides functions for generalized additive modelling (`<gam>` and `<bam>`) and generalized additive mixed modelling (`<gamm>`, and `<random.effects>`). The term GAM is taken to include any model dependent on unknown smooth functions of predictors and estimated by quadratically penalized (possibly quasi-) likelihood maximization. Available distributions are covered in `<family.mgcv>` and available smooths in `<smooth.terms>`.
Particular features of the package are facilities for automatic smoothness selection (Wood, 2004, 2011), and the provision of a variety of smooths of more than one variable. User defined smooths can be added. A Bayesian approach to confidence/credible interval calculation is provided. Linear functionals of smooths, penalization of parametric model terms and linkage of smoothing parameters are all supported. Lower level routines for generalized ridge regression and penalized linearly constrained least squares are also available. In addition to the main modelling functions, `<jagam>` provides facilities to ease the set up of models for use with JAGS, while `<ginla>` provides marginal inference via a version of Integrated Nested Laplace Approximation.
### Details
`mgcv` provides generalized additive modelling functions `<gam>`, `<predict.gam>` and `<plot.gam>`, which are very similar in use to the S functions of the same name designed by Trevor Hastie (with some extensions). However the underlying representation and estimation of the models is based on a penalized regression spline approach, with automatic smoothness selection. A number of other functions such as `<summary.gam>` and `<anova.gam>` are also provided, for extracting information from a fitted `[gamObject](gamobject)`.
Use of `<gam>` is much like use of `[glm](../../stats/html/glm)`, except that within a `gam` model formula, isotropic smooths of any number of predictors can be specified using `<s>` terms, while scale invariant smooths of any number of predictors can be specified using `<te>`, `[ti](te)` or `<t2>` terms. `<smooth.terms>` provides an overview of the built in smooth classes, and `<random.effects>` should be referred to for an overview of random effects terms (see also `[mrf](smooth.construct.mrf.smooth.spec)` for Markov random fields). Estimation is by penalized likelihood or quasi-likelihood maximization, with smoothness selection by GCV, GACV, gAIC/UBRE or (RE)ML. See `<gam>`, `<gam.models>`, `<linear.functional.terms>` and `<gam.selection>` for some discussion of model specification and selection. For detailed control of fitting see `<gam.convergence>`, `<gam>` arguments `method` and `optimizer` and `<gam.control>`. For checking and visualization see `<gam.check>`, `<choose.k>`, `<vis.gam>` and `<plot.gam>`. While a number of types of smoother are built into the package, it is also extendable with user defined smooths, see `<smooth.construct>`, for example.
A Bayesian approach to smooth modelling is used to derive standard errors on predictions, and hence credible intervals (see Marra and Wood, 2012). The Bayesian covariance matrix for the model coefficients is returned in `Vp` of the `[gamObject](gamobject)`. See `<predict.gam>` for examples of how this can be used to obtain credible regions for any quantity derived from the fitted model, either directly, or by direct simulation from the posterior distribution of the model coefficients. Approximate p-values can also be obtained for testing individual smooth terms for equality to the zero function, using similar ideas (see Wood, 2013a,b). Frequentist approximations can be used for hypothesis testing based model comparison. See `<anova.gam>` and `<summary.gam>` for more on hypothesis testing.
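A hedged sketch of this posterior simulation idea (simulation settings illustrative; `mvrnorm` is from MASS, and `vcov.gam` returns the Bayesian covariance matrix `Vp` by default):

```r
## Sketch: drawing from the Bayesian posterior of the coefficients to
## obtain a pointwise credible band for the fitted values.
library(mgcv); library(MASS)
set.seed(2)
dat <- gamSim(1, n = 200, scale = 2)
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
Xp <- predict(b, type = "lpmatrix")   ## maps coefficients to linear predictor
br <- mvrnorm(1000, coef(b), vcov(b)) ## posterior draws of the coefficients
fv <- Xp %*% t(br)                    ## posterior draws of the fitted values
ci <- apply(fv, 1, quantile, c(.025, .975)) ## 95% credible limits
```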
For large datasets (that is large n) see `<bam>` which is a version of `<gam>` with a much reduced memory footprint.
The package also provides a generalized additive mixed modelling function, `<gamm>`, based on a PQL approach and `lme` from the `nlme` library (for an `lme4` based version, see package `gamm4`). `gamm` is particularly useful for modelling correlated data (i.e. where a simple independence model for the residual variation is inappropriate). In addition, low level routine `<magic>` can fit models to data with a known correlation structure.
Some underlying GAM fitting methods are available as low level fitting functions: see `<magic>`. But there is little functionality that cannot be more conveniently accessed via `<gam>`. Penalized weighted least squares with linear equality and inequality constraints is provided by `<pcls>`.
For a complete list of functions type `library(help=mgcv)`. See also `[mgcv.FAQ](mgcv-faq)`.
### Author(s)
Simon Wood <[email protected]>
with contributions and/or help from Natalya Pya, Thomas Kneib, Kurt Hornik, Mike Lonergan, Henric Nilsson, Fabian Scheipl and Brian Ripley.
Polish translation - Lukasz Daniel; German translation - Chris Leick, Detlef Steuer; French Translation - Philippe Grosjean
Maintainer: Simon Wood <[email protected]>
Part funded by EPSRC: EP/K005251/1
### References
These provide details for the underlying mgcv methods, and fuller references to the large literature on which the methods are based.
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models (with discussion). Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Amer. Statist. Ass. 99:673-686.
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
Wood, S.N. (2013a) A simple test for random effects in regression models. Biometrika 100:1005-1010
Wood, S.N. (2013b) On p-values for smooth components of an extended generalized additive model. Biometrika 100:221-228
Wood, S.N. (2017) *Generalized Additive Models: an introduction with R (2nd edition)*, CRC
Development of mgcv version 1.8 was part funded by EPSRC grants EP/K005251/1 and EP/I000917/1.
### Examples
```
## see examples for gam and gamm
```
r None
`mono.con` Monotonicity constraints for a cubic regression spline
------------------------------------------------------------------
### Description
Finds linear constraints sufficient for monotonicity (and optionally upper and/or lower boundedness) of a cubic regression spline. The basis representation assumed is that given by the `gam`, `"cr"` basis: that is the spline has a set of knots, which have fixed x values, but the y values of which constitute the parameters of the spline.
### Usage
```
mono.con(x,up=TRUE,lower=NA,upper=NA)
```
### Arguments
| | |
| --- | --- |
| `x` | The array of knot locations. |
| `up` | If `TRUE` then the constraints imply increase, if `FALSE` then decrease. |
| `lower` | This specifies the lower bound on the spline unless it is `NA` in which case no lower bound is imposed. |
| `upper` | This specifies the upper bound on the spline unless it is `NA` in which case no upper bound is imposed. |
### Details
Consider the natural cubic spline passing through the points *(x\_i,p\_i), i=1..n*. Then it is possible to find a relatively small set of linear constraints on *p* sufficient to ensure monotonicity (and bounds if required): *Ap >= b*. Details are given in Wood (1994).
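A minimal sketch (knot locations illustrative):

```r
## Sketch: constraints sufficient for an increasing "cr" spline on
## 10 knots, bounded below by 0; the fit must satisfy A %*% p >= b.
library(mgcv)
xk <- seq(0, 1, length = 10)
con <- mono.con(xk, up = TRUE, lower = 0)
str(con) ## list with constraint matrix A and vector b, for use in pcls()
```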
### Value
a list containing constraint matrix `A` and constraint vector `b`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Gill, P.E., Murray, W. and Wright, M.H. (1981) *Practical Optimization*. Academic Press, London.
Wood, S.N. (1994) Monotonic smoothing splines fitted by cross validation. *SIAM Journal on Scientific Computing* **15**(5), 1126–1133.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<magic>`, `<pcls>`
### Examples
```
## see ?pcls
```
r None
`gam.side` Identifiability side conditions for a GAM
-----------------------------------------------------
### Description
GAM formulae with repeated variables may only correspond to identifiable models given some side conditions. This routine works out appropriate side conditions, based on zeroing redundant parameters. It is called from `mgcv:::gam.setup` and is not intended to be called by users.
The method identifies nested and repeated variables by their names, but numerically evaluates which constraints need to be imposed. Constraints are always applied to smooths of more variables in preference to smooths of fewer variables. The numerical approach allows appropriate constraints to be applied to models constructed using any smooths, including user defined smooths.
### Usage
```
gam.side(sm,Xp,tol=.Machine$double.eps^.5,with.pen=FALSE)
```
### Arguments
| | |
| --- | --- |
| `sm` | A list of smooth objects as returned by `<smooth.construct>`. |
| `Xp` | The model matrix for the strictly parametric model components. |
| `tol` | The tolerance to use when assessing linear dependence of smooths. |
| `with.pen` | Should the computation of dependence consider the penalties or not. Doing so will lead to fewer constraints. |
### Details
Models such as `y~s(x)+s(z)+s(x,z)` can be estimated by `<gam>`, but require identifiability constraints to be applied, to make them identifiable. This routine does this, effectively setting redundant parameters to zero. When the redundancy is between smooths of lower and higher numbers of variables, the constraint is always applied to the smooth of the higher number of variables.
Dependent smooths are identified symbolically, but which constraints are needed to ensure identifiability of these smooths is determined numerically, using `[fixDependence](fixdependence)`. This makes the routine rather general, and not dependent on any particular basis.
`Xp` is used to check whether there is a constant term in the model (or columns that can be linearly combined to give a constant). This is because centred smooths can appear independent, when they would be dependent if there is a constant in the model, so dependence testing needs to take account of this.
### Value
A list of smooths, with model matrices and penalty matrices adjusted to automatically impose the required constraints. Any smooth that has been modified will have an attribute `"del.index"`, listing the columns of its model matrix that were deleted. This index is used in the creation of prediction matrices for the term.
### WARNINGS
Much better statistical stability will be obtained by using models like `y~s(x)+s(z)+ti(x,z)` or `y~ti(x)+ti(z)+ti(x,z)` rather than `y~s(x)+s(z)+s(x,z)`, since the former are designed not to require further constraint.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`[ti](te)`, `<gam.models>`
### Examples
```
## The first two examples here illustrate models that cause
## gam.side to impose constraints, but both are a bad way
## of estimating such models. The 3rd example is the right
## way....
set.seed(7)
require(mgcv)
dat <- gamSim(n=400,scale=2) ## simulate data
## estimate model with redundant smooth interaction (bad idea).
b<-gam(y~s(x0)+s(x1)+s(x0,x1)+s(x2),data=dat)
plot(b,pages=1)
## Simulate data with real interaction...
dat <- gamSim(2,n=500,scale=.1)
old.par<-par(mfrow=c(2,2))
## a fully nested tensor product example (bad idea)
b <- gam(y~s(x,bs="cr",k=6)+s(z,bs="cr",k=6)+te(x,z,k=6),
data=dat$data)
plot(b)
old.par<-par(mfrow=c(2,2))
## A fully nested tensor product example, done properly,
## so that gam.side is not needed to ensure identifiability.
## ti terms are designed to produce interaction smooths
## suitable for adding to main effects (we could also have
## used s(x) and s(z) without a problem, but not s(z,x)
## or te(z,x)).
b <- gam(y ~ ti(x,k=6) + ti(z,k=6) + ti(x,z,k=6),
data=dat$data)
plot(b)
par(old.par)
rm(dat)
```
r None
`model.matrix.gam` Extract model matrix from GAM fit
-----------------------------------------------------
### Description
Obtains the model matrix from a fitted `gam` object.
### Usage
```
## S3 method for class 'gam'
model.matrix(object, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | fitted model object of class `gam` as produced by `gam()`. |
| `...` | other arguments, passed to `<predict.gam>`. |
### Details
Calls `<predict.gam>` with no `newdata` argument and `type="lpmatrix"` in order to obtain the model matrix of `object`.
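So, as a small self-contained sketch (simulated data, illustrative only), the two routes yield the same matrix:

```r
## Sketch: model.matrix.gam is equivalent to predict.gam with
## type = "lpmatrix" and no newdata argument.
library(mgcv)
set.seed(0)
x <- runif(30); y <- sin(2 * pi * x) + rnorm(30) * .2
mod <- gam(y ~ s(x))
X1 <- model.matrix(mod)
X2 <- predict(mod, type = "lpmatrix")
all.equal(unclass(X1), unclass(X2), check.attributes = FALSE)
```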
### Value
A model matrix.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2006b) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC Press.
### See Also
`<gam>`
### Examples
```
require(mgcv)
n <- 15
x <- runif(n)
y <- sin(x*2*pi) + rnorm(n)*.2
mod <- gam(y~s(x,bs="cc",k=6),knots=list(x=seq(0,1,length=6)))
model.matrix(mod)
```
r None
`gam` Generalized additive models with integrated smoothness estimation
------------------------------------------------------------------------
### Description
Fits a generalized additive model (GAM) to data, the term ‘GAM’ being taken to include any quadratically penalized GLM and a variety of other models estimated by a quadratically penalised likelihood type approach (see `<family.mgcv>`). The degree of smoothness of model terms is estimated as part of fitting. `gam` can also fit any GLM subject to multiple quadratic penalties (including estimation of degree of penalization). Confidence/credible intervals are readily available for any quantity predicted using a fitted model.
Smooth terms are represented using penalized regression splines (or similar smoothers) with smoothing parameters selected by GCV/UBRE/AIC/REML or by regression splines with fixed degrees of freedom (mixtures of the two are permitted). Multi-dimensional smooths are available using penalized thin plate regression splines (isotropic) or tensor product splines (when an isotropic smooth is inappropriate), and users can add smooths. Linear functionals of smooths can also be included in models. For an overview of the smooths available see `<smooth.terms>`. For more on specifying models see `<gam.models>`, `<random.effects>` and `<linear.functional.terms>`. For more on model selection see `<gam.selection>`. Do read `<gam.check>` and `<choose.k>`.
See package `gam`, for GAMs via the original Hastie and Tibshirani approach (see details for differences to this implementation).
For very large datasets see `<bam>`, for mixed GAM see `<gamm>` and `<random.effects>`.
### Usage
```
gam(formula,family=gaussian(),data=list(),weights=NULL,subset=NULL,
na.action,offset=NULL,method="GCV.Cp",
optimizer=c("outer","newton"),control=list(),scale=0,
select=FALSE,knots=NULL,sp=NULL,min.sp=NULL,H=NULL,gamma=1,
fit=TRUE,paraPen=NULL,G=NULL,in.out,drop.unused.levels=TRUE,
drop.intercept=NULL,discrete=FALSE,...)
```
### Arguments
| | |
| --- | --- |
| `formula` | A GAM formula, or a list of formulae (see `<formula.gam>` and also `<gam.models>`). These are exactly like the formula for a GLM except that smooth terms, `<s>`, `<te>`, `[ti](te)` and `<t2>`, can be added to the right hand side to specify that the linear predictor depends on smooth functions of predictors (or linear functionals of these). |
| `family` | This is a family object specifying the distribution and link to use in fitting etc (see `[glm](../../stats/html/glm)` and `[family](../../stats/html/family)`). See `<family.mgcv>` for a full list of what is available, which goes well beyond exponential family. Note that `quasi` families actually result in the use of extended quasi-likelihood if `method` is set to a RE/ML method (McCullagh and Nelder, 1989, 9.6). |
| `data` | A data frame or list containing the model response variable and covariates required by the formula. By default the variables are taken from `environment(formula)`: typically the environment from which `gam` is called. |
| `weights` | prior weights on the contribution of the data to the log likelihood. Note that a weight of 2, for example, is equivalent to having made exactly the same observation twice. If you want to re-weight the contributions of each datum without changing the overall magnitude of the log likelihood, then you should normalize the weights (e.g. `weights <- weights/mean(weights)`). |
| `subset` | an optional vector specifying a subset of observations to be used in the fitting process. |
| `na.action` | a function which indicates what should happen when the data contain ‘NA’s. The default is set by the ‘na.action’ setting of ‘options’, and is ‘na.fail’ if that is unset. The “factory-fresh” default is ‘na.omit’. |
| `offset` | Can be used to supply a model offset for use in fitting. Note that this offset will always be completely ignored when predicting, unlike an offset included in `formula` (this used to conform to the behaviour of `lm` and `glm`). |
| `control` | A list of fit control parameters to replace defaults returned by `<gam.control>`. Values not set assume default values. |
| `method` | The smoothing parameter estimation method. `"GCV.Cp"` to use GCV for unknown scale parameter and Mallows' Cp/UBRE/AIC for known scale. `"GACV.Cp"` is equivalent, but using GACV in place of GCV. `"REML"` for REML estimation, including of unknown scale, `"P-REML"` for REML estimation, but using a Pearson estimate of the scale. `"ML"` and `"P-ML"` are similar, but using maximum likelihood in place of REML. Beyond the exponential family `"REML"` is the default, and the only other option is `"ML"`. |
| `optimizer` | An array specifying the numerical optimization method to use to optimize the smoothing parameter estimation criterion (given by `method`). `"perf"` (deprecated) for performance iteration. `"outer"` for the more stable direct approach. `"outer"` can use several alternative optimizers, specified in the second element of `optimizer`: `"newton"` (default), `"bfgs"`, `"optim"`, `"nlm"` and `"nlm.fd"` (the latter is based entirely on finite differenced derivatives and is very slow). `"efs"` for the extended Fellner Schall method of Wood and Fasiolo (2017). |
| `scale` | If this is positive then it is taken as the known scale parameter. Negative signals that the scale parameter is unknown. 0 signals that the scale parameter is 1 for Poisson and binomial and unknown otherwise. Note that (RE)ML methods can only work with scale parameter 1 for the Poisson and binomial cases. |
| `select` | If this is `TRUE` then `gam` can add an extra penalty to each term so that it can be penalized to zero. This means that the smoothing parameter estimation that is part of fitting can completely remove terms from the model. If the corresponding smoothing parameter is estimated as zero then the extra penalty has no effect. Use `gamma` to increase level of penalization. |
| `knots` | this is an optional list containing user specified knot values to be used for basis construction. For most bases the user simply supplies the knots to be used, which must match up with the `k` value supplied (note that the number of knots is not always just `k`). See `[tprs](smooth.construct.tp.smooth.spec)` for what happens in the `"tp"/"ts"` case. Different terms can use different numbers of knots, unless they share a covariate. |
| `sp` | A vector of smoothing parameters can be provided here. Smoothing parameters must be supplied in the order that the smooth terms appear in the model formula. Negative elements indicate that the parameter should be estimated, and hence a mixture of fixed and estimated parameters is possible. If smooths share smoothing parameters then `length(sp)` must correspond to the number of underlying smoothing parameters. |
| `min.sp` | Lower bounds can be supplied for the smoothing parameters. Note that if this option is used then the smoothing parameters `full.sp`, in the returned object, will need to be added to what is supplied here to get the smoothing parameters actually multiplying the penalties. `length(min.sp)` should always be the same as the total number of penalties (so it may be longer than `sp`, if smooths share smoothing parameters). |
| `H` | A user supplied fixed quadratic penalty on the parameters of the GAM can be supplied, with this as its coefficient matrix. A common use of this term is to add a ridge penalty to the parameters of the GAM in circumstances in which the model is close to un-identifiable on the scale of the linear predictor, but perfectly well defined on the response scale. |
| `gamma` | Increase this beyond 1 to produce smoother models. `gamma` multiplies the effective degrees of freedom in the GCV or UBRE/AIC. `n/gamma` can be viewed as an effective sample size in the GCV score, and this also enables it to be used with REML/ML. Ignored with P-RE/ML or the `efs` optimizer. |
| `fit` | If this argument is `TRUE` then `gam` sets up the model and fits it, but if it is `FALSE` then the model is set up and an object `G` containing what would be required to fit is returned. See argument `G`. |
| `paraPen` | optional list specifying any penalties to be applied to parametric model terms. `<gam.models>` explains more. |
| `G` | Usually `NULL`, but may contain the object returned by a previous call to `gam` with `fit=FALSE`, in which case all other arguments are ignored except for `sp`, `gamma`, `in.out`, `scale`, `control`, `method` `optimizer` and `fit`. |
| `in.out` | optional list for initializing outer iteration. If supplied then this must contain two elements: `sp` should be an array of initialization values for all smoothing parameters (there must be a value for all smoothing parameters, whether fixed or to be estimated, but those for fixed s.p.s are not used); `scale` is the typical scale of the GCV/UBRE function, for passing to the outer optimizer, or the initial value of the scale parameter, if this is to be estimated by RE/ML. |
| `drop.unused.levels` | by default unused levels are dropped from factors before fitting. For some smooths involving factor variables you might want to turn this off. Only do so if you know what you are doing. |
| `drop.intercept` | Set to `TRUE` to force the model to really not have a constant in the parametric model part, even with factor variables present. Can be a vector when `formula` is a list. |
| `discrete` | experimental option for setting up models for use with discrete methods employed in `<bam>`. Do not modify. |
| `...` | further arguments for passing on e.g. to `gam.fit` (such as `mustart`). |
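A hedged sketch of how some of these arguments combine (simulation settings illustrative):

```r
## Sketch: mixing fixed and estimated smoothing parameters via sp,
## and null space penalization via select.
library(mgcv)
set.seed(4)
dat <- gamSim(1, n = 200, scale = 2)
## s(x0)'s smoothing parameter fixed at 0.01; the rest estimated
## (negative entries signal estimation).
b1 <- gam(y ~ s(x0) + s(x1) + s(x2), data = dat, sp = c(0.01, -1, -1))
## extra penalties allow terms to be shrunk completely out of the model
b2 <- gam(y ~ s(x0) + s(x1) + s(x2), data = dat,
          select = TRUE, method = "REML")
```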
### Details
A generalized additive model (GAM) is a generalized linear model (GLM) in which the linear predictor is given by a user specified sum of smooth functions of the covariates plus a conventional parametric component of the linear predictor. A simple example is:
*log(E(y\_i))= a + f\_1(x\_1i)+f\_2(x\_2i)*
where the (independent) response variables *y\_i~Poi*, and *f\_1* and *f\_2* are smooth functions of covariates *x\_1* and *x\_2*. The log is an example of a link function. Note that to be identifiable the model requires constraints on the smooth functions. By default these are imposed automatically and require that the function sums to zero over the observed covariate values (the presence of a metric `by` variable is the only case which usually suppresses this).
If absolutely any smooth functions were allowed in model fitting then maximum likelihood estimation of such models would invariably result in complex over-fitting estimates of *f\_1* and *f\_2*. For this reason the models are usually fit by penalized likelihood maximization, in which the model (negative log) likelihood is modified by the addition of a penalty for each smooth function, penalizing its ‘wiggliness’. To control the trade-off between penalizing wiggliness and penalizing badness of fit each penalty is multiplied by an associated smoothing parameter: how to estimate these parameters, and how to practically represent the smooth functions are the main statistical questions introduced by moving from GLMs to GAMs.
The `mgcv` implementation of `gam` represents the smooth functions using penalized regression splines, and by default uses basis functions for these splines that are designed to be optimal, given the number of basis functions used. The smooth terms can be functions of any number of covariates and the user has some control over how smoothness of the functions is measured.
`gam` in `mgcv` solves the smoothing parameter estimation problem by using the Generalized Cross Validation (GCV) criterion
*n D/(n - DoF)^2*
or an Un-Biased Risk Estimator (UBRE )criterion
*D/n + 2 s DoF / n -s*
where *D* is the deviance, *n* the number of data, *s* the scale parameter and *DoF* the effective degrees of freedom of the model. Notice that UBRE is effectively just AIC rescaled, but is only used when *s* is known.
Alternatives are GACV, or a Laplace approximation to REML. There is some evidence that the latter may actually be the most effective choice. The main computational challenge solved by the `mgcv` package is to optimize the smoothness selection criteria efficiently and reliably.
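As a hedged check (settings illustrative), the GCV score above can be recomputed from a fit's deviance and total effective degrees of freedom, and should agree at least approximately with the stored score:

```r
## Sketch: recomputing n*D/(n - DoF)^2 by hand and comparing it to the
## stored score (gamma = 1, default GCV.Cp smoothness selection).
library(mgcv)
set.seed(6)
dat <- gamSim(1, n = 200, scale = 2)
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
n <- nrow(dat); D <- deviance(b); DoF <- sum(b$edf)
c(by.hand = n * D / (n - DoF)^2, stored = b$gcv.ubre)
```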
Broadly `gam` works by first constructing basis functions and one or more quadratic penalty coefficient matrices for each smooth term in the model formula, obtaining a model matrix for the strictly parametric part of the model formula, and combining these to obtain a complete model matrix (/design matrix) and a set of penalty matrices for the smooth terms. The linear identifiability constraints are also obtained at this point. The model is fit using `<gam.fit>`, `<gam.fit3>` or variants, which are modifications of `[glm.fit](../../stats/html/glm)`. The GAM penalized likelihood maximization problem is solved by Penalized Iteratively Re-weighted Least Squares (P-IRLS) (see e.g. Wood 2000). Smoothing parameter selection is possible in one of two ways. (i) ‘Performance iteration’ uses the fact that at each P-IRLS step a working penalized linear model is estimated, and the smoothing parameter estimation can be performed for each such working model. Eventually, in most cases, both model parameter estimates and smoothing parameter estimates converge. This option is available in `<bam>` and `<gamm>` but is deprecated for `gam`. (ii) Alternatively the P-IRLS scheme is iterated to convergence for each trial set of smoothing parameters, and GCV, UBRE or REML scores are only evaluated on convergence - optimization is then ‘outer’ to the P-IRLS loop: in this case the P-IRLS iteration has to be differentiated, to facilitate optimization, and `<gam.fit3>` or one of its variants is used in place of `gam.fit`. `gam` uses the second method, outer iteration.
Several alternative basis-penalty types are built in for representing model smooths, but alternatives can easily be added (see `<smooth.terms>` for an overview and `<smooth.construct>` for how to add smooth classes). The choice of the basis dimension (`k` in the `s`, `te`, `ti` and `t2` terms) is something that should be considered carefully (the exact value is not critical, but it is important not to make it restrictively small, nor very large and computationally costly). The basis should be chosen to be larger than is believed to be necessary to approximate the smooth function concerned. The effective degrees of freedom for the smooth will then be controlled by the smoothing penalty on the term, and (usually) selected automatically (with an upper limit set by `k-1` or occasionally `k`). Of course the `k` should not be made too large, or computation will be slow (or in extreme cases there will be more coefficients to estimate than there are data).
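The advice on choosing `k` can be checked in practice. The following sketch (illustrative settings) fits a smooth with a deliberately small basis, inspects it with `gam.check`, and then refits with a more generous `k`:

```r
library(mgcv)
set.seed(4)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)
b <- gam(y ~ s(x2, k = 5), data = dat)    ## basis probably too small for f(x2)
gam.check(b)                              ## low k-index / edf near k-1 flag trouble
b2 <- gam(y ~ s(x2, k = 20), data = dat)  ## same smooth, larger basis
sum(b2$edf)                               ## total edf should settle well below k-1
```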
Note that `gam` assumes a very inclusive definition of what counts as a GAM: basically any penalized GLM can be used: to this end `gam` allows the non-smooth model components to be penalized via argument `paraPen` and allows the linear predictor to depend on general linear functionals of smooths, via the summation convention mechanism described in `<linear.functional.terms>`. `<family.mgcv>` details what is available beyond GLMs and the exponential family.
Details of the default underlying fitting methods are given in Wood (2011 and 2004). Some alternative methods are discussed in Wood (2000 and 2006).
`gam()` is not a clone of Trevor Hastie's original (as supplied in S-PLUS or package `gam`). The major differences are (i) that by default estimation of the degree of smoothness of model terms is part of model fitting, (ii) a Bayesian approach to variance estimation is employed that makes for easier confidence interval calculation (with good coverage probabilities), (iii) that the model can depend on any (bounded) linear functional of smooth terms, (iv) the parametric part of the model can be penalized, (v) simple random effects can be incorporated, and (vi) the facilities for incorporating smooths of more than one variable are different: specifically there are no `lo` smooths, but instead (a) `<s>` terms can have more than one argument, implying an isotropic smooth and (b) `<te>`, `[ti](te)` or `<t2>` smooths are provided as an effective means for modelling smooth interactions of any number of variables via scale invariant tensor product smooths. Splines on the sphere, Duchon splines and Gaussian Markov Random Fields are also available. (vii) Models beyond the exponential family are available. See package `gam` for GAMs via the original Hastie and Tibshirani approach.
### Value
If `fit=FALSE` the function returns a list `G` of items needed to fit a GAM, but doesn't actually fit it.
Otherwise the function returns an object of class `"gam"` as described in `[gamObject](gamobject)`.
### WARNINGS
The default basis dimensions used for smooth terms are essentially arbitrary, and it should be checked that they are not too small. See `<choose.k>` and `<gam.check>`.
You must have more unique combinations of covariates than the model has total parameters. (Total parameters is sum of basis dimensions plus sum of non-spline terms less the number of spline terms).
Automatic smoothing parameter selection is not likely to work well when fitting models to very few response data.
For data with many zeroes clustered together in the covariate space it is quite easy to set up GAMs which suffer from identifiability problems, particularly when using Poisson or binomial families. The problem is that with e.g. log or logit links, mean value zero corresponds to an infinite range on the linear predictor scale.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
Front end design inspired by the S function of the same name based on the work of Hastie and Tibshirani (1990). Underlying methods owe much to the work of Wahba (e.g. 1990) and Gu (e.g. 2002).
### References
Key References on this implementation:
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models (with discussion). Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Amer. Statist. Ass. 99:673-686. [Default method for additive case by GCV (but no longer for generalized)]
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
Wood, S.N. (2006a) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
Wood, S.N. and M. Fasiolo (2017) A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models. Biometrics 73 (4), 1071-1081
Wood S.N., F. Scheipl and J.J. Faraway (2012) Straightforward intermediate rank tensor product smoothing in mixed models. Statistics and Computing.
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
Key Reference on GAMs and related models:
Hastie (1993) in Chambers and Hastie (1993) Statistical Models in S. Chapman and Hall.
Hastie and Tibshirani (1990) Generalized Additive Models. Chapman and Hall.
Wahba (1990) Spline Models of Observational Data. SIAM
Wood, S.N. (2000) Modelling and Smoothing Parameter Estimation with Multiple Quadratic Penalties. J.R.Statist.Soc.B 62(2):413-428 [The original mgcv paper, but no longer the default methods.]
Background References:
Green and Silverman (1994) Nonparametric Regression and Generalized Linear Models. Chapman and Hall.
Gu and Wahba (1991) Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method. SIAM J. Sci. Statist. Comput. 12:383-398
Gu (2002) Smoothing Spline ANOVA Models, Springer.
McCullagh and Nelder (1989) Generalized Linear Models 2nd ed. Chapman & Hall.
O'Sullivan, Yandall and Raynor (1986) Automatic smoothing of regression functions in generalized linear models. J. Am. Statist.Ass. 81:96-103
Wood (2001) mgcv:GAMs and Generalized Ridge Regression for R. R News 1(2):20-25
Wood and Augustin (2002) GAMs with integrated model selection using penalized regression splines and applications to environmental modelling. Ecological Modelling 157:157-177
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<mgcv-package>`, `[gamObject](gamobject)`, `<gam.models>`, `<smooth.terms>`, `<linear.functional.terms>`, `<s>`, `<te>`, `<predict.gam>`, `<plot.gam>`, `<summary.gam>`, `<gam.side>`, `<gam.selection>`, `<gam.control>`, `<gam.check>`, `<negbin>`, `<magic>`, `<vis.gam>`
### Examples
```
## see also examples in ?gam.models (e.g. 'by' variables,
## random effects and tricks for large binary datasets)
library(mgcv)
set.seed(2) ## simulate some data...
dat <- gamSim(1,n=400,dist="normal",scale=2)
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
summary(b)
plot(b,pages=1,residuals=TRUE) ## show partial residuals
plot(b,pages=1,seWithMean=TRUE) ## `with intercept' CIs
## run some basic model checks, including checking
## smoothing basis dimensions...
gam.check(b)
## same fit in two parts .....
G <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),fit=FALSE,data=dat)
b <- gam(G=G)
print(b)
## 2 part fit enabling manipulation of smoothing parameters...
G <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),fit=FALSE,data=dat,sp=b$sp)
G$lsp0 <- log(b$sp*10) ## provide log of required sp vec
gam(G=G) ## it's smoother
## change the smoothness selection method to REML
b0 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="REML")
## use an alternative plotting scheme, and make intervals include
## smoothing parameter uncertainty...
plot(b0,pages=1,scheme=1,unconditional=TRUE)
## Would a smooth interaction of x0 and x1 be better?
## Use tensor product smooth of x0 and x1, basis
## dimension 49 (see ?te for details, also ?t2).
bt <- gam(y~te(x0,x1,k=7)+s(x2)+s(x3),data=dat,
method="REML")
plot(bt,pages=1)
plot(bt,pages=1,scheme=2) ## alternative visualization
AIC(b0,bt) ## interaction worse than additive
## Alternative: test for interaction with a smooth ANOVA
## decomposition (this time between x2 and x1)
bt <- gam(y~s(x0)+s(x1)+s(x2)+s(x3)+ti(x1,x2,k=6),
data=dat,method="REML")
summary(bt)
## If it is believed that x0 and x1 are naturally on
## the same scale, and should be treated isotropically
## then could try...
bs <- gam(y~s(x0,x1,k=40)+s(x2)+s(x3),data=dat,
method="REML")
plot(bs,pages=1)
AIC(b0,bt,bs) ## additive still better.
## Now do automatic terms selection as well
b1 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat,
method="REML",select=TRUE)
plot(b1,pages=1)
## set the smoothing parameter for the first term, estimate rest ...
bp <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),sp=c(0.01,-1,-1,-1),data=dat)
plot(bp,pages=1,scheme=1)
## alternatively...
bp <- gam(y~s(x0,sp=.01)+s(x1)+s(x2)+s(x3),data=dat)
# set lower bounds on smoothing parameters ....
bp<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),
min.sp=c(0.001,0.01,0,10),data=dat)
print(b);print(bp)
# same with REML
bp<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),
min.sp=c(0.1,0.1,0,10),data=dat,method="REML")
print(b0);print(bp)
## now a GAM with 3df regression spline term & 2 penalized terms
b0 <- gam(y~s(x0,k=4,fx=TRUE,bs="tp")+s(x1,k=12)+s(x2,k=15),data=dat)
plot(b0,pages=1)
## now simulate poisson data...
set.seed(6)
dat <- gamSim(1,n=2000,dist="poisson",scale=.1)
## use "cr" basis to save time, with 2000 data...
b2<-gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+
s(x3,bs="cr"),family=poisson,data=dat,method="REML")
plot(b2,pages=1)
## drop x3, but initialize sp's from previous fit, to
## save more time...
b2a<-gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr"),
family=poisson,data=dat,method="REML",
in.out=list(sp=b2$sp[1:3],scale=1))
par(mfrow=c(2,2))
plot(b2a)
par(mfrow=c(1,1))
## similar example using GACV...
dat <- gamSim(1,n=400,dist="poisson",scale=.25)
b4<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,
data=dat,method="GACV.Cp",scale=-1)
plot(b4,pages=1)
## repeat using REML as in Wood 2011...
b5<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=poisson,
data=dat,method="REML")
plot(b5,pages=1)
## a binary example (see ?gam.models for large dataset version)...
dat <- gamSim(1,n=400,dist="binary",scale=.33)
lr.fit <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=binomial,
data=dat,method="REML")
## plot model components with truth overlaid in red
op <- par(mfrow=c(2,2))
fn <- c("f0","f1","f2","f3");xn <- c("x0","x1","x2","x3")
for (k in 1:4) {
plot(lr.fit,residuals=TRUE,select=k)
ff <- dat[[fn[k]]];xx <- dat[[xn[k]]]
ind <- sort.int(xx,index.return=TRUE)$ix
lines(xx[ind],(ff-mean(ff))[ind]*.33,col=2)
}
par(op)
anova(lr.fit)
lr.fit1 <- gam(y~s(x0)+s(x1)+s(x2),family=binomial,
data=dat,method="REML")
lr.fit2 <- gam(y~s(x1)+s(x2),family=binomial,
data=dat,method="REML")
AIC(lr.fit,lr.fit1,lr.fit2)
## For a Gamma example, see ?summary.gam...
## For inverse Gaussian, see ?rig
## now 2D smoothing...
eg <- gamSim(2,n=500,scale=.1)
attach(eg)
op <- par(mfrow=c(2,2),mar=c(4,4,1,1))
contour(truth$x,truth$z,truth$f) ## contour truth
b4 <- gam(y~s(x,z),data=data) ## fit model
fit1 <- matrix(predict.gam(b4,pr,se=FALSE),40,40)
contour(truth$x,truth$z,fit1) ## contour fit
persp(truth$x,truth$z,truth$f) ## persp truth
vis.gam(b4) ## persp fit
detach(eg)
par(op)
##################################################
## largish dataset example with user defined knots
##################################################
par(mfrow=c(2,2))
n <- 5000
eg <- gamSim(2,n=n,scale=.5)
attach(eg)
ind<-sample(1:n,200,replace=FALSE)
b5<-gam(y~s(x,z,k=40),data=data,
knots=list(x=data$x[ind],z=data$z[ind]))
## various visualizations
vis.gam(b5,theta=30,phi=30)
plot(b5)
plot(b5,scheme=1,theta=50,phi=20)
plot(b5,scheme=2)
par(mfrow=c(1,1))
## and a pure "knot based" spline of the same data
b6<-gam(y~s(x,z,k=64),data=data,knots=list(x= rep((1:8-0.5)/8,8),
z=rep((1:8-0.5)/8,rep(8,8))))
vis.gam(b6,color="heat",theta=30,phi=30)
## varying the default large dataset behaviour via `xt'
b7 <- gam(y~s(x,z,k=40,xt=list(max.knots=500,seed=2)),data=data)
vis.gam(b7,theta=30,phi=30)
detach(eg)
```
`Tweedie` GAM Tweedie families
-------------------------------
### Description
Tweedie families, designed for use with `<gam>` from the `mgcv` library. Restricted to variance function powers between 1 and 2. A useful alternative to `[quasi](../../stats/html/family)` when a full likelihood is desirable. `Tweedie` is for use with fixed `p`. `tw` is for use when `p` is to be estimated during fitting. For fixed `p` between 1 and 2 the Tweedie is an exponential family distribution with variance given by the mean to the power `p`.
`tw` is only usable with `<gam>` and `<bam>` but not `gamm`. `Tweedie` works with all three.
### Usage
```
Tweedie(p=1, link = power(0))
tw(theta = NULL, link = "log",a=1.01,b=1.99)
```
### Arguments
| | |
| --- | --- |
| `p` | the variance of an observation is proportional to its mean to the power `p`. `p` must be greater than 1 and less than or equal to 2. 1 would be Poisson, 2 is gamma. |
| `link` | The link function: one of `"log"`, `"identity"`, `"inverse"`, `"sqrt"`, or a `[power](../../stats/html/power)` link (`Tweedie` only). |
| `theta` | Related to the Tweedie power parameter by *p=(a+b\*exp(theta))/(1+exp(theta))*. If this is supplied as a positive value then it is taken as the fixed value for `p`. If it is a negative value then its absolute value is taken as the initial value for `p`. |
| `a` | lower limit on `p` for optimization. |
| `b` | upper limit on `p` for optimization. |
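The reparameterization described under `theta` can be checked directly. The helper functions below are illustrative, not part of `mgcv`; they just evaluate the stated mapping and its inverse with the default limits:

```r
## p = (a + b*exp(theta)) / (1 + exp(theta)) maps theta in (-Inf, Inf)
## to p in (a, b); default limits a = 1.01, b = 1.99
a <- 1.01; b <- 1.99
p.of.theta <- function(theta) (a + b * exp(theta)) / (1 + exp(theta))
theta.of.p <- function(p) log((p - a) / (b - p))
p.of.theta(0)    ## midpoint (a + b)/2 = 1.5
theta.of.p(1.5)  ## recovers theta = 0
```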
### Details
A Tweedie random variable with 1<p<2 is a sum of `N` gamma random variables where `N` has a Poisson distribution. The p=1 case is a generalization of a Poisson distribution and is a discrete distribution supported on integer multiples of the scale parameter. For 1<p<2 the distribution is supported on the positive reals with a point mass at zero. p=2 is a gamma distribution. As p gets very close to 1 the continuous distribution begins to converge on the discretely supported limit at p=1, and is therefore highly multimodal. See `[ldTweedie](ldtweedie)` for more on this behaviour.
`Tweedie` is based partly on the `[poisson](../../stats/html/family)` family, and partly on `tweedie` from the `statmod` package. It includes extra components to work with all `mgcv` GAM fitting methods as well as an `aic` function.
The Tweedie density involves a normalizing constant with no closed form, so this is evaluated using the series evaluation method of Dunn and Smyth (2005), with extensions to also compute the derivatives w.r.t. `p` and the scale parameter. Without restricting `p` to (1,2) the calculation of Tweedie densities is more difficult, and there does not currently seem to be an implementation which offers any benefit over `[quasi](../../stats/html/family)`. If you need this case then the `tweedie` package is the place to start.
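The near-discrete behaviour as `p` approaches 1 can be visualised with `[ldTweedie](ldtweedie)` (a sketch; here the first column of the returned matrix is taken as the log density):

```r
library(mgcv)
y <- seq(0.01, 4, length = 200)
ld1 <- ldTweedie(y, mu = 2, p = 1.05, phi = 1)  ## close to the discrete limit
ld2 <- ldTweedie(y, mu = 2, p = 1.5,  phi = 1)  ## smooth compound distribution
plot(y, exp(ld1[, 1]), type = "l")              ## spiky, near-multimodal density
lines(y, exp(ld2[, 1]), lty = 2)                ## unimodal continuous density
```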
### Value
For `Tweedie`, an object inheriting from class `family`, with additional elements
| | |
| --- | --- |
| `dvar` | the function giving the first derivative of the variance function w.r.t. `mu`. |
| `d2var` | the function giving the second derivative of the variance function w.r.t. `mu`. |
| `ls` | A function returning a 3 element array: the saturated log likelihood followed by its first 2 derivatives w.r.t. the scale parameter. |
For `tw`, an object of class `extended.family`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]).
### References
Dunn, P.K. and G.K. Smyth (2005) Series evaluation of Tweedie exponential dispersion model densities. Statistics and Computing 15:267-280
Tweedie, M. C. K. (1984). An index which distinguishes between some important exponential families. Statistics: Applications and New Directions. Proceedings of the Indian Statistical Institute Golden Jubilee International Conference (Eds. J. K. Ghosh and J. Roy), pp. 579-604. Calcutta: Indian Statistical Institute.
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`[ldTweedie](ldtweedie)`, `[rTweedie](rtweedie)`
### Examples
```
library(mgcv)
set.seed(3)
n<-400
## Simulate data...
dat <- gamSim(1,n=n,dist="poisson",scale=.2)
dat$y <- rTweedie(exp(dat$f),p=1.3,phi=.5) ## Tweedie response
## Fit a fixed p Tweedie, with wrong link ...
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=Tweedie(1.25,power(.1)),
data=dat)
plot(b,pages=1)
print(b)
## Same by approximate REML...
b1 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=Tweedie(1.25,power(.1)),
data=dat,method="REML")
plot(b1,pages=1)
print(b1)
## estimate p as part of fitting
b2 <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=tw(),
data=dat,method="REML")
plot(b2,pages=1)
print(b2)
rm(dat)
```
`gam.models` Specifying generalized additive models
----------------------------------------------------
### Description
This page is intended to provide some more information on how to specify GAMs. A GAM is a GLM in which the linear predictor depends, in part, on a sum of smooth functions of predictors and (possibly) linear functionals of smooth functions of (possibly dummy) predictors.
Specifically let *y\_i* denote an independent random variable with mean *mu\_i* and an exponential family distribution, or failing that a known mean variance relationship suitable for use of quasi-likelihood methods. Then the linear predictor of a GAM has a structure something like
*g(mu\_i)=X\_i b + f\_1(x\_1i,x\_2i) + f\_2(x\_3i) + L\_i f\_3(x\_4) + ...*
where *g* is a known smooth monotonic ‘link’ function, *X\_i b* is the parametric part of the linear predictor, the *x\_j* are predictor variables, the *f\_j* are smooth functions and *L\_i* is some linear functional of *f\_3*. There may of course be multiple linear functional terms, or none.
The key idea here is that the dependence of the response on the predictors can be represented as a parametric sub-model plus the sum of some (functionals of) smooth functions of one or more of the predictor variables. Thus the model is quite flexible relative to strictly parametric linear or generalized linear models, but still has much more structure than the completely general model that says that the response is just some smooth function of all the covariates.
Note one important point. In order for the model to be identifiable the smooth functions usually have to be constrained to have zero mean (usually taken over the set of covariate values). The constraint is needed if the term involving the smooth includes a constant function in its span. `gam` always applies such constraints unless there is a `by` variable present, in which case an assessment is made of whether the constraint is needed or not (see below).
The following sections discuss specifying model structures for `gam`. Specification of the distribution and link function is done using the `[family](../../stats/html/family)` argument to `<gam>` and works in the same way as for `[glm](../../stats/html/glm)`. This page therefore concentrates on the model formula for `gam`.
### Models with simple smooth terms
Consider the example model.
*g(mu\_i) = b\_0 + b\_1 x\_1i + b\_2 x\_2i + f1(x\_3i) + f2(x\_4i,x\_5i)*
where the response variables *y\_i* has expectation *mu\_i* and *g* is a link function.
The `gam` formula for this would be
`y ~ x1 + x2 + s(x3) + s(x4,x5)`.
This would use the default basis for the smooths (a thin plate regression spline basis for each), with automatic selection of the effective degrees of freedom for both smooths. The dimension of the smoothing basis is given a default value as well (the dimension of the basis sets an upper limit on the maximum possible degrees of freedom for the basis - the limit is typically one less than basis dimension). Full details of how to control smooths are given in `<s>` and `<te>`, and further discussion of basis dimension choice can be found in `<choose.k>`. For the moment suppose that we would like to change the basis of the first smooth to a cubic regression spline basis with a dimension of 20, while fixing the second term at 25 degrees of freedom. The appropriate formula would be:
`y ~ x1 + x2 + s(x3,bs="cr",k=20) + s(x4,x5,k=26,fx=TRUE)`.
The above assumes that *x\_4* and *x\_5* are naturally on similar scales (e.g. they might be co-ordinates), so that isotropic smoothing is appropriate. If this assumption is false then tensor product smoothing might be better (see `<te>`).
`y ~ x1 + x2 + s(x3) + te(x4,x5)`
would generate a tensor product smooth of *x\_4* and *x\_5*. By default this smooth would have basis dimension 25 and use cubic regression spline marginals. Varying the defaults is easy. For example
`y ~ x1 + x2 + s(x3) + te(x4,x5,bs=c("cr","ps"),k=c(6,7))`
specifies that the tensor product should use a rank 6 cubic regression spline marginal and a rank 7 P-spline marginal to create a smooth with basis dimension 42.
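A runnable version of that last formula (illustrative data via `gamSim`, with `x1` and `x2` standing in for *x\_4* and *x\_5*):

```r
library(mgcv)
set.seed(0)
dat <- gamSim(1, n = 300, scale = 2)
## rank 6 "cr" marginal x rank 7 "ps" marginal: basis dimension 6*7 = 42
bt <- gam(y ~ s(x0) + te(x1, x2, bs = c("cr", "ps"), k = c(6, 7)), data = dat)
summary(bt)
```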
### Nested terms/functional ANOVA
Sometimes it is interesting to specify smooth models with a main effects + interaction structure such as
*E(y) = f1 (x) + f2(z) + f3(x,z)*
or
*E(y) = f1(x) + f2(z) + f3(v) + f4(x,z) + f5(x,v) + f6(z,v) + f7(x,z,v)*
for example. Such models should be set up using `[ti](te)` terms in the model formula. For example:
`y ~ ti(x) + ti(z) + ti(x,z)`, or
`y ~ ti(x) + ti(z) + ti(v) + ti(x,z) + ti(x,v) + ti(z,v)+ti(x,z,v)`.
The `ti` terms produce interactions with the component main effects excluded appropriately. (There is in fact no need to use `ti` terms for the main effects here, `s` terms could also be used.)
`gam` allows nesting (or ‘overlap’) of `te` and `s` smooths, and automatically generates side conditions to make such models identifiable, but the resulting models are much less stable and interpretable than those constructed using `ti` terms.
### ‘by’ variables
`by` variables are the means for constructing ‘varying-coefficient models’ (geographic regression models) and for letting smooths ‘interact’ with factors or parametric terms. They are also the key to specifying general linear functionals of smooths.
The `<s>` and `<te>` terms used to specify smooths accept an argument `by`, which is a numeric or factor variable of the same dimension as the covariates of the smooth. If a `by` variable is numeric, then its *ith* element multiplies the *ith* row of the model matrix corresponding to the smooth term concerned.
Factor smooth interactions (see also `[factor.smooth.interaction](smooth.construct.fs.smooth.spec)`). If a `by` variable is a `[factor](../../base/html/factor)` then it generates an indicator vector for each level of the factor, unless it is an `[ordered](../../base/html/factor)` factor. In the non-ordered case, the model matrix for the smooth term is then replicated for each factor level, and each copy has its rows multiplied by the corresponding rows of its indicator variable. The smoothness penalties are also duplicated for each factor level. In short a different smooth is generated for each factor level (the `id` argument to `<s>` and `<te>` can be used to force all such smooths to have the same smoothing parameter). `[ordered](../../base/html/factor)` `by` variables are handled in the same way, except that no smooth is generated for the first level of the ordered factor (see `b3` example below). This is useful for setting up identifiable models when the same smooth occurs more than once in a model, with different factor `by` variables.
As an example, consider the model
*E(y\_i) = b\_0 + f(x\_i)z\_i*
where *f* is a smooth function, and *z\_i* is a numeric variable. The appropriate formula is:
`y ~ s(x,by=z)`
- the `by` argument ensures that the smooth function gets multiplied by covariate `z`. Note that when using factor by variables, centering constraints are applied to the smooths, which usually means that the by variable should be included as a parametric term, as well.
The example code below also illustrates the use of factor `by` variables.
`by` variables may be supplied as numeric matrices as part of specifying general linear functional terms.
If a `by` variable is present and numeric (rather than a factor) then the corresponding smooth is only subjected to an identifiability constraint if (i) the `by` variable is a constant vector, or, (ii) for a matrix `by` variable, `L`, if `L%*%rep(1,ncol(L))` is constant or (iii) if a user defined smooth constructor supplies an identifiability constraint explicitly, and that constraint has an attribute `"always.apply"`.
### Linking smooths with ‘id’
It is sometimes desirable to insist that different smooth terms have the same degree of smoothness. This can be done by using the `id` argument to `<s>` or `<te>` terms. Smooths which share an `id` will have the same smoothing parameter. Really this only makes sense if the smooths use the same basis functions, and the default behaviour is to force this to happen: all smooths sharing an `id` have the same basis functions as the first smooth occurring with that `id`. Note that if you want exactly the same function for each smooth, then this is best achieved by making use of the summation convention covered under ‘linear functional terms’.
As an example suppose that *E(y\_i)=mu\_i* and
*g(mu\_i) = f1(x\_1i) + f2(x\_2i,x\_3i) + f3(x\_4i)*
but that *f1* and *f3* should have the same smoothing parameters (and *x\_2* and *x\_3* are on different scales). Then the `gam` formula
`y ~ s(x1,id=1) + te(x2,x3) + s(x4,id=1)`
would achieve the desired result. `id` can be numbers or character strings. Giving an `id` to a term with a factor `by` variable causes the smooths at each level of the factor to have the same smoothing parameter.
Smooth term `id`s are not supported by `gamm`.
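A sketch of a shared-`id` fit on simulated data (here `x0` and `x3` stand in for *x\_1* and *x\_4* in the formula above):

```r
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 300, scale = 2)
## s(x0) and s(x3) share one smoothing parameter via the common id
b <- gam(y ~ s(x0, id = 1) + te(x1, x2) + s(x3, id = 1), data = dat)
b$sp  ## the two id = 1 terms are controlled by a single smoothing parameter
```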
### Linear functional terms
General linear functional terms have a long history in the spline literature including in the penalized GLM context (see e.g. Wahba 1990). Such terms encompass varying coefficient models/ geographic regression, functional GLMs (i.e. GLMs with functional predictors), GLASS models, etc, and allow smoothing with respect to aggregated covariate values, for example.
Such terms are implemented in `mgcv` using a simple ‘summation convention’ for smooth terms: If the covariates of a smooth are supplied as matrices, then summation of the evaluated smooth over the columns of the matrices is implied. Each covariate matrix and any `by` variable matrix must be of the same dimension. Consider, for example the term
`s(X,Z,by=L)`
where `X`, `Z` and `L` are *n by p* matrices. Let *f* denote the thin plate regression spline specified. The resulting contribution to the *ith* element of the linear predictor is
*sum\_j^p L\_ij f(X\_ij,Z\_ij)*
If no `L` is supplied then all its elements are taken as 1. In R code terms, let `F` denote the *n by p* matrix obtained by evaluating the smooth at the values in `X` and `Z`. Then the contribution of the term to the linear predictor is `rowSums(L*F)` (note that it's element by element multiplication here!).
The summation convention applies to `te` terms as well as `s` terms. More details and examples are provided in `<linear.functional.terms>`.
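A minimal sketch of the summation convention (simulated functional covariate; all names are illustrative): each response depends on a weighted sum of a smooth evaluated over the columns of a matrix covariate, and supplying the matrices to `s` recovers the smooth.

```r
library(mgcv)
set.seed(7)
n <- 200; p <- 10
X <- matrix(runif(n * p), n, p)  ## p covariate values per response
L <- matrix(1/p, n, p)           ## weights: average f over the columns
f <- function(x) sin(2 * pi * x)
y <- rowSums(L * f(X)) + rnorm(n) * 0.1
b <- gam(y ~ s(X, by = L))       ## summation convention fit
plot(b)                          ## estimate of f
```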
### Random effects
Random effects can be added to `gam` models using `s(...,bs="re")` terms (see `<smooth.construct.re.smooth.spec>`), or the `paraPen` argument to `<gam>` covered below. See `<gam.vcomp>`, `<random.effects>` and `<smooth.construct.re.smooth.spec>` for further details. An alternative is to use the approach of `<gamm>`.
### Penalizing the parametric terms
In case the ability to add smooth classes, smooth identities, `by` variables and the summation convention are still not sufficient to implement exactly the penalized GLM that you require, `<gam>` also allows you to penalize the parametric terms in the model formula. This is mostly useful in allowing one or more matrix terms to be included in the formula, along with a sequence of quadratic penalty matrices for each.
Suppose that you have set up a model matrix *X*, and want to penalize the corresponding coefficients, *b* with two penalties *b'S1 b* and *b'S2 b*. Then something like the following would be appropriate:
`gam(y ~ X - 1,paraPen=list(X=list(S1,S2)))`
The `paraPen` argument should be a list with elements having names corresponding to the terms being penalized. Each element of `paraPen` is itself a list, with optional elements `L`, `rank` and `sp`: all other elements must be penalty matrices. If present, `rank` is a vector giving the rank of each penalty matrix (if absent this is determined numerically). `L` is a matrix that maps underlying log smoothing parameters to the log smoothing parameters that actually multiply the individual quadratic penalties: taken as the identity if not supplied. `sp` is a vector of (underlying) smoothing parameter values: positive values are taken as fixed, negative to signal that the smoothing parameter should be estimated. Taken as all negative if not supplied.
An obvious application of `paraPen` is to incorporate random effects, and an example of this is provided below. In this case the supplied penalty matrices will be (generalized) inverse covariance matrices for the random effects — i.e. precision matrices. The final estimate of the covariance matrix corresponding to one of these penalties is given by the (generalized) inverse of the penalty matrix multiplied by the estimated scale parameter and divided by the estimated smoothing parameter for the penalty. For example, if you use an identity matrix to penalize some coefficients that are to be viewed as i.i.d. Gaussian random effects, then their estimated variance will be the estimated scale parameter divided by the estimate of the smoothing parameter, for this penalty. See the ‘rail’ example below.
P-values for penalized parametric terms should be treated with caution. If you must have them, then use the option `freq=TRUE` in `<anova.gam>` and `<summary.gam>`, which will tend to give reasonable results for random effects implemented this way, but not for terms with a rank deficient penalty (or penalties with a wide eigen-spectrum).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wahba (1990) Spline Models of Observational Data SIAM.
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### Examples
```
require(mgcv)
set.seed(10)
## simulate data from y = f(x2)*x1 + error
dat <- gamSim(3,n=400)
b<-gam(y ~ s(x2,by=x1),data=dat)
plot(b,pages=1)
summary(b)
## Factor `by' variable example (with a spurious covariate x0)
## simulate data...
dat <- gamSim(4)
## fit model...
b <- gam(y ~ fac+s(x2,by=fac)+s(x0),data=dat)
plot(b,pages=1)
summary(b)
## note that the preceding fit is the same as....
b1<-gam(y ~ s(x2,by=as.numeric(fac==1))+s(x2,by=as.numeric(fac==2))+
s(x2,by=as.numeric(fac==3))+s(x0)-1,data=dat)
## ... the `-1' is because the intercept is confounded with the
## *uncentred* smooths here.
plot(b1,pages=1)
summary(b1)
## repeat forcing all s(x2) terms to have the same smoothing param
## (not a very good idea for these data!)
b2 <- gam(y ~ fac+s(x2,by=fac,id=1)+s(x0),data=dat)
plot(b2,pages=1)
summary(b2)
## now repeat with a single reference level smooth, and
## two `difference' smooths...
dat$fac <- ordered(dat$fac)
b3 <- gam(y ~ fac+s(x2)+s(x2,by=fac)+s(x0),data=dat,method="REML")
plot(b3,pages=1)
summary(b3)
rm(dat)
## An example of a simple random effects term implemented via
## penalization of the parametric part of the model...
dat <- gamSim(1,n=400,scale=2) ## simulate 4 term additive truth
## Now add some random effects to the simulation. Response is
## grouped into one of 20 groups by `fac' and each group has a
## random effect added....
fac <- as.factor(sample(1:20,400,replace=TRUE))
dat$X <- model.matrix(~fac-1)
b <- rnorm(20)*.5
dat$y <- dat$y + dat$X%*%b
## now fit appropriate random effect model...
PP <- list(X=list(rank=20,diag(20)))
rm <- gam(y~ X+s(x0)+s(x1)+s(x2)+s(x3),data=dat,paraPen=PP)
plot(rm,pages=1)
## Get estimated random effects standard deviation...
sig.b <- sqrt(rm$sig2/rm$sp[1]);sig.b
## a much simpler approach uses "re" terms...
rm1 <- gam(y ~ s(fac,bs="re")+s(x0)+s(x1)+s(x2)+s(x3),data=dat,method="ML")
gam.vcomp(rm1)
## Simple comparison with lme, using Rail data.
## See ?random.effects for a simpler method
require(nlme)
b0 <- lme(travel~1,data=Rail,~1|Rail,method="ML")
Z <- model.matrix(~Rail-1,data=Rail,
contrasts.arg=list(Rail="contr.treatment"))
b <- gam(travel~Z,data=Rail,paraPen=list(Z=list(diag(6))),method="ML")
b0
(b$reml.scale/b$sp)^.5 ## `gam' ML estimate of Rail sd
b$reml.scale^.5 ## `gam' ML estimate of residual sd
b0 <- lme(travel~1,data=Rail,~1|Rail,method="REML")
Z <- model.matrix(~Rail-1,data=Rail,
contrasts.arg=list(Rail="contr.treatment"))
b <- gam(travel~Z,data=Rail,paraPen=list(Z=list(diag(6))),method="REML")
b0
(b$reml.scale/b$sp)^.5 ## `gam' REML estimate of Rail sd
b$reml.scale^.5 ## `gam' REML estimate of residual sd
################################################################
## Approximate large dataset logistic regression for rare events
## based on subsampling the zeroes, and adding an offset to
## approximately allow for this.
## Doing the same thing, but upweighting the sampled zeroes
## leads to problems with smoothness selection, and CIs.
################################################################
n <- 50000 ## simulate n data
dat <- gamSim(1,n=n,dist="binary",scale=.33)
p <- binomial()$linkinv(dat$f-6) ## make 1's rare
dat$y <- rbinom(p,1,p) ## re-simulate rare response
## Now sample all the 1's but only proportion S of the 0's
S <- 0.02 ## sampling fraction of zeroes
dat <- dat[dat$y==1 | runif(n) < S,] ## sampling
## Create offset based on total sampling fraction
dat$s <- rep(log(nrow(dat)/n),nrow(dat))
lr.fit <- gam(y~s(x0,bs="cr")+s(x1,bs="cr")+s(x2,bs="cr")+s(x3,bs="cr")+
offset(s),family=binomial,data=dat,method="REML")
## plot model components with truth overlaid in red
op <- par(mfrow=c(2,2))
fn <- c("f0","f1","f2","f3");xn <- c("x0","x1","x2","x3")
for (k in 1:4) {
plot(lr.fit,select=k,scale=0)
ff <- dat[[fn[k]]];xx <- dat[[xn[k]]]
ind <- sort.int(xx,index.return=TRUE)$ix
lines(xx[ind],(ff-mean(ff))[ind]*.33,col=2)
}
par(op)
rm(dat)
## A Gamma example, by modifying `gamSim' output...
dat <- gamSim(1,n=400,dist="normal",scale=1)
dat$f <- dat$f/4 ## true linear predictor
Ey <- exp(dat$f);scale <- .5 ## mean and GLM scale parameter
## Note that `shape' and `scale' in `rgamma' are almost
## opposite terminology to that used with GLM/GAM...
dat$y <- rgamma(Ey*0,shape=1/scale,scale=Ey*scale)
bg <- gam(y~ s(x0)+ s(x1)+s(x2)+s(x3),family=Gamma(link=log),
data=dat,method="REML")
plot(bg,pages=1,scheme=1)
```
`fixDependence` Detect linear dependencies of one matrix on another
--------------------------------------------------------------------
### Description
Identifies columns of a matrix `X2` which are linearly dependent on columns of a matrix `X1`. Primarily of use in setting up identifiability constraints for nested GAMs.
### Usage
```
fixDependence(X1,X2,tol=.Machine$double.eps^.5,rank.def=0,strict=FALSE)
```
### Arguments
| | |
| --- | --- |
| `X1` | A matrix. |
| `X2` | A matrix, the columns of which may be partially linearly dependent on the columns of `X1`. |
| `tol` | The tolerance to use when assessing linear dependence. |
| `rank.def` | If the degree of rank deficiency in `X2`, given `X1`, is known, then it can be supplied here, and `tol` is then ignored. Unused unless positive and not greater than the number of columns in `X2`. |
| `strict` | if `TRUE` then only columns individually dependent on `X1` are detected, if `FALSE` then enough columns to make the reduced `X2` full rank and independent of `X1` are detected. |
### Details
The algorithm uses a simple approach based on QR decomposition: see Wood (2017, section 5.6.3) for details.
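As a rough sketch of that approach (illustration only — the actual algorithm differs in detail; see Wood, 2017, section 5.6.3), the component of `X2` orthogonal to the column space of `X1` can be obtained with base R's `qr.resid`; columns whose orthogonal component is numerically zero are linearly dependent on `X1`:

```
## Sketch only: detect columns of X2 individually dependent on X1
## by checking whether their residual, after projecting out the
## column space of X1, is numerically zero.
set.seed(2)
X1 <- matrix(runif(20), 10, 2)
X2 <- cbind(X1[, 1] + 2 * X1[, 2], runif(10)) ## column 1 is dependent
R2 <- qr.resid(qr(X1), X2) ## part of X2 orthogonal to X1
colSums(R2^2) / colSums(X2^2) < .Machine$double.eps^.5 ## TRUE FALSE
```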
### Value
A vector of the columns of `X2` which are linearly dependent on columns of `X1` (or which need to be deleted to achieve independence and full rank if `strict==FALSE`). `NULL` if the two matrices are independent.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### Examples
```
library(mgcv)
n<-20;c1<-4;c2<-7
X1<-matrix(runif(n*c1),n,c1)
X2<-matrix(runif(n*c2),n,c2)
X2[,3]<-X1[,2]+X2[,4]*.1
X2[,5]<-X1[,1]*.2+X1[,2]*.04
fixDependence(X1,X2)
fixDependence(X1,X2,strict=TRUE)
```
`magic` Stable Multiple Smoothing Parameter Estimation by GCV or UBRE
----------------------------------------------------------------------
### Description
Function to efficiently estimate smoothing parameters in generalized ridge regression problems with multiple (quadratic) penalties, by GCV or UBRE. The function uses Newton's method in multi-dimensions, backed up by steepest descent to iteratively adjust the smoothing parameters for each penalty (one penalty may have a smoothing parameter fixed at 1).
For maximal numerical stability the method is based on orthogonal decomposition methods, and attempts to deal with numerical rank deficiency gracefully using a truncated singular value decomposition approach.
### Usage
```
magic(y,X,sp,S,off,L=NULL,lsp0=NULL,rank=NULL,H=NULL,C=NULL,
w=NULL,gamma=1,scale=1,gcv=TRUE,ridge.parameter=NULL,
control=list(tol=1e-6,step.half=25,rank.tol=
.Machine$double.eps^0.5),extra.rss=0,n.score=length(y),nthreads=1)
```
### Arguments
| | |
| --- | --- |
| `y` | is the response data vector. |
| `X` | is the model matrix (more columns than rows are allowed). |
| `sp` | is the array of smoothing parameters. The vector `L%*%log(sp)
+ lsp0` contains the logs of the smoothing parameters that actually multiply the penalty matrices stored in `S` (`L` is taken as the identity if `NULL`). Any `sp` values that are negative are autoinitialized, otherwise they are taken as supplying starting values. A supplied starting value will be reset to a default starting value if the gradient of the GCV/UBRE score is too small at the supplied value. |
| `S` | is a list of of penalty matrices. `S[[i]]` is the ith penalty matrix, but note that it is not stored as a full matrix, but rather as the smallest square matrix including all the non-zero elements of the penalty matrix. Element 1,1 of `S[[i]]` occupies element `off[i]`, `off[i]` of the ith penalty matrix. Each `S[[i]]` must be positive semi-definite. Set to `list()` if there are no smoothing parameters to be estimated. |
| `off` | is an array indicating the first parameter in the parameter vector that is penalized by the penalty involving `S[[i]]`. |
| `L` | is a matrix mapping `log(sp)` to the log smoothing parameters that actually multiply the penalties defined by the elements of `S`. Taken as the identity, if `NULL`. See above under `sp`. |
| `lsp0` | If `L` is not `NULL` this is a vector of constants in the linear transformation from `log(sp)` to the actual log smoothing parameters. So the logs of the smoothing parameters multiplying the `S[[i]]` are given by `L%*%log(sp) + lsp0`. Taken as 0 if `NULL`. |
| `rank` | is an array specifying the ranks of the penalties. This is useful, but not essential, for forming square roots of the penalty matrices. |
| `H` | is the optional offset penalty - i.e. a penalty with a smoothing parameter fixed at 1. This is useful for allowing regularization of the estimation process, fixed smoothing penalties etc. |
| `C` | is the optional matrix specifying any linear equality constraints on the fitting problem. If *b* is the parameter vector then the parameters are forced to satisfy *Cb=0*. |
| `w` | the regression weights. If this is a matrix then it is taken as being the square root of the inverse of the covariance matrix of `y`, specifically *V\_y^{-1}=w'w*. If `w` is an array then it is taken as the diagonal of this matrix, or simply the weight for each element of `y`. See below for an example using this. |
| `gamma` | is an inflation factor for the model degrees of freedom in the GCV or UBRE score. |
| `scale` | is the scale parameter for use with UBRE. |
| `gcv` | should be set to `TRUE` if GCV is to be used, `FALSE` for UBRE. |
| `ridge.parameter` | It is sometimes useful to apply a ridge penalty to the fitting problem, penalizing the parameters in the constrained space directly. Setting this parameter to a value greater than zero will cause such a penalty to be used, with the magnitude given by the parameter value. |
| `control` | is a list of iteration control constants with the following elements: `tol`, the tolerance to use in judging convergence; `step.half`, if a trial step fails then the method tries halving it up to a maximum of `step.half` times; and `rank.tol`, a constant used to test for numerical rank deficiency of the problem: basically any singular value less than `rank.tol` multiplied by the largest singular value of the problem is set to zero. |
| `extra.rss` | is a constant to be added to the residual sum of squares (squared norm) term in the calculation of the GCV, UBRE and scale parameter estimate. In conjunction with `n.score`, this is useful for certain methods for dealing with very large data sets. |
| `n.score` | number to use as the number of data in GCV/UBRE score calculation: usually the actual number of data, but there are methods for dealing with very large datasets that change this. |
| `nthreads` | `magic` can make use of multiple threads if this is set to >1. |
### Details
The method is a computationally efficient means of applying GCV or UBRE (often approximately AIC) to the problem of smoothing parameter selection in generalized ridge regression problems of the form:
*min ||W(Xb-y)||^2 + b'Hb + theta\_1 b'S\_1 b + theta\_2 b'S\_2 b + . . .*
possibly subject to constraints *Cb=0*. *X* is a design matrix, *b* a parameter vector, *y* a data vector, *W* a weight matrix, *S\_i* a positive semi-definite matrix of coefficients defining the ith penalty with associated smoothing parameter *theta\_i*, *H* is the positive semi-definite offset penalty matrix and *C* a matrix of coefficients defining any linear equality constraints on the problem. *X* need not be of full column rank.
The *theta\_i* are chosen to minimize either the GCV score:
*V\_g = n ||W(y-Ay)||^2/[tr(I - g A)]^2*
or the UBRE score:
*V\_u =||W(y-Ay||^2/n - 2 s tr(I - g A)/n + s*
where *g* is `gamma` the inflation factor for degrees of freedom (usually set to 1) and *s* is `scale`, the scale parameter. *A* is the hat matrix (influence matrix) for the fitting problem (i.e the matrix mapping data to fitted values). Dependence of the scores on the smoothing parameters is through *A*.
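For illustration only (a sketch, not `magic`'s internals — it assumes a single penalty, *W = I* and `gamma = 1`), the GCV score above can be computed directly for a simple ridge problem:

```
## Direct computation of the GCV score V_g for a one-penalty ridge
## problem, purely to illustrate the criterion that magic minimizes.
set.seed(1)
n <- 100; X <- cbind(1, runif(n))
y <- X %*% c(1, 2) + rnorm(n)
S <- diag(c(0, 1)) ## penalize the slope only
gcv <- function(theta) {
  A <- X %*% solve(crossprod(X) + theta * S, t(X)) ## influence matrix
  n * sum((y - A %*% y)^2) / (n - sum(diag(A)))^2  ## V_g with gamma = 1
}
sapply(c(.01, 1, 100), gcv) ## score as the penalty is varied
```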
The method operates by Newton or steepest descent updates of the logs of the *theta\_i*. A key aspect of the method is stable and economical calculation of the first and second derivatives of the scores w.r.t. the log smoothing parameters. Because the GCV/UBRE scores are flat w.r.t. very large or very small *theta\_i*, it's important to get good starting parameters, and to be careful not to step into a flat region of the smoothing parameter space. For this reason the algorithm rescales any Newton step that would result in a *log(theta\_i)* change of more than 5. Newton steps are only used if the Hessian of the GCV/UBRE is positive definite, otherwise steepest descent is used. Similarly steepest descent is used if the Newton step has to be contracted too far (indicating that the quadratic model underlying Newton is poor). All initial steepest descent steps are scaled so that their largest component is 1. However a step is calculated, it is never expanded if it is successful (to avoid flat portions of the objective), but steps are successively halved if they do not decrease the GCV/UBRE score, until they do, or the direction is deemed to have failed. (Given the smoothing parameters the optimal *b* parameters are easily found.)
The method is coded in `C` with matrix factorizations performed using LINPACK and LAPACK routines.
### Value
The function returns a list with the following items:
| | |
| --- | --- |
| `b` | The best fit parameters given the estimated smoothing parameters. |
| `scale` | the estimated (GCV) or supplied (UBRE) scale parameter. |
| `score` | the minimized GCV or UBRE score. |
| `sp` | an array of the estimated smoothing parameters. |
| `sp.full` | an array of the smoothing parameters that actually multiply the elements of `S` (same as `sp` if `L` was `NULL`). This is `exp(L%*%log(sp))`. |
| `rV` | a factored form of the parameter covariance matrix. The (Bayesian) covariance matrix of the parameters `b` is given by `rV%*%t(rV)*scale`. |
| `gcv.info` | is a list of information about the performance of the method with the following elements: `full.rank`, the apparent rank of the problem (number of parameters less number of equality constraints); `rank`, the estimated actual rank of the problem (at the final iteration of the method); `fully.converged`, `TRUE` if the method converged by satisfying the convergence criteria, `FALSE` if it converged by failing to decrease the score along the search direction; `hess.pos.def`, `TRUE` if the Hessian of the UBRE or GCV score was positive definite at convergence; `iter`, the number of Newton/steepest descent iterations taken; `score.calls`, the number of times that the GCV/UBRE score had to be evaluated; `rms.grad`, the root mean square of the gradient of the UBRE/GCV score w.r.t. the smoothing parameters; and `R`, the factor R from the QR decomposition of the weighted model matrix — this is un-pivoted so that column order corresponds to `X`, so it may not be upper triangular. |
Note that some further useful quantities can be obtained using `<magic.post.proc>`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2004) Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Amer. Statist. Ass. 99:673-686
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<magic.post.proc>`,`<gam>`
### Examples
```
## Use `magic' for a standard additive model fit ...
library(mgcv)
set.seed(1);n <- 200;sig <- 1
dat <- gamSim(1,n=n,scale=sig)
k <- 30
## set up additive model
G <- gam(y~s(x0,k=k)+s(x1,k=k)+s(x2,k=k)+s(x3,k=k),fit=FALSE,data=dat)
## fit using magic (and gam default tolerance)
mgfit <- magic(G$y,G$X,G$sp,G$S,G$off,rank=G$rank,
control=list(tol=1e-7,step.half=15))
## and fit using gam as consistency check
b <- gam(G=G)
mgfit$sp;b$sp # compare smoothing parameter estimates
edf <- magic.post.proc(G$X,mgfit,G$w)$edf # get e.d.f. per param
range(edf-b$edf) # compare
## p>n example... fit model to first 100 data only, so more
## params than data...
mgfit <- magic(G$y[1:100],G$X[1:100,],G$sp,G$S,G$off,rank=G$rank)
edf <- magic.post.proc(G$X[1:100,],mgfit,G$w[1:100])$edf
## constrain first two smooths to have identical smoothing parameters
L <- diag(3);L <- rbind(L[1,],L)
mgfit <- magic(G$y,G$X,rep(-1,3),G$S,G$off,L=L,rank=G$rank,C=G$C)
## Now a correlated data example ...
library(nlme)
## simulate truth
set.seed(1);n<-400;sig<-2
x <- 0:(n-1)/(n-1)
f <- 0.2*x^11*(10*(1-x))^6+10*(10*x)^3*(1-x)^10
## produce scaled covariance matrix for AR1 errors...
V <- corMatrix(Initialize(corAR1(.6),data.frame(x=x)))
Cv <- chol(V) # t(Cv)%*%Cv=V
## Simulate AR1 errors ...
e <- t(Cv)%*%rnorm(n,0,sig) # so cov(e) = V * sig^2
## Observe truth + AR1 errors
y <- f + e
## GAM ignoring correlation
par(mfrow=c(1,2))
b <- gam(y~s(x,k=20))
plot(b);lines(x,f-mean(f),col=2);title("Ignoring correlation")
## Fit smooth, taking account of *known* correlation...
w <- solve(t(Cv)) # V^{-1} = w'w
## Use `gam' to set up model for fitting...
G <- gam(y~s(x,k=20),fit=FALSE)
## fit using magic, with weight *matrix*
mgfit <- magic(G$y,G$X,G$sp,G$S,G$off,rank=G$rank,C=G$C,w=w)
## Modify previous gam object using new fit, for plotting...
mg.stuff <- magic.post.proc(G$X,mgfit,w)
b$edf <- mg.stuff$edf;b$Vp <- mg.stuff$Vb
b$coefficients <- mgfit$b
plot(b);lines(x,f-mean(f),col=2);title("Known correlation")
```
`ocat` GAM ordered categorical family
--------------------------------------
### Description
Family for use with `<gam>` or `<bam>`, implementing regression for ordered categorical data. A linear predictor provides the expected value of a latent variable following a logistic distribution. The probability of this latent variable lying between certain cut-points provides the probability of the ordered categorical variable being of the corresponding category. The cut-points are estimated alongside the model smoothing parameters (using the same criterion). The observed categories are coded 1, 2, 3, ... up to the number of categories.
### Usage
```
ocat(theta=NULL,link="identity",R=NULL)
```
### Arguments
| | |
| --- | --- |
| `theta` | cut point parameter vector (dimension `R-2`). If supplied and all positive, then taken to be the cut point increments (first cut point is fixed at -1). If any are negative then absolute values are taken as starting values for cutpoint increments. |
| `link` | The link function: only `"identity"` allowed at present (possibly for ever). |
| `R` | the number of categories. |
### Details
Such cumulative threshold models are only identifiable up to an intercept, or one of the cut points. Rather than remove the intercept, `ocat` simply sets the first cut point to -1. Use `<predict.gam>` with `type="response"` to get the predicted probabilities in each category.
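For illustration only (a sketch of the cumulative threshold idea, not `ocat` internals; `alpha` here includes the `-Inf` and `Inf` end points), category probabilities are differences of the logistic CDF evaluated at the cut points:

```
## P(category j) = P(alpha[j] < u <= alpha[j+1]) where the latent u
## is logistic with location eta, i.e. differences of plogis(alpha - eta).
cat.prob <- function(eta, alpha) diff(plogis(alpha - eta))
alpha <- c(-Inf, -1, 0, 5, Inf) ## example cut points (first fixed at -1)
cat.prob(0, alpha) ## one probability per category, summing to 1
```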
### Value
An object of class `extended.family`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv)
## Simulate some ordered categorical data...
set.seed(3);n<-400
dat <- gamSim(1,n=n)
dat$f <- dat$f - mean(dat$f)
alpha <- c(-Inf,-1,0,5,Inf)
R <- length(alpha)-1
y <- dat$f
u <- runif(n)
u <- dat$f + log(u/(1-u))
for (i in 1:R) {
y[u > alpha[i]&u <= alpha[i+1]] <- i
}
dat$y <- y
## plot the data...
par(mfrow=c(2,2))
with(dat,plot(x0,y));with(dat,plot(x1,y))
with(dat,plot(x2,y));with(dat,plot(x3,y))
## fit ocat model to data...
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),family=ocat(R=R),data=dat)
b
plot(b,pages=1)
gam.check(b)
summary(b)
b$family$getTheta(TRUE) ## the estimated cut points
## predict probabilities of being in each category
predict(b,dat[1:2,],type="response",se=TRUE)
```
`bam.update` Update a strictly additive bam model for new data.
----------------------------------------------------------------
### Description
Gaussian with identity link models fitted by `<bam>` can be efficiently updated as new data becomes available, by simply updating the QR decomposition on which estimation is based, and re-optimizing the smoothing parameters, starting from the previous estimates. This routine implements this.
### Usage
```
bam.update(b,data,chunk.size=10000)
```
### Arguments
| | |
| --- | --- |
| `b` | A `gam` object fitted by `<bam>` and representing a strictly additive model (i.e. `gaussian` errors, `identity` link). |
| `data` | Extra data to augment the original data used to obtain `b`. Must include a `weights` column if the original fit was weighted and an `AR.start` column if `AR.start` was non-`NULL` in the original fit. |
| `chunk.size` | size of subsets of data to process in one go when getting fitted values. |
### Details
`bam.update` updates the QR decomposition of the (weighted) model matrix of the GAM represented by `b` to take account of the new data. The orthogonal factor multiplied by the response vector is also updated. Given these updates the model and smoothing parameters can be re-estimated, as if the whole dataset (original and the new data) had been fitted in one go. The function will use the same AR1 model for the residuals as that employed in the original model fit (see `rho` parameter of `<bam>`).
Note that there may be small numerical differences in fit between fitting the data all at once, and fitting in stages by updating, if the smoothing bases used have any of their details set with reference to the data (e.g. default knot locations).
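The updating idea can be sketched with base R's QR tools (an illustration of the principle only, not `bam.update`'s actual internals): stacking the old `R` factor and transformed response on top of the new rows and re-decomposing reproduces the all-at-once least squares fit.

```
## Sketch: QR updating for least squares as new rows (X1, y1) arrive.
set.seed(1)
X0 <- matrix(rnorm(50), 10, 5); y0 <- rnorm(10) ## original data
X1 <- matrix(rnorm(25), 5, 5);  y1 <- rnorm(5)  ## new data
qr0 <- qr(X0)
R0 <- qr.R(qr0); f0 <- qr.qty(qr0, y0)[1:5] ## R and Q'y from old fit
qru <- qr(rbind(R0, X1)) ## update by decomposing the stacked matrix
fu <- qr.qty(qru, c(f0, y1))[1:5]
## coefficients agree with fitting all the data in one go...
range(backsolve(qr.R(qru), fu) - qr.coef(qr(rbind(X0, X1)), c(y0, y1)))
```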
### Value
An object of class `"gam"` as described in `[gamObject](gamobject)`.
### WARNINGS
AIC computation does not currently take account of AR model, if used.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<mgcv-package>`, `<bam>`
### Examples
```
library(mgcv)
## following is not *very* large, for obvious reasons...
set.seed(8)
n <- 5000
dat <- gamSim(1,n=n,dist="normal",scale=5)
dat[c(50,13,3000,3005,3100),]<- NA
dat1 <- dat[(n-999):n,]
dat0 <- dat[1:(n-1000),]
bs <- "ps";k <- 20
method <- "GCV.Cp"
b <- bam(y ~ s(x0,bs=bs,k=k)+s(x1,bs=bs,k=k)+s(x2,bs=bs,k=k)+
s(x3,bs=bs,k=k),data=dat0,method=method)
b1 <- bam.update(b,dat1)
b2 <- bam.update(bam.update(b,dat1[1:500,]),dat1[501:1000,])
b3 <- bam(y ~ s(x0,bs=bs,k=k)+s(x1,bs=bs,k=k)+s(x2,bs=bs,k=k)+
s(x3,bs=bs,k=k),data=dat,method=method)
b1;b2;b3
## example with AR1 errors...
e <- rnorm(n)
for (i in 2:n) e[i] <- e[i-1]*.7 + e[i]
dat$y <- dat$f + e*3
dat[c(50,13,3000,3005,3100),]<- NA
dat1 <- dat[(n-999):n,]
dat0 <- dat[1:(n-1000),]
b <- bam(y ~ s(x0,bs=bs,k=k)+s(x1,bs=bs,k=k)+s(x2,bs=bs,k=k)+
s(x3,bs=bs,k=k),data=dat0,rho=0.7)
b1 <- bam.update(b,dat1)
summary(b1);summary(b2);summary(b3)
```
`Predict.matrix.soap.film` Prediction matrix for soap film smooth
------------------------------------------------------------------
### Description
Creates a prediction matrix for a soap film smooth object, mapping the coefficients of the smooth to the linear predictor component for the smooth. This is the `[Predict.matrix](predict.matrix)` method function required by `<gam>`.
### Usage
```
## S3 method for class 'soap.film'
Predict.matrix(object,data)
## S3 method for class 'sw'
Predict.matrix(object,data)
## S3 method for class 'sf'
Predict.matrix(object,data)
```
### Arguments
| | |
| --- | --- |
| `object` | A class `"soap.film"`, `"sf"` or `"sw"` object. |
| `data` | A list or data frame containing the arguments of the smooth at which predictions are required. |
### Details
The smooth object will be largely what is returned from `<smooth.construct.so.smooth.spec>`, although elements `X` and `S` are not needed, and need not be present, of course.
### Value
A matrix. This may have an `"offset"` attribute corresponding to the contribution from any known boundary conditions on the smooth.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<smooth.construct.so.smooth.spec>`
### Examples
```
## This is a lower level example. The basis and
## penalties are obtained explicitly
## and `magic' is used as the fitting routine...
require(mgcv)
set.seed(66)
## create a boundary...
fsb <- list(fs.boundary())
## create some internal knots...
knots <- data.frame(x=rep(seq(-.5,3,by=.5),4),
y=rep(c(-.6,-.3,.3,.6),rep(8,4)))
## Simulate some fitting data, inside boundary...
n<-1000
x <- runif(n)*5-1;y<-runif(n)*2-1
z <- fs.test(x,y,b=1)
ind <- inSide(fsb,x,y) ## remove outsiders
z <- z[ind];x <- x[ind]; y <- y[ind]
n <- length(z)
z <- z + rnorm(n)*.3 ## add noise
## plot boundary with knot and data locations
plot(fsb[[1]]$x,fsb[[1]]$y,type="l");points(knots$x,knots$y,pch=20,col=2)
points(x,y,pch=".",col=3);
## set up the basis and penalties...
sob <- smooth.construct2(s(x,y,bs="so",k=40,xt=list(bnd=fsb,nmax=100)),
data=data.frame(x=x,y=y),knots=knots)
## ... model matrix is element `X' of sob, penalties matrices
## are in list element `S'.
## fit using `magic'
um <- magic(z,sob$X,sp=c(-1,-1),sob$S,off=c(1,1))
beta <- um$b
## produce plots...
par(mfrow=c(2,2),mar=c(4,4,1,1))
m<-100;n<-50
xm <- seq(-1,3.5,length=m);yn<-seq(-1,1,length=n)
xx <- rep(xm,n);yy<-rep(yn,rep(m,n))
## plot truth...
tru <- matrix(fs.test(xx,yy),m,n) ## truth
image(xm,yn,tru,col=heat.colors(100),xlab="x",ylab="y")
lines(fsb[[1]]$x,fsb[[1]]$y,lwd=3)
contour(xm,yn,tru,levels=seq(-5,5,by=.25),add=TRUE)
## Plot soap, by first predicting on a fine grid...
## First get prediction matrix...
X <- Predict.matrix2(sob,data=list(x=xx,y=yy))
## Now the predictions...
fv <- X%*%beta
## Plot the estimated function...
image(xm,yn,matrix(fv,m,n),col=heat.colors(100),xlab="x",ylab="y")
lines(fsb[[1]]$x,fsb[[1]]$y,lwd=3)
points(x,y,pch=".")
contour(xm,yn,matrix(fv,m,n),levels=seq(-5,5,by=.25),add=TRUE)
## Plot TPRS...
b <- gam(z~s(x,y,k=100))
fv.gam <- predict(b,newdata=data.frame(x=xx,y=yy))
names(sob$sd$bnd[[1]]) <- c("xx","yy","d")
ind <- inSide(sob$sd$bnd,xx,yy)
fv.gam[!ind]<-NA
image(xm,yn,matrix(fv.gam,m,n),col=heat.colors(100),xlab="x",ylab="y")
lines(fsb[[1]]$x,fsb[[1]]$y,lwd=3)
points(x,y,pch=".")
contour(xm,yn,matrix(fv.gam,m,n),levels=seq(-5,5,by=.25),add=TRUE)
```
`rmvn` Generate from or evaluate multivariate normal or t densities.
---------------------------------------------------------------------
### Description
Generates multivariate normal or t random deviates, and evaluates the corresponding log densities.
### Usage
```
rmvn(n,mu,V)
r.mvt(n,mu,V,df)
dmvn(x,mu,V,R=NULL)
d.mvt(x,mu,V,df,R=NULL)
```
### Arguments
| | |
| --- | --- |
| `n` | number of simulated vectors required. |
| `mu` | the mean of the vectors: either a single vector of length `p=ncol(V)` or an `n` by `p` matrix. |
| `V` | A positive semi definite covariance matrix. |
| `df` | The degrees of freedom for a t distribution. |
| `x` | A vector or matrix to evaluate the log density of. |
| `R` | An optional Cholesky factor of V (not pivoted). |
### Details
Uses a ‘square root’ of `V` to transform standard normal deviates to multivariate normal with the correct covariance matrix.
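A minimal sketch of that transformation (assuming `V` is positive definite so a plain Cholesky factor exists; `rmvn` itself is more careful and also handles semi-definite `V`):

```
## If R'R = V (Cholesky) and z ~ N(0, I), then mu + R'z ~ N(mu, V),
## since cov(R'z) = R' cov(z) R = R'R = V.
V <- matrix(c(2, 1, 1, 2), 2, 2); mu <- c(1, 3)
R <- chol(V) ## upper triangular with t(R) %*% R == V
z <- matrix(rnorm(2 * 10000), 10000, 2)
x <- sweep(z %*% R, 2, mu, "+") ## each row a N(mu, V) draw
cov(x); colMeans(x) ## approximately V and mu
```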
### Value
An `n` row matrix, with each row being a draw from a multivariate normal or t density with covariance matrix `V` and mean vector `mu`. Alternatively each row may have a different mean vector if `mu` is a matrix.
For density functions, a vector of log densities.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### See Also
`[ldTweedie](ldtweedie)`, `[Tweedie](tweedie)`
### Examples
```
library(mgcv)
V <- matrix(c(2,1,1,2),2,2)
mu <- c(1,3)
n <- 1000
z <- rmvn(n,mu,V)
crossprod(sweep(z,2,colMeans(z)))/n ## observed covariance matrix
colMeans(z) ## observed mu
dmvn(z,mu,V)
```
`smooth2random` Convert a smooth to a form suitable for estimating as random effect
------------------------------------------------------------------------------------
### Description
A generic function for converting `mgcv` smooth objects to forms suitable for estimation as random effects by e.g. `lme`. Exported mostly for use by other package developers.
### Usage
```
smooth2random(object,vnames,type=1)
```
### Arguments
| | |
| --- | --- |
| `object` | an `mgcv` smooth object. |
| `vnames` | a vector of names to avoid as dummy variable names in the random effects form. |
| `type` | `1` for `lme`, otherwise `lmer`. |
### Details
There is a duality between smooths and random effects which means that smooths can be estimated using mixed modelling software. This function converts standard `mgcv` smooth objects to forms suitable for estimation by `lme`, for example. A service routine for `<gamm>` exported for use by package developers. See examples for creating prediction matrices for new data, corresponding to the random and fixed effect matrices returned when `type=2`.
### Value
A list.
| | |
| --- | --- |
| `rand` | a list of random effects, including grouping factors, and a fixed effects matrix. Grouping factors, model matrix and model matrix name attached as attributes, to each element. Alternatively, for `type=2` a list of random effect model matrices, each corresponding to an i.i.d. Gaussian random effect with a single variance component. |
| `trans.D` | A vector, trans.D, that transforms coefs, in order [rand1, rand2,... fix] back to original parameterization. If null, then taken as vector of ones. `b.original = trans.U %*% (trans.D*b.fit)`. |
| `trans.U` | A matrix, trans.U, that transforms coefs, in order [rand1, rand2,... fix] back to original parameterization. If null, then not needed, and taken as the identity. |
| `Xf` | A matrix for the fixed effects, if any. |
| `fixed` | `TRUE/FALSE`, indicating if term was unpenalized or not. If unpenalized then other stuff may not be returned (it's not a random effect). |
| `rind` | an index vector such that if br is the vector of random coefficients for the term, br[rind] is the coefs in order for this term. |
| `pen.ind` | index of which penalty penalizes each coefficient: 0 for unpenalized. |
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]).
### References
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### See Also
`<gamm>`
### Examples
```
## Simple type 1 'lme' style...
library(mgcv)
x <- runif(30)
sm <- smoothCon(s(x),data.frame(x=x))[[1]]
smooth2random(sm,"")
## Now type 2 'lme4' style...
z <- runif(30)
dat <- data.frame(x=x,z=z)
sm <- smoothCon(t2(x,z),dat)[[1]]
re <- smooth2random(sm,"",2)
str(re)
## For prediction after fitting we might transform parameters back to
## original parameterization using 'rind', 'trans.D' and 'trans.U',
## and call PredictMat(sm,newdata) to get the prediction matrix to
## multiply these transformed parameters by.
## Alternatively we could obtain fixed and random effect Prediction
## matrices corresponding to the results from smooth2random, which
## can be used with the fit parameters without transforming them.
## The following shows how...
s2rPred <- function(sm,re,data) {
## Function to aid prediction from smooths represented as type==2
## random effects. re must be the result of smooth2random(sm,...,type=2).
X <- PredictMat(sm,data) ## get prediction matrix for new data
## transform to r.e. parameterization
if (!is.null(re$trans.U)) X <- X%*%re$trans.U
X <- t(t(X)*re$trans.D)
## re-order columns according to random effect re-ordering...
X[,re$rind] <- X[,re$pen.ind!=0]
## re-order penalization index in same way
pen.ind <- re$pen.ind; pen.ind[re$rind] <- pen.ind[pen.ind>0]
## start return object...
r <- list(rand=list(),Xf=X[,which(re$pen.ind==0),drop=FALSE])
for (i in 1:length(re$rand)) { ## loop over random effect matrices
r$rand[[i]] <- X[,which(pen.ind==i),drop=FALSE]
attr(r$rand[[i]],"s.label") <- attr(re$rand[[i]],"s.label")
}
names(r$rand) <- names(re$rand)
r
} ## s2rPred
## use function to obtain prediction random and fixed effect matrices
## for first 10 elements of 'dat'. Then confirm that these match the
## first 10 rows of the original model matrices, as they should...
r <- s2rPred(sm,re,dat[1:10,])
range(r$Xf-re$Xf[1:10,])
range(r$rand[[1]]-re$rand[[1]][1:10,])
```
`pdIdnot` Overflow proof pdMat class for multiples of the identity matrix
--------------------------------------------------------------------------
### Description
This set of functions is a modification of the `pdMat` class `pdIdent` from library `nlme`. The modification is to replace the log parameterization used in `pdMat` with a `[notLog2](notexp2)` parameterization, since the latter avoids indefiniteness in the likelihood and associated convergence problems: the parameters also relate to variances rather than standard deviations, for consistency with the `[pdTens](pdtens)` class. The functions are particularly useful for working with Generalized Additive Mixed Models where variance parameters/smoothing parameters can be very large or very small, so that overflow or underflow can be a problem.
These functions would not normally be called directly, although unlike the `[pdTens](pdtens)` class it is easy to do so.
### Usage
```
pdIdnot(value = numeric(0), form = NULL,
nam = NULL, data = sys.frame(sys.parent()))
```
### Arguments
| | |
| --- | --- |
| `value` | Initialization values for parameters. Not normally used. |
| `form` | A one sided formula specifying the random effects structure. |
| `nam` | a names argument, not normally used with this class. |
| `data` | data frame in which to evaluate formula. |
### Details
The following functions are provided: `Dim.pdIdnot`, `coef.pdIdnot`, `corMatrix.pdIdnot`, `logDet.pdIdnot`, `pdConstruct.pdIdnot`, `pdFactor.pdIdnot`, `pdMatrix.pdIdnot`, `solve.pdIdnot`, `summary.pdIdnot`. (e.g. `mgcv:::coef.pdIdnot` to access.)
Note that while the `pdFactor` and `pdMatrix` functions return the inverse of the scaled random effect covariance matrix or its factor, the `pdConstruct` function is initialised with estimates of the scaled covariance matrix itself.
### Value
A class `pdIdnot` object, or related quantities. See the `nlme` documentation for further details.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Pinheiro J.C. and Bates, D.M. (2000) Mixed-Effects Models in S and S-PLUS. Springer
The `nlme` source code.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<te>`, `[pdTens](pdtens)`, `[notLog2](notexp2)`, `<gamm>`
### Examples
```
# see gamm
```
r None
`gam.vcomp` Report gam smoothness estimates as variance components
-------------------------------------------------------------------
### Description
GAMs can be viewed as mixed models, where the smoothing parameters are related to variance components. This routine extracts the estimated variance components associated with each smooth term, and if possible returns confidence intervals on the standard deviation scale.
### Usage
```
gam.vcomp(x,rescale=TRUE,conf.lev=.95)
```
### Arguments
| | |
| --- | --- |
| `x` | a fitted model object of class `gam` as produced by `gam()`. |
| `rescale` | the penalty matrices for smooths are rescaled before fitting, for numerical stability reasons; if `TRUE` this rescaling is reversed, so that the variance components are on the original scale. |
| `conf.lev` | when the smoothing parameters are estimated by REML or ML, then confidence intervals for the variance components can be obtained from large sample likelihood results. This gives the confidence level to work at. |
### Details
The (pseudo) inverse of the penalty matrix penalizing a term is proportional to the covariance matrix of the term's coefficients, when these are viewed as random. For single penalty smooths, it is possible to compute the variance component for the smooth (which multiplies the inverse penalty matrix to obtain the covariance matrix of the smooth's coefficients). This variance component is given by the scale parameter divided by the smoothing parameter.
This routine computes such variance components, for `gam` models, and associated confidence intervals, if smoothing parameter estimation was likelihood based. Note that variance components are also returned for tensor product smooths, but that their interpretation is not so straightforward.
The routine is particularly useful for models fitted by `<gam>` in which random effects have been incorporated.
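As a quick illustration of the scale/smoothing parameter relationship described above, the following sketch (assuming REML smoothing parameter estimation and single penalty smooths; `gamSim` is used purely to generate example data) compares the direct computation with the routine's output:

```
require(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200, dist = "normal", scale = 2)
b <- gam(y ~ s(x0) + s(x1), data = dat, method = "REML")
sqrt(b$sig2 / b$sp) ## scale parameter / smoothing parameter, as std. dev.
gam.vcomp(b, rescale = FALSE) ## first column should broadly agree with the above
```

Note that `rescale = FALSE` is used here so that the comparison is on the same (internally rescaled) penalty scale as the raw smoothing parameters in `b$sp`.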
### Value
Either a vector of variance components for each smooth term (as standard deviations), or a matrix. The first column of the matrix gives standard deviations for each term, while the subsequent columns give lower and upper confidence bounds, on the same scale.
For models in which there are more smoothing parameters than actually estimated (e.g. if some were fixed, or smoothing parameters are linked) then a list is returned. The `vc` element is as above, the `all` element is a vector of variance components for all the smoothing parameters (estimated + fixed or replicated).
The routine prints a table of estimated standard deviations and confidence limits, if these can be computed, and reports the numerical rank of the covariance matrix.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2008) Fast stable direct fitting and smoothness selection for generalized additive models. Journal of the Royal Statistical Society (B) 70(3):495-518
Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. Journal of the Royal Statistical Society (B) 73(1):3-36
### See Also
`<smooth.construct.re.smooth.spec>`
### Examples
```
set.seed(3)
require(mgcv)
## simulate some data, consisting of a smooth truth + random effects
dat <- gamSim(1,n=400,dist="normal",scale=2)
a <- factor(sample(1:10,400,replace=TRUE))
b <- factor(sample(1:7,400,replace=TRUE))
Xa <- model.matrix(~a-1) ## random main effects
Xb <- model.matrix(~b-1)
Xab <- model.matrix(~a:b-1) ## random interaction
dat$y <- dat$y + Xa%*%rnorm(10)*.5 +
Xb%*%rnorm(7)*.3 + Xab%*%rnorm(70)*.7
dat$a <- a;dat$b <- b
## Fit the model using "re" terms, and smoother linkage
mod <- gam(y~s(a,bs="re")+s(b,bs="re")+s(a,b,bs="re")+s(x0,id=1)+s(x1,id=1)+
s(x2,k=15)+s(x3),data=dat,method="ML")
gam.vcomp(mod)
```
r None
`pdTens` Functions implementing a pdMat class for tensor product smooths
-------------------------------------------------------------------------
### Description
This set of functions implements an `nlme` library `pdMat` class to allow tensor product smooths to be estimated by `lme` as called by `gamm`. Tensor product smooths have a penalty matrix made up of a weighted sum of penalty matrices, where the weights are the smoothing parameters. In the mixed model formulation the penalty matrix is the inverse of the covariance matrix for the random effects of a term, and the smoothing parameters (times a half) are variance parameters to be estimated. It's not possible to transform the problem to make the required random effects covariance matrix look like one of the standard `pdMat` classes: hence the need for the `pdTens` class. A `[notLog2](notexp2)` parameterization ensures that the parameters are positive.
These functions (`pdTens`, `pdConstruct.pdTens`, `pdFactor.pdTens`, `pdMatrix.pdTens`, `coef.pdTens` and `summary.pdTens`) would not normally be called directly.
### Usage
```
pdTens(value = numeric(0), form = NULL,
nam = NULL, data = sys.frame(sys.parent()))
```
### Arguments
| | |
| --- | --- |
| `value` | Initialization values for parameters. Not normally used. |
| `form` | A one sided formula specifying the random effects structure. The formula should have an attribute `S` which is a list of the penalty matrices the weighted sum of which gives the inverse of the covariance matrix for these random effects. |
| `nam` | a names argument, not normally used with this class. |
| `data` | data frame in which to evaluate formula. |
### Details
If using this class directly, note that it is worthwhile scaling the `S` matrices to be of ‘moderate size’, for example by dividing each matrix by its largest singular value: this avoids problems with `lme` defaults (`<smooth.construct.tensor.smooth.spec>` does this automatically).
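The scaling advice above can be sketched as follows (the list `S` here is purely illustrative, not a real set of tensor product penalties):

```
require(stats)
set.seed(1)
## two example penalty-like matrices of rather different magnitudes...
S <- list(diag(5) * 100, crossprod(matrix(rnorm(25), 5, 5)))
## divide each matrix by its largest singular value...
S <- lapply(S, function(M) M / svd(M)$d[1])
sapply(S, function(M) svd(M)$d[1]) ## each largest singular value is now 1
```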
This appears to be the minimum set of functions required to implement a new `pdMat` class.
Note that while the `pdFactor` and `pdMatrix` functions return the inverse of the scaled random effect covariance matrix or its factor, the `pdConstruct` function is sometimes initialised with estimates of the scaled covariance matrix, and sometimes initialised with its inverse.
### Value
A class `pdTens` object, or its coefficients or the matrix it represents or the factor of that matrix. `pdFactor` returns the factor as a vector (packed column-wise) (`pdMatrix` always returns a matrix).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Pinheiro J.C. and Bates, D.M. (2000) Mixed-Effects Models in S and S-PLUS. Springer
The `nlme` source code.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<te>` `<gamm>`
### Examples
```
# see gamm
```
r None
`coxpht` Additive Cox proportional hazard models with time varying covariates
------------------------------------------------------------------------------
### Description
The `cox.ph` family only allows one set of covariate values per subject. If each subject has several time varying covariate measurements then it is still possible to fit a proportional hazards regression model, via an equivalent Poisson model. The recipe is provided by Whitehead (1980) and is equally valid in the smooth additive case. Its drawback is that the equivalent Poisson dataset can be quite large.
The trick is to generate an artificial Poisson observation for each subject in the risk set at each non-censored event time. The corresponding covariate values for each subject are whatever they are at the event time, while the Poisson response is zero for all subjects except those experiencing the event at that time (this corresponds to Peto's correction for ties). The linear predictor for the model must include an intercept for each event time (the cumulative sum of the exponential of these is the Breslow estimate of the baseline hazard).
Below is some example code employing this trick for the `[pbcseq](../../survival/html/pbcseq)` data from the `survival` package. It uses `<bam>` for fitting with the `discrete=TRUE` option for efficiency: there is some approximation involved in doing this, and the exact equivalent of what is done in `[cox.ph](coxph)` is instead obtained by using `<gam>` with `method="REML"` (taking some 14 times the computational time for the example below).
The function `tdpois` in the example code uses crude piecewise constant interpolation for the covariates, in which the covariate value at an event time is taken to be whatever it was the previous time that it was measured. Obviously more sophisticated interpolation schemes might be preferable.
### References
Whitehead (1980) Fitting Cox's regression model to survival data using GLIM. Applied Statistics 29(3):268-275
### Examples
```
require(mgcv);require(survival)
## First define functions for producing Poisson model data frame
app <- function(x,t,to) {
## wrapper to approx for calling from apply...
y <- if (sum(!is.na(x))<1) rep(NA,length(to)) else
approx(t,x,to,method="constant",rule=2)$y
if (is.factor(x)) factor(levels(x)[y],levels=levels(x)) else y
} ## app
tdpois <- function(dat,event="z",et="futime",t="day",status="status1",
id="id") {
## dat is data frame. id is patient id; et is event time; t is
## observation time; status is 1 for death 0 otherwise;
## event is name for Poisson response.
if (event %in% names(dat)) warning("event name in use")
require(utils) ## for progress bar
te <- sort(unique(dat[[et]][dat[[status]]==1])) ## event times
sid <- unique(dat[[id]])
inter <- interactive()
if (inter) prg <- txtProgressBar(min = 0, max = length(sid), initial = 0,
char = "=",width = NA, title="Progress", style = 3)
## create dataframe for poisson model data
dat[[event]] <- 0; start <- 1
dap <- dat[rep(1:length(sid),length(te)),]
for (i in 1:length(sid)) { ## work through patients
di <- dat[dat[[id]]==sid[i],] ## ith patient's data
tr <- te[te <= di[[et]][1]] ## times required for this patient
## Now do the interpolation of covariates to event times...
um <- data.frame(lapply(X=di,FUN=app,t=di[[t]],to=tr))
## Mark the actual event...
if (um[[et]][1]==max(tr)&&um[[status]][1]==1) um[[event]][nrow(um)] <- 1
um[[et]] <- tr ## reset time to relevant event times
dap[start:(start-1+nrow(um)),] <- um ## copy to dap
start <- start + nrow(um)
if (inter) setTxtProgressBar(prg, i)
}
if (inter) close(prg)
dap[1:(start-1),]
} ## tdpois
## The following typically takes a minute or less...
## Convert pbcseq to equivalent Poisson form...
pbcseq$status1 <- as.numeric(pbcseq$status==2) ## death indicator
pb <- tdpois(pbcseq) ## conversion
pb$tf <- factor(pb$futime) ## add factor for event time
## Fit Poisson model...
b <- bam(z ~ tf - 1 + sex + trt + s(sqrt(protime)) + s(platelet)+ s(age)+
s(bili)+s(albumin), family=poisson,data=pb,discrete=TRUE,nthreads=2)
par(mfrow=c(2,3))
plot(b,scale=0)
## compute residuals...
chaz <- tapply(fitted(b),pb$id,sum) ## cum haz by subject
d <- tapply(pb$z,pb$id,sum) ## censoring indicator
mrsd <- d - chaz ## Martingale
drsd <- sign(mrsd)*sqrt(-2*(mrsd + d*log(chaz))) ## deviance
## plot survivor function and s.e. band for subject 25
te <- sort(unique(pb$futime)) ## event times
di <- pbcseq[pbcseq$id==25,] ## data for subject 25
pd <- data.frame(lapply(X=di,FUN=app,t=di$day,to=te)) ## interpolate to te
pd$tf <- factor(te)
X <- predict(b,newdata=pd,type="lpmatrix")
eta <- drop(X%*%coef(b)); H <- cumsum(exp(eta))
J <- apply(exp(eta)*X,2,cumsum)
se <- diag(J%*%vcov(b)%*%t(J))^.5
plot(stepfun(te,c(1,exp(-H))),do.points=FALSE,ylim=c(0.7,1),
ylab="S(t)",xlab="t (days)",main="",lwd=2)
lines(stepfun(te,c(1,exp(-H+se))),do.points=FALSE)
lines(stepfun(te,c(1,exp(-H-se))),do.points=FALSE)
rug(pbcseq$day[pbcseq$id==25]) ## measurement times
```
| programming_docs |
r None
`extract.lme.cov` Extract the data covariance matrix from an lme object
------------------------------------------------------------------------
### Description
This is a service routine for `<gamm>`. Extracts the estimated covariance matrix of the data from an `lme` object, allowing the user control about which levels of random effects to include in this calculation. `extract.lme.cov` forms the full matrix explicitly: `extract.lme.cov2` tries to be more economical than this.
### Usage
```
extract.lme.cov(b,data=NULL,start.level=1)
extract.lme.cov2(b,data=NULL,start.level=1)
```
### Arguments
| | |
| --- | --- |
| `b` | A fitted model object returned by a call to `[lme](../../nlme/html/lme)`. |
| `data` | The data frame/ model frame that was supplied to `[lme](../../nlme/html/lme)`, but with any rows removed by the na action dropped. Uses the data stored in the model object if not supplied. |
| `start.level` | The level of nesting at which to start including random effects in the calculation. This is used to allow smooth terms to be estimated as random effects, but treated like fixed effects for variance calculations. |
### Details
The random effects, correlation structure and variance structure used for a linear mixed model combine to imply a covariance matrix for the response data being modelled. These routines extracts that covariance matrix. The process is slightly complicated, because different components of the fitted model object are stored in different orders (see function code for details!).
The `extract.lme.cov` calculation is not optimally efficient, since it forms the full matrix, which may in fact be sparse. `extract.lme.cov2` is more efficient. If the covariance matrix is diagonal, then only the leading diagonal is returned; if it can be written as a block diagonal matrix (under some permutation of the original data) then a list of matrices defining the non-zero blocks is returned along with an index indicating which row of the original data each row/column of the block diagonal matrix relates to. The block sizes are defined by the coarsest level of grouping in the random effect structure.
`<gamm>` uses `extract.lme.cov2`.
`extract.lme.cov` does not currently deal with the situation in which the grouping factors for a correlation structure are finer than those for the random effects. `extract.lme.cov2` does deal with this situation.
### Value
For `extract.lme.cov` an estimated covariance matrix.
For `extract.lme.cov2` a list containing the estimated covariance matrix and an indexing array. The covariance matrix is stored as the elements on the leading diagonal, a list of the matrices defining a block diagonal matrix, or a full matrix if the previous two options are not possible.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
For `lme` see:
Pinheiro J.C. and Bates, D.M. (2000) Mixed-Effects Models in S and S-PLUS. Springer
For details of how GAMMs are set up here for estimation using `lme` see:
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for Generalized Additive Mixed Models. Biometrics 62(4):1025-1036
or
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gamm>`, `[formXtViX](formxtvix)`
### Examples
```
## see also ?formXtViX for use of extract.lme.cov2
require(mgcv)
library(nlme)
data(Rail)
b <- lme(travel~1,Rail,~1|Rail)
extract.lme.cov(b)
extract.lme.cov2(b)
```
r None
`notExp2` Alternative to log parameterization for variance components
----------------------------------------------------------------------
### Description
`notLog2` and `notExp2` are alternatives to `log` and `exp` or `[notLog](notexp)` and `[notExp](notexp)` for re-parameterization of variance parameters. They are used by the `[pdTens](pdtens)` and `[pdIdnot](pdidnot)` classes which in turn implement smooths for `<gamm>`.
The functions are typically used to ensure that smoothing parameters are positive, but the `notExp2` is not monotonic: rather it cycles between ‘effective zero’ and ‘effective infinity’ as its argument changes. The `notLog2` is the inverse function of the `notExp2` only over an interval centered on zero.
Parameterizations using these functions ensure that estimated smoothing parameters remain positive, but also help to ensure that the likelihood is never indefinite: once a working parameter pushes a smoothing parameter below ‘effective zero’ or above ‘effective infinity’ the cyclic nature of the `notExp2` causes the likelihood to decrease, where otherwise it might simply have flattened.
This parameterization is really just a numerical trick, in order to get `lme` to fit `gamm` models, without failing due to indefiniteness. Note in particular that asymptotic results on the likelihood/REML criterion are not invalidated by the trick, unless parameter estimates end up close to the effective zero or effective infinity: but if this is the case then the asymptotics would also have been invalid for a conventional monotonic parameterization.
This reparameterization was made necessary by some modifications to the underlying optimization method in `lme` introduced in nlme 3.1-62. It is possible that future releases will return to the `[notExp](notexp)` parameterization.
Note that you can reset ‘effective zero’ and ‘effective infinity’: see below.
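For example, the effective range can be reset through the `mgcv.vc.logrange` option that `gamm` consults (the value 25 below is purely illustrative):

```
require(mgcv)
## widen the range: notExp2 now cycles between exp(-25) and exp(25)...
old <- options(mgcv.vc.logrange = 25)
notExp2(0)   ## evaluate at the centre of the range
options(old) ## restore the previous setting
```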
### Usage
```
notExp2(x,d=.Options$mgcv.vc.logrange,b=1/d)
notLog2(x,d=.Options$mgcv.vc.logrange,b=1/d)
```
### Arguments
| | |
| --- | --- |
| `x` | Argument array of real numbers (`notExp`) or positive real numbers (`notLog`). |
| `d` | the range of `notExp2` runs from `exp(-d)` to `exp(d)`. To change the range used by `gamm` reset `mgcv.vc.logrange` using `[options](../../base/html/options)`. |
| `b` | determines the period of the cycle of `notExp2`. |
### Value
An array of function values evaluated at the supplied argument values.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`[pdTens](pdtens)`, `[pdIdnot](pdidnot)`, `<gamm>`
### Examples
```
## Illustrate the notExp2 function:
require(mgcv)
x <- seq(-50,50,length=1000)
op <- par(mfrow=c(2,2))
plot(x,notExp2(x),type="l")
lines(x,exp(x),col=2)
plot(x,log(notExp2(x)),type="l")
lines(x,log(exp(x)),col=2) # redundancy intended
x <- x/4
plot(x,notExp2(x),type="l")
lines(x,exp(x),col=2)
plot(x,log(notExp2(x)),type="l")
lines(x,log(exp(x)),col=2) # redundancy intended
par(op)
```
r None
`smooth.construct.ad.smooth.spec` Adaptive smooths in GAMs
-----------------------------------------------------------
### Description
`<gam>` can use adaptive smooths of one or two variables, specified via terms like `s(...,bs="ad",...)`. (`<gamm>` can not use such terms — check out package `AdaptFit` if this is a problem.) The basis for such a term is a (tensor product of) p-spline(s) or cubic regression spline(s). Discrete P-spline type penalties are applied directly to the coefficients of the basis, but the penalties themselves have a basis representation, allowing the strength of the penalty to vary with the covariates. The coefficients of the penalty basis are the smoothing parameters.
When invoking an adaptive smoother the `k` argument specifies the dimension of the smoothing basis (default 40 in 1D, 15 in 2D), while the `m` argument specifies the dimension of the penalty basis (default 5 in 1D, 3 in 2D). For an adaptive smooth of two variables `k` is taken as the dimension of both marginal bases: different marginal basis dimensions can be specified by making `k` a two element vector. Similarly, in the two dimensional case `m` is the dimension of both marginal bases for the penalties, unless it is a two element vector, which specifies different basis dimensions for each marginal (If the penalty basis is based on a thin plate spline then `m` specifies its dimension directly).
By default, P-splines are used for the smoothing and penalty bases, but this can be modified by supplying a list as argument `xt` with a character vector `xt$bs` specifying the smoothing basis type. Only `"ps"`, `"cp"`, `"cc"` and `"cr"` may be used for the smoothing basis. The penalty basis is always a B-spline, or a cyclic B-spline for cyclic bases.
The total number of smoothing parameters to be estimated for the term will be the dimension of the penalty basis. Bear in mind that adaptive smoothing places quite severe demands on the data. For example, setting `m=10` for a univariate smooth of 200 data is rather like estimating 10 smoothing parameters, each from a data series of length 20. The problem is particularly serious for smooths of 2 variables, where the number of smoothing parameters required to get reasonable flexibility in the penalty can grow rather fast, but it often requires a very large smoothing basis dimension to make good use of this flexibility. In short, adaptive smooths should be used sparingly and with care.
In practice it is often as effective to simply transform the smoothing covariate as it is to use an adaptive smooth.
### Usage
```
## S3 method for class 'ad.smooth.spec'
smooth.construct(object, data, knots)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(...,bs="ad",...)` |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL` |
### Details
The constructor is not normally called directly, but is rather used internally by `<gam>`. To use for basis setup it is recommended to use `[smooth.construct2](smooth.construct)`.
This class can not be used as a marginal basis in a tensor product smooth, nor by `gamm`.
### Value
An object of class `"pspline.smooth"` in the 1D case or `"tensor.smooth"` in the 2D case.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### Examples
```
## Comparison using an example taken from AdaptFit
## library(AdaptFit)
require(mgcv)
set.seed(0)
x <- 1:1000/1000
mu <- exp(-400*(x-.6)^2)+5*exp(-500*(x-.75)^2)/3+2*exp(-500*(x-.9)^2)
y <- mu+0.5*rnorm(1000)
##fit with default knots
## y.fit <- asp(y~f(x))
par(mfrow=c(2,2))
## plot(y.fit,main=round(cor(fitted(y.fit),mu),digits=4))
## lines(x,mu,col=2)
b <- gam(y~s(x,bs="ad",k=40,m=5)) ## adaptive
plot(b,shade=TRUE,main=round(cor(fitted(b),mu),digits=4))
lines(x,mu-mean(mu),col=2)
b <- gam(y~s(x,k=40)) ## non-adaptive
plot(b,shade=TRUE,main=round(cor(fitted(b),mu),digits=4))
lines(x,mu-mean(mu),col=2)
b <- gam(y~s(x,bs="ad",k=40,m=5,xt=list(bs="cr")))
plot(b,shade=TRUE,main=round(cor(fitted(b),mu),digits=4))
lines(x,mu-mean(mu),col=2)
## A 2D example (marked, 'Not run' purely to reduce
## checking load on CRAN).
par(mfrow=c(2,2),mar=c(1,1,1,1))
x <- seq(-.5, 1.5, length= 60)
z <- x
f3 <- function(x,z,k=15) { r<-sqrt(x^2+z^2);f<-exp(-r^2*k);f}
f <- outer(x, z, f3)
op <- par(bg = "white")
## Plot truth....
persp(x,z,f,theta=30,phi=30,col="lightblue",ticktype="detailed")
n <- 2000
x <- runif(n)*2-.5
z <- runif(n)*2-.5
f <- f3(x,z)
y <- f + rnorm(n)*.1
## Try tprs for comparison...
b0 <- gam(y~s(x,z,k=150))
vis.gam(b0,theta=30,phi=30,ticktype="detailed")
## Tensor product with non-adaptive version of adaptive penalty
b1 <- gam(y~s(x,z,bs="ad",k=15,m=1),gamma=1.4)
vis.gam(b1,theta=30,phi=30,ticktype="detailed")
## Now adaptive...
b <- gam(y~s(x,z,bs="ad",k=15,m=3),gamma=1.4)
vis.gam(b,theta=30,phi=30,ticktype="detailed")
cor(fitted(b0),f);cor(fitted(b),f)
```
r None
`rig` Generate inverse Gaussian random deviates
------------------------------------------------
### Description
Generates inverse Gaussian random deviates.
### Usage
```
rig(n,mean,scale)
```
### Arguments
| | |
| --- | --- |
| `n` | the number of deviates required. If this has length > 1 then the length is taken as the number of deviates required. |
| `mean` | vector of mean values. |
| `scale` | vector of scale parameter values (lambda, see below) |
### Details
If x is the returned vector, then E(x) = `mean` while var(x) = `scale*mean^3`. For density and distribution functions see the `statmod` package. The algorithm used is Algorithm 5.7 of Gentle (2003), based on Michael et al. (1976). Note that `scale` here is the scale parameter in the GLM sense, which is the reciprocal of the usual ‘lambda’ parameter.
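The stated moments can be checked by simulation (the sample size and parameter values below are arbitrary):

```
require(mgcv)
set.seed(1)
x <- rig(1e5, mean = 2, scale = 0.25)
mean(x) ## should be close to E(x) = 2
var(x)  ## should be close to scale*mean^3 = 0.25 * 2^3 = 2
```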
### Value
A vector of inverse Gaussian random deviates.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Gentle, J.E. (2003) Random Number Generation and Monte Carlo Methods (2nd ed.) Springer.
Michael, J.R., W.R. Schucany & R.W. Haas (1976) Generating random variates using transformations with multiple roots. The American Statistician 30, 88-90.
<https://www.maths.ed.ac.uk/~swood34/>
### Examples
```
require(mgcv)
set.seed(7)
## An inverse.gaussian GAM example, by modifying `gamSim' output...
dat <- gamSim(1,n=400,dist="normal",scale=1)
dat$f <- dat$f/4 ## true linear predictor
Ey <- exp(dat$f);scale <- .2 ## mean and GLM scale parameter
## simulate inverse Gaussian response...
dat$y <- rig(Ey,mean=Ey,scale=scale)
big <- gam(y~ s(x0)+ s(x1)+s(x2)+s(x3),family=inverse.gaussian(link=log),
data=dat,method="REML")
plot(big,pages=1)
gam.check(big)
summary(big)
```
r None
`print.gam` Print a Generalized Additive Model object.
-------------------------------------------------------
### Description
The default print method for a `gam` object.
### Usage
```
## S3 method for class 'gam'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x, ...` | fitted model objects of class `gam` as produced by `gam()`. |
### Details
Prints out the family, model formula, effective degrees of freedom for each smooth term, and optimized value of the smoothness selection criterion used. See `[gamObject](gamobject)` (or `names(x)`) for a listing of what the object contains. `<summary.gam>` provides more detail.
Note that the optimized smoothing parameter selection criterion reported is one of GCV, UBRE(AIC), GACV, negative log marginal likelihood (ML), or negative log restricted likelihood (REML).
If rank deficiency of the model was detected then the apparent rank is reported, along with the length of the coefficient vector (the rank in the absence of rank deficiency). Rank deficiency occurs when not all coefficients are identifiable given the data. Although the fitting routines (except `gamm`) deal gracefully with rank deficiency, interpretation of rank deficient models may be difficult.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). CRC/ Chapmand and Hall, Boca Raton, Florida.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<gam>`, `<summary.gam>`
r None
`smooth.construct.sos.smooth.spec` Splines on the sphere
---------------------------------------------------------
### Description
`<gam>` can use isotropic smooths on the sphere, via terms like `s(la,lo,bs="sos",m=2,k=100)`. There must be exactly 2 arguments to such a smooth. The first is taken to be latitude (in degrees) and the second longitude (in degrees). `m` (default 0) is an integer in the range -1 to 4 determining the order of the penalty used. For `m>0`, `(m+2)/2` is the penalty order, with `m=2` equivalent to the usual second derivative penalty. `m=0` signals to use the 2nd order spline on the sphere, computed by Wendelberger's (1981) method. `m = -1` results in a `[Duchon.spline](smooth.construct.ds.smooth.spec)` being used (with m=2 and s=1/2), following an unpublished suggestion of Jean Duchon.
`k` (default 50) is the basis dimension.
### Usage
```
## S3 method for class 'sos.smooth.spec'
smooth.construct(object, data, knots)
## S3 method for class 'sos.smooth'
Predict.matrix(object, data)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object, usually generated by a term `s(...,bs="sos",...)`. |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL` |
### Details
For `m>0`, the smooths implemented here are based on the pseudosplines on the sphere of Wahba (1981) (there is a correction of table 1 in 1982, but the correction has a misprint in the definition of A — the A given in the 1981 paper is correct). For `m=0` (the default) a second order spline on the sphere is used which is the analogue of a second order thin plate spline in 2D: the computation is based on Chapter 4 of Wendelberger, 1981. Optimal low rank approximations are obtained using exactly the approach given in Wood (2003). For `m = -1` a smooth of the general type discussed in Duchon (1977) is used: the sphere is embedded in a 3D Euclidean space, but smoothing employs a penalty based on second derivatives (so that locally as the smoothing parameter tends to zero we recover a "normal" thin plate spline on the tangent space). This is an unpublished suggestion of Jean Duchon. `m = -2` is the same but with first derivative penalization.
Note that the null space of the penalty is always the space of constant functions on the sphere, whatever the order of penalty.
This class has a plot method, with 3 schemes. `scheme==0` plots one hemisphere of the sphere, projected onto a circle. The plotting sphere has the north pole at the top, and the 0 meridian running down the middle of the plot, and towards the viewer. The smoothing sphere is rotated within the plotting sphere, by specifying the location of its pole in the co-ordinates of the viewing sphere. `theta`, `phi` give the longitude and latitude of the smoothing sphere pole within the plotting sphere (in plotting sphere co-ordinates). (You can visualize the smoothing sphere as a globe, free to rotate within the fixed transparent plotting sphere.) The value of the smooth is shown by a heat map overlaid with a contour plot. lat, lon gridlines are also plotted.
`scheme==1` is as `scheme==0`, but in black and white, without the image plot. `scheme>1` calls the default plotting method with `scheme` decremented by 2.
### Value
An object of class `"sos.smooth"`. In addition to the usual elements of a smooth class documented under `<smooth.construct>`, this object will contain:
| | |
| --- | --- |
| `Xu` | A matrix of the unique covariate combinations for this smooth (the basis is constructed by first stripping out duplicate locations). |
| `UZ` | The matrix mapping the parameters of the reduced rank spline back to the parameters of a full spline. |
### Author(s)
Simon Wood [[email protected]](mailto:[email protected]), with help from Grace Wahba (m=0 case) and Jean Duchon (m = -1 case).
### References
Wahba, G. (1981) Spline interpolation and smoothing on the sphere. SIAM J. Sci. Stat. Comput. 2(1):5-16
Wahba, G. (1982) Erratum. SIAM J. Sci. Stat. Comput. 3(3):385-386.
Wendelberger, J. (1981) PhD Thesis, University of Wisconsin.
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
### See Also
`[Duchon.spline](smooth.construct.ds.smooth.spec)`
### Examples
```
require(mgcv)
set.seed(0)
n <- 400
f <- function(la,lo) { ## a test function...
sin(lo)*cos(la-.3)
}
## generate with uniform density on sphere...
lo <- runif(3*n)*2*pi-pi ## longitude
la <- runif(3*n)*pi-pi/2
ind <- runif(3*n)<=cos(la)
lo <- lo[ind];la <- la[ind]
lo <- lo[1:n];la <- la[1:n]
ff <- f(la,lo)
y <- ff + rnorm(n)*.2 ## test data
## generate data for plotting truth...
lam <- seq(-pi/2,pi/2,length=30)
lom <- seq(-pi,pi,length=60)
gr <- expand.grid(la=lam,lo=lom)
fz <- f(gr$la,gr$lo)
zm <- matrix(fz,30,60)
dat <- data.frame(la = la *180/pi,lo = lo *180/pi,y=y)
## fit spline on sphere model...
bp <- gam(y~s(la,lo,bs="sos",k=60),data=dat)
## pure knot based alternative...
ind <- sample(1:n,100)
bk <- gam(y~s(la,lo,bs="sos",k=60),
knots=list(la=dat$la[ind],lo=dat$lo[ind]),data=dat)
b <- bp
cor(fitted(b),ff)
## plot results and truth...
pd <- data.frame(la=gr$la*180/pi,lo=gr$lo*180/pi)
fv <- matrix(predict(b,pd),30,60)
par(mfrow=c(2,2),mar=c(4,4,1,1))
contour(lom,lam,t(zm))
contour(lom,lam,t(fv))
plot(bp,rug=FALSE)
plot(bp,scheme=1,theta=-30,phi=20,pch=19,cex=.5)
```
| programming_docs |
r None
`Sl.initial.repara` Re-parametrizing model matrix X
----------------------------------------------------
### Description
INTERNAL routine to apply the initial Sl re-parameterization to model matrix X, or, if `inverse==TRUE`, to apply the inverse re-parameterization to a parameter vector or covariance matrix.
### Usage
```
Sl.inirep(Sl,X,l,r,nt=1)
Sl.initial.repara(Sl, X, inverse = FALSE, both.sides = TRUE, cov = TRUE,
nt = 1)
```
### Arguments
| | |
| --- | --- |
| `Sl` | the output of `Sl.setup`. |
| `X` | the model matrix. |
| `l` | if non-zero apply transform (positive) or inverse transform (negative) from the left: 1 or -1 for the transform, 2 or -2 for its transpose. |
| `r` | if non-zero apply transform (positive) or inverse transform (negative) from the right: 1 or -1 for the transform, 2 or -2 for its transpose. |
| `inverse` | if `TRUE` an inverse re-parametrization is performed. |
| `both.sides` | if `inverse==TRUE` and `both.sides==FALSE` then the re-parametrization is only applied to the rhs, as appropriate for a Cholesky factor. If `both.sides==FALSE`, `X` is a vector and `inverse==FALSE` then `X` is taken as a coefficient vector (so re-parametrization is the inverse of that for the model matrix). |
| `cov` | boolean indicating whether `X` is a covariance matrix. |
| `nt` | number of parallel threads to be used. |
### Value
A re-parametrized version of `X`.
### Author(s)
Simon N. Wood <[email protected]>.
r None
`predict.gam` Prediction from fitted GAM model
-----------------------------------------------
### Description
Takes a fitted `gam` object produced by `gam()` and produces predictions given a new set of values for the model covariates or the original values used for the model fit. Predictions can be accompanied by standard errors, based on the posterior distribution of the model coefficients. The routine can optionally return the matrix by which the model coefficients must be pre-multiplied in order to yield the values of the linear predictor at the supplied covariate values: this is useful for obtaining credible regions for quantities derived from the model (e.g. derivatives of smooths), and for lookup table prediction outside `R` (see example code below).
### Usage
```
## S3 method for class 'gam'
predict(object,newdata,type="link",se.fit=FALSE,terms=NULL,
exclude=NULL,block.size=NULL,newdata.guaranteed=FALSE,
na.action=na.pass,unconditional=FALSE,iterms.type=NULL,...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted `gam` object as produced by `gam()`. |
| `newdata` | A data frame or list containing the values of the model covariates at which predictions are required. If this is not provided then predictions corresponding to the original data are returned. If `newdata` is provided then it should contain all the variables needed for prediction: a warning is generated if not. See details for use with `<linear.functional.terms>`. |
| `type` | When this has the value `"link"` (default) the linear predictor (possibly with associated standard errors) is returned. When `type="terms"` each component of the linear predictor is returned separately (possibly with standard errors): this includes parametric model components, followed by each smooth component, but excludes any offset and any intercept. `type="iterms"` is the same, except that any standard errors returned for smooth components will include the uncertainty about the intercept/overall mean. When `type="response"` predictions on the scale of the response are returned (possibly with approximate standard errors). When `type="lpmatrix"` then a matrix is returned which yields the values of the linear predictor (minus any offset) when postmultiplied by the parameter vector (in this case `se.fit` is ignored). The latter option is most useful for getting variance estimates for quantities derived from the model: for example integrated quantities, or derivatives of smooths. A linear predictor matrix can also be used to implement approximate prediction outside `R` (see example code, below). |
| `se.fit` | when this is TRUE (not default) standard error estimates are returned for each prediction. |
| `terms` | if `type=="terms"` or `type="iterms"` then only results for the terms (smooth or parametric) named in this array will be returned. Otherwise any smooth terms not named in this array will be set to zero. If `NULL` then all terms are included. |
| `exclude` | if `type=="terms"` or `type="iterms"` then terms (smooth or parametric) named in this array will not be returned. Otherwise any smooth terms named in this array will be set to zero. If `NULL` then no terms are excluded. Note that this is the term names as it appears in the model summary, see example. You can avoid providing the covariates for the excluded terms by setting `newdata.guaranteed=TRUE`, which will avoid all checks on `newdata`. |
| `block.size` | maximum number of predictions to process per call to underlying code: larger is quicker, but more memory intensive. Set to < 1 to use total number of predictions as this. If `NULL` then block size is 1000 if new data supplied, and the number of rows in the model frame otherwise. |
| `newdata.guaranteed` | Set to `TRUE` to turn off all checking of `newdata` except for sanity of factor levels: this can speed things up for large prediction tasks, but `newdata` must be complete, with no `NA` values for predictors required in the model. |
| `na.action` | what to do about `NA` values in `newdata`. With the default `na.pass`, any row of `newdata` containing `NA` values for required predictors, gives rise to `NA` predictions (even if the term concerned has no `NA` predictors). `na.exclude` or `na.omit` result in the dropping of `newdata` rows, if they contain any `NA` values for required predictors. If `newdata` is missing then `NA` handling is determined from `object$na.action`. |
| `unconditional` | if `TRUE` then the smoothing parameter uncertainty corrected covariance matrix is used, when available, otherwise the covariance matrix conditional on the estimated smoothing parameters is used. |
| `iterms.type` | if `type="iterms"` then standard errors can either include the uncertainty in the overall mean (default, with fixed and random effects included) or the uncertainty in the mean of the non-smooth fixed effects only (`iterms.type=2`). |
| `...` | other arguments. |
### Details
The standard errors produced by `predict.gam` are based on the Bayesian posterior covariance matrix of the parameters `Vp` in the fitted gam object.
When predicting from models with `<linear.functional.terms>` then there are two possibilities. If the summation convention is to be used in prediction, as it was in fitting, then `newdata` should be a list, with named matrix arguments corresponding to any variables that were matrices in fitting. Alternatively one might choose to simply evaluate the constituent smooths at particular values, in which case arguments that were matrices can be replaced by vectors (and `newdata` can be a data frame). See `<linear.functional.terms>` for example code.
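As an illustrative sketch of the two options (the matrix covariate `X` and weight matrix `L` below are hypothetical, not taken from any example on this page), summation-convention prediction might look like this:

```r
library(mgcv)
set.seed(1)
n <- 100; p <- 10
X <- matrix(runif(n*p), n, p)   ## hypothetical matrix covariate
L <- matrix(1/p, n, p)          ## hypothetical summation weights
y <- rowMeans(sin(2*pi*X)) + rnorm(n)*0.1
b <- gam(y ~ s(X, by = L))      ## linear functional term: sum_j L[i,j]*f(X[i,j])
## summation convention in prediction: newdata is a list with matrix
## entries for the variables that were matrices at fit time...
nd <- list(X = matrix(runif(2*p), 2, p), L = matrix(1/p, 2, p))
predict(b, newdata = nd)
## ...or evaluate the constituent smooth at single values by replacing
## the matrices with vectors (newdata can then be a data frame)
nd1 <- data.frame(X = c(.2, .7), L = c(1, 1))
predict(b, newdata = nd1)
```

See `<linear.functional.terms>` for the definitive treatment; this is only a minimal sketch.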
To facilitate plotting with `[termplot](../../stats/html/termplot)`, if `object` possesses an attribute `"para.only"` and `type=="terms"` then only parametric terms of order 1 are returned (i.e. those that `termplot` can handle).
Note that, in common with other prediction functions, any offset supplied to `<gam>` as an argument is always ignored when predicting, unlike offsets specified in the gam model formula.
See the examples for how to use the `lpmatrix` for obtaining credible regions for quantities derived from the model.
### Value
If `type=="lpmatrix"` then a matrix is returned which will give a vector of linear predictor values (minus any offset) at the supplied covariate values, when applied to the model coefficient vector. Otherwise, if `se.fit` is `TRUE` then a 2 item list is returned with items (both arrays) `fit` and `se.fit` containing predictions and associated standard error estimates, otherwise an array of predictions is returned. The dimensions of the returned arrays depend on whether `type` is `"terms"` or not: if it is then the array is 2 dimensional with each term in the linear predictor separate, otherwise the array is 1 dimensional and contains the linear predictor/predicted values (or corresponding s.e.s). The linear predictor returned termwise will not include the offset or the intercept.
`newdata` can be a data frame, list or model.frame: if it's a model frame then all variables must be supplied.
### WARNING
Predictions are likely to be incorrect if data dependent transformations of the covariates are used within calls to smooths. See examples.
Note that the behaviour of this function is not identical to `predict.gam()` in Splus.
`type=="terms"` does not exactly match what `predict.lm` does for parametric model components.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
The design is inspired by the S function of the same name described in Chambers and Hastie (1993) (but is not a clone).
### References
Chambers and Hastie (1993) Statistical Models in S. Chapman & Hall.
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
Wood S.N. (2006b) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC Press.
### See Also
`<gam>`, `<gamm>`, `<plot.gam>`
### Examples
```
library(mgcv)
n<-200
sig <- 2
dat <- gamSim(1,n=n,scale=sig)
b<-gam(y~s(x0)+s(I(x1^2))+s(x2)+offset(x3),data=dat)
newd <- data.frame(x0=(0:30)/30,x1=(0:30)/30,x2=(0:30)/30,x3=(0:30)/30)
pred <- predict.gam(b,newd)
pred0 <- predict(b,newd,exclude="s(x0)") ## prediction excluding a term
## ...and the same, but without needing to provide x0 prediction data...
newd1 <- newd;newd1$x0 <- NULL ## remove x0 from `newd1'
pred1 <- predict(b,newd1,exclude="s(x0)",newdata.guaranteed=TRUE)
#############################################
## difference between "terms" and "iterms"
#############################################
nd2 <- data.frame(x0=c(.25,.5),x1=c(.25,.5),x2=c(.25,.5),x3=c(.25,.5))
predict(b,nd2,type="terms",se=TRUE)
predict(b,nd2,type="iterms",se=TRUE)
#########################################################
## now get variance of sum of predictions using lpmatrix
#########################################################
Xp <- predict(b,newd,type="lpmatrix")
## Xp %*% coef(b) yields vector of predictions
a <- rep(1,31)
Xs <- t(a) %*% Xp ## Xs %*% coef(b) gives sum of predictions
var.sum <- Xs %*% b$Vp %*% t(Xs)
#############################################################
## Now get the variance of non-linear function of predictions
## by simulation from posterior distribution of the params
#############################################################
rmvn <- function(n,mu,sig) { ## MVN random deviates
L <- mroot(sig);m <- ncol(L);
t(mu + L%*%matrix(rnorm(m*n),m,n))
}
br <- rmvn(1000,coef(b),b$Vp) ## 1000 replicate param. vectors
res <- rep(0,1000)
for (i in 1:1000)
{ pr <- Xp %*% br[i,] ## replicate predictions
res[i] <- sum(log(abs(pr))) ## example non-linear function
}
mean(res);var(res)
## loop is replace-able by following ....
res <- colSums(log(abs(Xp %*% t(br))))
##################################################################
## The following shows how to use use an "lpmatrix" as a lookup
## table for approximate prediction. The idea is to create
## approximate prediction matrix rows by appropriate linear
## interpolation of an existing prediction matrix. The additivity
## of a GAM makes this possible.
## There is no reason to ever do this in R, but the following
## code provides a useful template for predicting from a fitted
## gam *outside* R: all that is needed is the coefficient vector
## and the prediction matrix. Use larger `Xp'/ smaller `dx' and/or
## higher order interpolation for higher accuracy.
###################################################################
xn <- c(.341,.122,.476,.981) ## want prediction at these values
x0 <- 1 ## intercept column
dx <- 1/30 ## covariate spacing in `newd'
for (j in 0:2) { ## loop through smooth terms
cols <- 1+j*9 +1:9 ## relevant cols of Xp
i <- floor(xn[j+1]*30) ## find relevant rows of Xp
w1 <- (xn[j+1]-i*dx)/dx ## interpolation weights
## find approx. predict matrix row portion, by interpolation
x0 <- c(x0,Xp[i+2,cols]*w1 + Xp[i+1,cols]*(1-w1))
}
dim(x0)<-c(1,28)
fv <- x0%*%coef(b) + xn[4];fv ## evaluate and add offset
se <- sqrt(x0%*%b$Vp%*%t(x0));se ## get standard error
## compare to normal prediction
predict(b,newdata=data.frame(x0=xn[1],x1=xn[2],
x2=xn[3],x3=xn[4]),se=TRUE)
##################################################################
# illustration of unsafe scale dependent transforms in smooths....
##################################################################
b0 <- gam(y~s(x0)+s(x1)+s(x2)+x3,data=dat) ## safe
b1 <- gam(y~s(x0)+s(I(x1/2))+s(x2)+scale(x3),data=dat) ## safe
b2 <- gam(y~s(x0)+s(scale(x1))+s(x2)+scale(x3),data=dat) ## unsafe
pd <- dat; pd$x1 <- pd$x1/2; pd$x3 <- pd$x3/2
par(mfrow=c(1,2))
plot(predict(b0,pd),predict(b1,pd),main="b0 and b1 predictions match")
abline(0,1,col=2)
plot(predict(b0,pd),predict(b2,pd),main="b2 unsafe, doesn't match")
abline(0,1,col=2)
####################################################################
## Differentiating the smooths in a model (with CIs for derivatives)
####################################################################
## simulate data and fit model...
dat <- gamSim(1,n=300,scale=sig)
b<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
plot(b,pages=1)
## now evaluate derivatives of smooths with associated standard
## errors, by finite differencing...
x.mesh <- seq(0,1,length=200) ## where to evaluate derivatives
newd <- data.frame(x0 = x.mesh,x1 = x.mesh, x2=x.mesh,x3=x.mesh)
X0 <- predict(b,newd,type="lpmatrix")
eps <- 1e-7 ## finite difference interval
x.mesh <- x.mesh + eps ## shift the evaluation mesh
newd <- data.frame(x0 = x.mesh,x1 = x.mesh, x2=x.mesh,x3=x.mesh)
X1 <- predict(b,newd,type="lpmatrix")
Xp <- (X1-X0)/eps ## maps coefficients to (fd approx.) derivatives
colnames(Xp) ## can check which cols relate to which smooth
par(mfrow=c(2,2))
for (i in 1:4) { ## plot derivatives and corresponding CIs
Xi <- Xp*0
Xi[,(i-1)*9+1:9+1] <- Xp[,(i-1)*9+1:9+1] ## Xi%*%coef(b) = smooth deriv i
df <- Xi%*%coef(b) ## ith smooth derivative
df.sd <- rowSums(Xi%*%b$Vp*Xi)^.5 ## cheap diag(Xi%*%b$Vp%*%t(Xi))^.5
plot(x.mesh,df,type="l",ylim=range(c(df+2*df.sd,df-2*df.sd)))
lines(x.mesh,df+2*df.sd,lty=2);lines(x.mesh,df-2*df.sd,lty=2)
}
```
r None
`anova.gam` Approximate hypothesis tests related to GAM fits
-------------------------------------------------------------
### Description
Performs hypothesis tests relating to one or more fitted `gam` objects. For a single fitted `gam` object, Wald tests of the significance of each parametric and smooth term are performed, so interpretation is analogous to `[drop1](../../stats/html/add1)` rather than `anova.lm` (i.e. it's like type III ANOVA, rather than a sequential type I ANOVA). Otherwise the fitted models are compared using an analysis of deviance table or GLRT test: this latter approach should not be used to test the significance of terms which can be penalized to zero. Models to be compared should be fitted to the same data using the same smoothing parameter selection method.
### Usage
```
## S3 method for class 'gam'
anova(object, ..., dispersion = NULL, test = NULL,
freq = FALSE)
## S3 method for class 'anova.gam'
print(x, digits = max(3, getOption("digits") - 3),...)
```
### Arguments
| | |
| --- | --- |
| `object,...` | fitted model objects of class `gam` as produced by `gam()`. |
| `x` | an `anova.gam` object produced by a single model call to `anova.gam()`. |
| `dispersion` | a value for the dispersion parameter: not normally used. |
| `test` | what sort of test to perform for a multi-model call. One of `"Chisq"`, `"F"` or `"Cp"`. Reset to `"Chisq"` for extended and general families unless `NULL`. |
| `freq` | whether to use frequentist or Bayesian approximations for parametric term p-values. See `<summary.gam>` for details. |
| `digits` | number of digits to use when printing output. |
### Details
If more than one fitted model is provided then `anova.glm` is used, with the difference in model degrees of freedom being taken as the difference in effective degrees of freedom (when possible this is a smoothing parameter uncertainty corrected version). For extended and general families this is set so that a GLRT test is used. The p-values resulting from the multi-model case are only approximate, and must be used with care. The approximation is most accurate when the comparison relates to unpenalized terms, or smoothers with a null space of dimension greater than zero. (Basically we require that the difference terms could be well approximated by unpenalized terms with degrees of freedom approximately the effective degrees of freedom). In simulations the p-values are usually slightly too low. For terms with a zero-dimensional null space (i.e. those which can be penalized to zero) the approximation is often very poor, and significance can be greatly overstated: i.e. p-values are often substantially too low. This case applies to random effect terms.
Note also that in the multi-model call to `anova.gam`, it is quite possible for a model with more terms to end up with lower effective degrees of freedom, but better fit, than the notionally null model with fewer terms. In such cases it is very rare that it makes sense to perform any sort of test, since there is then no basis on which to accept the notional null model.
If only one model is provided then the significance of each model term is assessed using Wald like tests, conditional on the smoothing parameter estimates: see `<summary.gam>` and Wood (2013a,b) for details. The p-values provided here are better justified than in the multi model case, and have close to the correct distribution under the null, unless smoothing parameters are poorly identified. ML or REML smoothing parameter selection leads to the best results in simulations as they tend to avoid occasional severe undersmoothing. In replication of the full simulation study of Scheipl et al. (2008) the tests give almost indistinguishable power to the method recommended there, but slightly too low p-values under the null in their section 3.1.8 test for a smooth interaction (the Scheipl et al. recommendation is not used directly, because it only applies in the Gaussian case, and requires model refits, but it is available in package `RLRsim`).
In the single model case `print.anova.gam` is used as the printing method.
By default the p-values for parametric model terms are also based on Wald tests using the Bayesian covariance matrix for the coefficients. This is appropriate when there are "re" terms present, and is otherwise rather similar to the results using the frequentist covariance matrix (`freq=TRUE`), since the parametric terms themselves are usually unpenalized. Default P-values for parametric terms that are penalized using the `paraPen` argument will not be good.
### Value
In the multi-model case `anova.gam` produces output identical to `[anova.glm](../../stats/html/anova.glm)`, which it in fact uses.
In the single model case an object of class `anova.gam` is produced, which is in fact an object returned from `<summary.gam>`.
`print.anova.gam` simply produces tabulated output.
### WARNING
If models 'a' and 'b' differ only in terms with no un-penalized components (such as random effects) then p values from anova(a,b) are unreliable, and usually much too low.
Default P-values will usually be wrong for parametric terms penalized using ‘paraPen’: use freq=TRUE to obtain better p-values when the penalties are full rank and represent conventional random effects.
For a single model, interpretation is similar to drop1, not anova.lm.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) with substantial improvements by Henric Nilsson.
### References
Scheipl, F., Greven, S. and Kuchenhoff, H. (2008) Size and power of tests for a zero random effect variance or polynomial regression in additive and linear mixed models. Comp. Statist. Data Anal. 52, 3283-3299
Wood, S.N. (2013a) On p-values for smooth components of an extended generalized additive model. Biometrika 100:221-228
Wood, S.N. (2013b) A simple test for random effects in regression models. Biometrika 100:1005-1010
### See Also
`<gam>`, `<predict.gam>`, `<gam.check>`, `<summary.gam>`
### Examples
```
library(mgcv)
set.seed(0)
dat <- gamSim(5,n=200,scale=2)
b<-gam(y ~ x0 + s(x1) + s(x2) + s(x3),data=dat)
anova(b)
b1<-gam(y ~ x0 + s(x1) + s(x2),data=dat)
anova(b,b1,test="F")
```
| programming_docs |
r None
`in.out` Which of a set of points lie within a polygon defined region
----------------------------------------------------------------------
### Description
Tests whether each of a set of points lie within a region defined by one or more (possibly nested) polygons. Points count as ‘inside’ if they are interior to an odd number of polygons.
### Usage
```
in.out(bnd,x)
```
### Arguments
| | |
| --- | --- |
| `bnd` | A two column matrix, the rows of which define the vertices of polygons defining the boundary of a region. Different polygons should be separated by an `NA` row, and the polygons are assumed closed. Alternatively it can be a list, where `bnd[[i]][[1]]`, `bnd[[i]][[2]]` defines the ith boundary loop. |
| `x` | A two column matrix. Each row is a point to test for inclusion in the region defined by `bnd`. Can also be a 2-vector, defining a single point. |
### Details
The algorithm works by counting boundary crossings (using compiled C code).
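A small sketch of the odd-crossing rule with nested loops (the coordinates below are illustrative): a point interior to both the outer square and an inner 'hole' lies inside an even number of polygons, so it counts as outside.

```r
library(mgcv)
## outer square with a nested inner square 'hole'; loops are separated
## by an NA row, and each polygon is assumed closed
outer <- matrix(c(0,0, 4,0, 4,4, 0,4), 4, 2, byrow = TRUE)
inner <- matrix(c(1,1, 3,1, 3,3, 1,3), 4, 2, byrow = TRUE)
bnd <- rbind(outer, c(NA, NA), inner)
pts <- rbind(c(0.5, 0.5),  ## between the loops: inside 1 polygon -> TRUE
             c(2, 2),      ## in the hole: inside 2 polygons -> FALSE
             c(5, 5))      ## outside both loops -> FALSE
in.out(bnd, pts)
```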
### Value
A logical vector of length `nrow(x)`. `TRUE` if the corresponding row of `x` is inside the boundary and `FALSE` otherwise.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
<https://www.maths.ed.ac.uk/~swood34/>
### Examples
```
library(mgcv)
data(columb.polys)
bnd <- columb.polys[[2]]
plot(bnd,type="n")
polygon(bnd)
x <- seq(7.9,8.7,length=20)
y <- seq(13.7,14.3,length=20)
gr <- as.matrix(expand.grid(x,y))
inside <- in.out(bnd,gr)
points(gr,col=as.numeric(inside)+1)
```
r None
`gam.check` Some diagnostics for a fitted gam model
----------------------------------------------------
### Description
Takes a fitted `gam` object produced by `gam()` and produces some diagnostic information about the fitting procedure and results. The default is to produce 4 residual plots, some information about the convergence of the smoothness selection optimization, and to run diagnostic tests of whether the basis dimension choices are adequate. Care should be taken in interpreting the results when applied to `gam` objects returned by `<gamm>`.
### Usage
```
gam.check(b, old.style=FALSE,
type=c("deviance","pearson","response"),
k.sample=5000,k.rep=200,
rep=0, level=.9, rl.col=2, rep.col="gray80", ...)
```
### Arguments
| | |
| --- | --- |
| `b` | a fitted `gam` object as produced by `<gam>()`. |
| `old.style` | If you want old fashioned plots, exactly as in Wood, 2006, set to `TRUE`. |
| `type` | type of residuals, see `<residuals.gam>`, used in all plots. |
| `k.sample` | Above this k testing uses a random sub-sample of data. |
| `k.rep` | how many re-shuffles to do to get p-value for k testing. |
| `rep, level, rl.col, rep.col` | arguments passed to `<qq.gam>()` when `old.style` is false, see there. |
| `...` | extra graphics parameters to pass to plotting functions. |
### Details
Checking a fitted `gam` is like checking a fitted `glm`, with two main differences. Firstly, the basis dimensions used for smooth terms need to be checked, to ensure that they are not so small that they force oversmoothing: the defaults are arbitrary. `<choose.k>` provides more detail, but the diagnostic tests described below and reported by this function may also help. Secondly, fitting may not always be as robust to violation of the distributional assumptions as would be the case for a regular GLM, so slightly more care may be needed here. In particular, the theory of quasi-likelihood implies that if the mean variance relationship is OK for a GLM, then other departures from the assumed distribution are not problematic: GAMs can sometimes be more sensitive. For example, un-modelled overdispersion will typically lead to overfit, as the smoothness selection criterion tries to reduce the scale parameter to the one specified. Similarly, it is not clear how sensitive REML and ML smoothness selection will be to deviations from the assumed response distribution. For these reasons this routine uses an enhanced residual QQ plot.
This function plots 4 standard diagnostic plots, some smoothing parameter estimation convergence information and the results of tests which may indicate if the smoothing basis dimension for a term is too low.
Usually the 4 plots are various residual plots. For the default optimization methods the convergence information is summarized in a readable way, but for other optimization methods, whatever is returned by way of convergence diagnostics is simply printed.
The test of whether the basis dimension for a smooth is adequate (Wood, 2017, section 5.9) is based on computing an estimate of the residual variance based on differencing residuals that are near neighbours according to the (numeric) covariates of the smooth. This estimate divided by the residual variance is the `k-index` reported. The further below 1 this is, the more likely it is that there is missed pattern left in the residuals. The `p-value` is computed by simulation: the residuals are randomly re-shuffled `k.rep` times to obtain the null distribution of the differencing variance estimator, if there is no pattern in the residuals. For models fitted to more than `k.sample` data, the tests are based on `k.sample` randomly sampled data. Low p-values may indicate that the basis dimension, `k`, has been set too low, especially if the reported `edf` is close to k', the maximum possible EDF for the term. Note the disconcerting fact that if the test statistic itself is based on random resampling and the null is true, then the associated p-values will of course vary widely from one replicate to the next. Currently smooths of factor variables are not supported and will give an `NA` p-value.
Doubling a suspect `k` and re-fitting is sensible: if the reported `edf` increases substantially then you may have been missing something in the first fit. Of course p-values can be low for reasons other than a too low `k`. See `<choose.k>` for fuller discussion.
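The 'double `k` and re-fit' heuristic might be sketched as follows (a rough check, not a formal test; the specific `k` values and the use of `gamSim` data are arbitrary choices for illustration):

```r
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 400, scale = 2)      ## simulated test data
b1 <- gam(y ~ s(x2, k = 5), data = dat)   ## possibly too small a basis
b2 <- gam(y ~ s(x2, k = 10), data = dat)  ## doubled basis dimension
## a substantial edf increase on re-fitting suggests k was too low...
sum(b1$edf); sum(b2$edf)
gam.check(b2)  ## k-index and p-value for the re-fitted model
```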
The QQ plot produced is usually created by a call to `<qq.gam>`, and plots deviance residuals against approximate theoretical quantiles of the deviance residual distribution, according to the fitted model. If this looks odd then investigate further using `<qq.gam>`. Note that residuals for models fitted to binary data contain very little information useful for model checking (it is necessary to find some way of aggregating them first), so the QQ plot is unlikely to be useful in this case.
Take care when interpreting results from applying this function to a model fitted using `<gamm>`. In this case the returned `gam` object is based on the working model used for estimation, and will treat all the random effects as part of the error. This means that the residuals extracted from the `gam` object are not standardized for the family used or for the random effects or correlation structure. Usually it is necessary to produce your own residual checks based on consideration of the model structure you have used.
### Value
A vector of reference quantiles for the residual distribution, if these can be computed.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
N.H. Augustin, E-A Sauleaub, S.N. Wood (2012) On quantile quantile plots for generalized linear models. Computational Statistics & Data Analysis. 56(8), 2404-3409.
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`<choose.k>`, `<gam>`, `<magic>`
### Examples
```
library(mgcv)
set.seed(0)
dat <- gamSim(1,n=200)
b<-gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
plot(b,pages=1)
gam.check(b,pch=19,cex=.3)
```
r None
`summary.gam` Summary for a GAM fit
------------------------------------
### Description
Takes a fitted `gam` object produced by `gam()` and produces various useful summaries from it. (See `[sink](../../base/html/sink)` to divert output to a file.)
### Usage
```
## S3 method for class 'gam'
summary(object, dispersion=NULL, freq=FALSE, re.test=TRUE, ...)
## S3 method for class 'summary.gam'
print(x,digits = max(3, getOption("digits") - 3),
signif.stars = getOption("show.signif.stars"),...)
```
### Arguments
| | |
| --- | --- |
| `object` | a fitted `gam` object as produced by `gam()`. |
| `x` | a `summary.gam` object produced by `summary.gam()`. |
| `dispersion` | A known dispersion parameter. `NULL` to use estimate or default (e.g. 1 for Poisson). |
| `freq` | By default p-values for parametric terms are calculated using the Bayesian estimated covariance matrix of the parameter estimators. If this is set to `TRUE` then the frequentist covariance matrix of the parameters is used instead. |
| `re.test` | Should tests be performed for random effect terms (including any term with a zero dimensional null space)? For large models these tests can be computationally expensive. |
| `digits` | controls number of digits printed in output. |
| `signif.stars` | Should significance stars be printed alongside output. |
| `...` | other arguments. |
### Details
Model degrees of freedom are taken as the trace of the influence (or hat) matrix *A* for the model fit. Residual degrees of freedom are taken as number of data minus model degrees of freedom. Let *P\_i* be the matrix giving the parameters of the ith smooth when applied to the data (or pseudodata in the generalized case) and let *X* be the design matrix of the model. Then *tr(XP\_i)* is the edf for the ith term. Clearly this definition causes the edf's to add up properly! An alternative version of EDF is more appropriate for p-value computation, and is based on the trace of *2A - AA*.
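The additivity of the term-wise EDFs can be seen directly in a fitted object: the per-coefficient EDFs in `b$edf` sum to the total model degrees of freedom, and the smooth-term EDFs reported by `summary` are blocks of that sum (a sketch with simulated data; the formula is arbitrary):

```r
library(mgcv)
set.seed(3)
dat <- gamSim(1, n = 200, scale = 2)
b <- gam(y ~ s(x0) + s(x1), data = dat)
sum(b$edf)      ## total model EDF: trace of the influence matrix A
summary(b)$edf  ## EDFs of the individual smooth terms
## total EDF = number of strictly parametric coefficients (here just
## the intercept) plus the smooth-term EDFs
1 + sum(summary(b)$edf)
```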
`print.summary.gam` tries to print various bits of summary information useful for term selection in a pretty way.
P-values for smooth terms are usually based on a test statistic motivated by an extension of Nychka's (1988) analysis of the frequentist properties of Bayesian confidence intervals for smooths (Marra and Wood, 2012). These have better frequentist performance (in terms of power and distribution under the null) than the alternative strictly frequentist approximation. When the Bayesian intervals have good across the function properties then the p-values have close to the correct null distribution and reasonable power (but there are no optimality results for the power). Full details are in Wood (2013b), although what is computed is actually a slight variant in which the components of the test statistic are weighted by the iterative fitting weights.
Note that for terms with no unpenalized terms (such as Gaussian random effects) the Nychka (1988) requirement for smoothing bias to be substantially less than variance breaks down (see e.g. appendix of Marra and Wood, 2012), and this results in incorrect null distribution for p-values computed using the above approach. In this case it is necessary to use an alternative approach designed for random effects variance components, and this is done. See Wood (2013a) for details: the test is based on a likelihood ratio statistic (with the reference distribution appropriate for the null hypothesis on the boundary of the parameter space).
All p-values are computed without considering uncertainty in the smoothing parameter estimates.
In simulations the p-values have best behaviour under ML smoothness selection, with REML coming second. In general the p-values behave well, but neglecting smoothing parameter uncertainty means that they may be somewhat too low when smoothing parameters are highly uncertain. High uncertainty happens in particular when smoothing parameters are poorly identified, which can occur with nested smooths or highly correlated covariates (high concurvity).
By default the p-values for parametric model terms are also based on Wald tests using the Bayesian covariance matrix for the coefficients. This is appropriate when there are "re" terms present, and is otherwise rather similar to the results using the frequentist covariance matrix (`freq=TRUE`), since the parametric terms themselves are usually unpenalized. Default P-values for parametric terms that are penalized using the `paraPen` argument will not be good. However if such terms represent conventional random effects with full rank penalties, then setting `freq=TRUE` is appropriate.
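The effect of `freq` can be seen by producing both summaries for the same fit. A minimal sketch (the model and simulated data here are illustrative only):

```r
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 200, scale = 2)      ## simulated test data
b <- gam(y ~ x1 + s(x2), data = dat)      ## one parametric, one smooth term
summary(b)$p.table                ## Bayesian covariance matrix (default)
summary(b, freq = TRUE)$p.table   ## frequentist covariance matrix
```

With an unpenalized parametric term and no `"re"` terms, the two tables should be very similar, as the Details above suggest.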
### Value
`summary.gam` produces a list of summary information for a fitted `gam` object.
| | |
| --- | --- |
| `p.coeff` | is an array of estimates of the strictly parametric model coefficients. |
| `p.t` | is an array of the `p.coeff`'s divided by their standard errors. |
| `p.pv` | is an array of p-values for the null hypothesis that the corresponding parameter is zero. Calculated with reference to the t distribution with the estimated residual degrees of freedom for the model fit if the dispersion parameter has been estimated, and the standard normal if not. |
| `m` | The number of smooth terms in the model. |
| `chi.sq` | An array of test statistics for assessing the significance of model smooth terms. See details. |
| `s.pv` | An array of approximate p-values for the null hypotheses that each smooth term is zero. Be warned, these are only approximate. |
| `se` | array of standard error estimates for all parameter estimates. |
| `r.sq` | The adjusted r-squared for the model. Defined as the proportion of variance explained, where original variance and residual variance are both estimated using unbiased estimators. This quantity can be negative if your model is worse than a one parameter constant model, and can be higher for the smaller of two nested models! The proportion null deviance explained is probably more appropriate for non-normal errors. Note that `r.sq` does not include any offset in the one parameter model. |
| `dev.expl` | The proportion of the null deviance explained by the model. The null deviance is computed taking account of any offset, so `dev.expl` can be substantially lower than `r.sq` when an offset is present. |
| `edf` | array of estimated degrees of freedom for the model terms. |
| `residual.df` | estimated residual degrees of freedom. |
| `n` | number of data. |
| `np` | number of model coefficients (regression coefficients, not smoothing parameters or other parameters of likelihood). |
| `rank` | apparent model rank. |
| `method` | The smoothing selection criterion used. |
| `sp.criterion` | The minimized value of the smoothness selection criterion. Note that for ML and REML methods, what is reported is the negative log marginal likelihood or negative log restricted likelihood. |
| `scale` | estimated (or given) scale parameter. |
| `family` | the family used. |
| `formula` | the original GAM formula. |
| `dispersion` | the scale parameter. |
| `pTerms.df` | the degrees of freedom associated with each parametric term (excluding the constant). |
| `pTerms.chi.sq` | a Wald statistic for testing the null hypothesis that the each parametric term is zero. |
| `pTerms.pv` | p-values associated with the tests that each term is zero. For penalized fits these are approximate. The reference distribution is an appropriate chi-squared when the scale parameter is known, and is based on an F when it is not. |
| `cov.unscaled` | The estimated covariance matrix of the parameters (or estimators if `freq=TRUE`), divided by scale parameter. |
| `cov.scaled` | The estimated covariance matrix of the parameters (estimators if `freq=TRUE`). |
| `p.table` | significance table for parameters |
| `s.table` | significance table for smooths |
| `p.Terms` | significance table for parametric model terms |
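The listed components can be extracted directly from the returned list. A minimal sketch (model and data are illustrative only):

```r
library(mgcv)
set.seed(0)
dat <- gamSim(1, n = 200, scale = 2)     ## simulated test data
b <- gam(y ~ s(x0) + s(x1), data = dat)
sb <- summary(b)    ## a list with the components tabulated above
sb$s.table          ## significance table for the smooth terms
sb$r.sq             ## adjusted r-squared
sb$edf              ## estimated degrees of freedom per smooth
```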
### WARNING
The p-values are approximate and neglect smoothing parameter uncertainty. They are likely to be somewhat too low when smoothing parameter estimates are highly uncertain: do read the details section. If the exact values matter, read Wood (2013a or b).
P-values for terms penalized via ‘paraPen’ are unlikely to be correct.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) with substantial improvements by Henric Nilsson.
### References
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
Nychka (1988) Bayesian Confidence Intervals for Smoothing Splines. Journal of the American Statistical Association 83:1134-1143.
Wood, S.N. (2013a) A simple test for random effects in regression models. Biometrika 100:1005-1010
Wood, S.N. (2013b) On p-values for smooth components of an extended generalized additive model. Biometrika 100:221-228
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### See Also
`<gam>`, `<predict.gam>`, `<gam.check>`, `<anova.gam>`, `<gam.vcomp>`, `<sp.vcov>`
### Examples
```
library(mgcv)
set.seed(0)
dat <- gamSim(1,n=200,scale=2) ## simulate data
b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3),data=dat)
plot(b,pages=1)
summary(b)
## now check the p-values by using a pure regression spline.....
b.d <- round(summary(b)$edf)+1 ## get edf per smooth
b.d <- pmax(b.d,3) # can't have basis dimension less than 3!
bc<-gam(y~s(x0,k=b.d[1],fx=TRUE)+s(x1,k=b.d[2],fx=TRUE)+
s(x2,k=b.d[3],fx=TRUE)+s(x3,k=b.d[4],fx=TRUE),data=dat)
plot(bc,pages=1)
summary(bc)
## Example where some p-values are less reliable...
dat <- gamSim(6,n=200,scale=2)
b <- gam(y~s(x0,m=1)+s(x1)+s(x2)+s(x3)+s(fac,bs="re"),data=dat)
## Here s(x0,m=1) can be penalized to zero, so p-value approximation
## cruder than usual...
summary(b)
## p-value check - increase k to make this useful!
k<-20;n <- 200;p <- rep(NA,k)
for (i in 1:k)
{ b<-gam(y~te(x,z),data=data.frame(y=rnorm(n),x=runif(n),z=runif(n)),
method="ML")
p[i]<-summary(b)$s.p[1]
}
plot(((1:k)-0.5)/k,sort(p))
abline(0,1,col=2)
ks.test(p,"punif") ## how close to uniform are the p-values?
## A Gamma example, by modifying `gamSim' output...
dat <- gamSim(1,n=400,dist="normal",scale=1)
dat$f <- dat$f/4 ## true linear predictor
Ey <- exp(dat$f);scale <- .5 ## mean and GLM scale parameter
## Note that `shape' and `scale' in `rgamma' are almost
## opposite terminology to that used with GLM/GAM...
dat$y <- rgamma(Ey*0,shape=1/scale,scale=Ey*scale)
bg <- gam(y~ s(x0)+ s(x1)+s(x2)+s(x3),family=Gamma(link=log),
data=dat,method="REML")
summary(bg)
```
r None
`gevlss` Generalized Extreme Value location-scale model family
---------------------------------------------------------------
### Description
The `gevlss` family implements Generalized Extreme Value location scale additive models in which the location, scale and shape parameters depend on additive smooth predictors. Usable only with `<gam>`, the linear predictors are specified via a list of formulae.
### Usage
```
gevlss(link=list("identity","identity","logit"))
```
### Arguments
| | |
| --- | --- |
| `link` | three item list specifying the link for the location scale and shape parameters. See details. |
### Details
Used with `<gam>` to fit Generalized Extreme Value location scale and shape models. `gam` is called with a list containing 3 formulae: the first specifies the response on the left hand side and the structure of the linear predictor for the location parameter on the right hand side. The second is one sided, specifying the linear predictor for the log scale parameter on the right hand side. The third is one sided specifying the linear predictor for the shape parameter.
Link functions `"identity"` and `"log"` are available for the location (mu) parameter. There is no choice of link for the log scale parameter (*rho = log sigma*). The shape parameter (xi) defaults to a modified logit link restricting its range to (-1,.5), the upper limit is required to ensure finite variance, while the lower limit ensures consistency of the MLE (Smith, 1985).
The fitted values for this family will be a three column matrix. The first column is the location parameter, the second column is the log scale parameter, the third column is the shape parameter.
This family does not produce a null deviance. Note that the distribution for *xi=0* is approximated by setting *xi* to a small number.
The derivative system code for this family is mostly auto-generated, and the family is still somewhat experimental.
The GEV distribution is rather challenging numerically, and for small datasets or poorly fitting models improved numerical robustness may be obtained by using the extended Fellner-Schall method of Wood and Fasiolo (2017) for smoothing parameter estimation. See examples.
### Value
An object inheriting from class `general.family`.
### References
Smith, R.L. (1985) Maximum likelihood estimation in a class of nonregular cases. Biometrika 72(1):67-90
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
Wood, S.N. and M. Fasiolo (2017) A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models. Biometrics 73(4): 1071-1081. doi: [10.1111/biom.12666](https://doi.org/10.1111/biom.12666)
### Examples
```
library(mgcv)
Fi.gev <- function(z,mu,sigma,xi) {
## GEV inverse cdf.
xi[abs(xi)<1e-8] <- 1e-8 ## approximate xi=0, by small xi
x <- mu + ((-log(z))^-xi-1)*sigma/xi
}
## simulate test data...
f0 <- function(x) 2 * sin(pi * x)
f1 <- function(x) exp(2 * x)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
set.seed(1)
n <- 500
x0 <- runif(n);x1 <- runif(n);x2 <- runif(n)
mu <- f2(x2)
rho <- f0(x0)
xi <- (f1(x1)-4)/9
y <- Fi.gev(runif(n),mu,exp(rho),xi)
dat <- data.frame(y,x0,x1,x2);pairs(dat)
## fit model....
b <- gam(list(y~s(x2),~s(x0),~s(x1)),family=gevlss,data=dat)
## same fit using the extended Fellner-Schall method which
## can provide improved numerical robustness...
b <- gam(list(y~s(x2),~s(x0),~s(x1)),family=gevlss,data=dat,
optimizer="efs")
## plot and look at residuals...
plot(b,pages=1,scale=0)
summary(b)
par(mfrow=c(2,2))
mu <- fitted(b)[,1];rho <- fitted(b)[,2]
xi <- fitted(b)[,3]
## Get the predicted expected response...
fv <- mu + exp(rho)*(gamma(1-xi)-1)/xi
rsd <- residuals(b)
plot(fv,rsd);qqnorm(rsd)
plot(fv,residuals(b,"pearson"))
plot(fv,residuals(b,"response"))
```
| programming_docs |
r None
`smooth.construct.tensor.smooth.spec` Tensor product smoothing constructor
---------------------------------------------------------------------------
### Description
A special `smooth.construct` method function for creating tensor product smooths from any combination of single penalty marginal smooths.
### Usage
```
## S3 method for class 'tensor.smooth.spec'
smooth.construct(object, data, knots)
```
### Arguments
| | |
| --- | --- |
| `object` | a smooth specification object of class `tensor.smooth.spec`, usually generated by a term like `te(x,z)` in a `<gam>` model formula |
| `data` | a list containing just the data (including any `by` variable) required by this term, with names corresponding to `object$term` (and `object$by`). The `by` variable is the last element. |
| `knots` | a list containing any knots supplied for basis setup — in same order and with same names as `data`. Can be `NULL`. See details for further information. |
### Details
Tensor product smooths are smooths of several variables which allow the degree of smoothing to be different with respect to different variables. They are useful as smooth interaction terms, as they are invariant to linear rescaling of the covariates, which means, for example, that they are insensitive to the measurement units of the different covariates. They are also useful whenever isotropic smoothing is inappropriate. See `<te>`, `<smooth.construct>` and `<smooth.terms>`.
### Value
An object of class `"tensor.smooth"`. See `<smooth.construct>`, for the elements that this object will contain.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2006) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036
### See Also
`[cSplineDes](csplinedes)`
### Examples
```
## see ?gam
```
r None
`gamlss.gH` Calculating derivatives of log-likelihood wrt regression coefficients
----------------------------------------------------------------------------------
### Description
Mainly intended for internal use with location scale model families. Given the derivatives of the log-likelihood wrt the linear predictor, this function obtains the derivatives and Hessian wrt the regression coefficients and derivatives of the Hessian w.r.t. the smoothing parameters. For input derivative array packing conventions see `<trind.generator>`.
### Usage
```
gamlss.gH(X, jj, l1, l2, i2, l3 = 0, i3 = 0, l4 = 0, i4 = 0, d1b = 0,
d2b = 0, deriv = 0, fh = NULL, D = NULL)
```
### Arguments
| | |
| --- | --- |
| `X` | matrix containing the model matrices of all the linear predictors. |
| `jj` | list of index vectors such that `X[,jj[[i]]]` is the model matrix of the i-th linear predictor. |
| `l1` | array of 1st order derivatives of each element of the log-likelihood wrt each parameter. |
| `l2` | array of 2nd order derivatives of each element of the log-likelihood wrt each parameter. |
| `i2` | two-dimensional index array, such that `l2[,i2[i,j]]` contains the partial w.r.t. params indexed by i,j with no restriction on the index values (except that they are in 1,...,ncol(l1)). |
| `l3` | array of 3rd order derivatives of each element of the log-likelihood wrt each parameter. |
| `i3` | third-dimensional index array, such that `l3[,i3[i,j,k]]` contains the partial w.r.t. params indexed by i,j,k. |
| `l4` | array of 4th order derivatives of each element of the log-likelihood wrt each parameter. |
| `i4` | third-dimensional index array, such that `l4[,i4[i,j,k,l]]` contains the partial w.r.t. params indexed by i,j,k,l. |
| `d1b` | first derivatives of the regression coefficients wrt the smoothing parameters. |
| `d2b` | second derivatives of the regression coefficients wrt the smoothing parameters. |
| `deriv` | if `deriv==0` only first and second order derivatives will be calculated. If `deriv==1` the function return also the diagonal of the first derivative of the Hessian, if `deriv==2` it return the full 3rd order derivative and if `deriv==3` it provides also 4th order derivatives. |
| `fh` | eigen-decomposition or Cholesky factor of the penalized Hessian. |
| `D` | diagonal matrix, used to provide some scaling. |
### Value
A list containing `lb` - the grad vector w.r.t. coefs; `lbb` - the Hessian matrix w.r.t. coefs; `d1H` - either a list of the derivatives of the Hessian w.r.t. the smoothing parameters, or a single matrix whose columns are the leading diagonals of these dervative matrices; `trHid2H` - the trace of the inverse Hessian multiplied by the second derivative of the Hessian w.r.t. all combinations of smoothing parameters.
### Author(s)
Simon N. Wood <[email protected]>.
### See Also
`<trind.generator>`
r None
`mroot` Smallest square root of matrix
---------------------------------------
### Description
Find a square root of a positive semi-definite matrix, having as few columns as possible. Uses either pivoted choleski decomposition or singular value decomposition to do this.
### Usage
```
mroot(A,rank=NULL,method="chol")
```
### Arguments
| | |
| --- | --- |
| `A` | The positive semi-definite matrix, a square root of which is to be found. |
| `rank` | if the rank of the matrix `A` is known then it should be supplied. `NULL` or <1 imply that it should be estimated. |
| `method` | `"chol"` to use pivoted Choleski decomposition, which is fast but tends to over-estimate rank. `"svd"` to use singular value decomposition, which is slow, but is the most accurate way to estimate rank. |
### Details
The function uses SVD, or a pivoted Choleski routine. It is primarily of use for turning penalized regression problems into ordinary regression problems.
### Value
A matrix, *B* with as many columns as the rank of *A*, and such that *A=BB'*.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### Examples
```
require(mgcv)
set.seed(0)
a <- matrix(runif(24),6,4)
A <- a%*%t(a) ## A is +ve semi-definite, rank 4
B <- mroot(A) ## default pivoted choleski method
tol <- 100*.Machine$double.eps
chol.err <- max(abs(A-B%*%t(B)));chol.err
if (chol.err>tol) warning("mroot (chol) suspect")
B <- mroot(A,method="svd") ## svd method
svd.err <- max(abs(A-B%*%t(B)));svd.err
if (svd.err>tol) warning("mroot (svd) suspect")
```
r None
`logLik.gam` AIC and Log likelihood for a fitted GAM
-----------------------------------------------------
### Description
Function to extract the log-likelihood for a fitted `gam` model (note that the models are usually fitted by penalized likelihood maximization). Used by `[AIC](../../stats/html/aic)`. See details for more information on AIC computation.
### Usage
```
## S3 method for class 'gam'
logLik(object,...)
```
### Arguments
| | |
| --- | --- |
| `object` | fitted model objects of class `gam` as produced by `gam()`. |
| `...` | un-used in this case |
### Details
Modification of `logLik.glm` which corrects the degrees of freedom for use with `gam` objects.
The function is provided so that `[AIC](../../stats/html/aic)` functions correctly with `gam` objects, and uses the appropriate degrees of freedom (accounting for penalization). See e.g. Wood, Pya and Saefken (2016) for a derivation of an appropriate AIC.
There are two possible AIC's that might be considered for use with GAMs. Marginal AIC is based on the marginal likelihood of the GAM, that is the likelihood based on treating penalized (e.g. spline) coefficients as random and integrating them out. The degrees of freedom is then the number of smoothing/variance parameters + the number of fixed effects. The problem with Marginal AIC is that marginal likelihood underestimates variance components/oversmooths, so that the approach favours simpler models excessively (substituting REML does not work, because REML is not comparable between models with different unpenalized/fixed components). Conditional AIC uses the likelihood of all the model coefficients, evaluated at the penalized MLE. The degrees of freedom to use then is the effective degrees of freedom for the model. However, Greven and Kneib (2010) show that the neglect of smoothing parameter uncertainty can lead to this conditional AIC being excessively likely to select larger models. Wood, Pya and Saefken (2016) propose a simple correction to the effective degrees of freedom to fix this problem. `mgcv` applies this correction whenever possible: that is when using `ML` or `REML` smoothing parameter selection with `<gam>` or `<bam>`. The correction is not computable when using the Extended Fellner Schall or BFGS optimizer (since the correction requires an estimate of the covariance matrix of the log smoothing parameters).
### Value
Standard `logLik` object: see `[logLik](../../stats/html/loglik)`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]) based directly on `logLik.glm`
### References
Greven, S., and Kneib, T. (2010), On the Behaviour of Marginal and Conditional AIC in Linear Mixed Models, Biometrika, 97, 773-789.
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models (with discussion). Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
### See Also
`[AIC](../../stats/html/aic)`
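### Examples
A minimal sketch of conditional-AIC comparison of two nested fits using the corrected degrees of freedom (the models and simulated data are illustrative only):

```r
library(mgcv)
set.seed(4)
dat <- gamSim(1, n = 200, scale = 2)   ## simulated test data
b0 <- gam(y ~ s(x0), data = dat, method = "REML")
b1 <- gam(y ~ s(x0) + s(x1), data = dat, method = "REML")
logLik(b0); logLik(b1)  ## penalization-corrected df in the "df" attribute
AIC(b0, b1)             ## uses logLik.gam internally
```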
r None
`null.space.dimension` The basis of the space of un-penalized functions for a TPRS
-----------------------------------------------------------------------------------
### Description
The thin plate spline penalties give zero penalty to some functions. The space of these functions is spanned by a set of polynomial terms. `null.space.dimension` finds the dimension of this space, *M*, given the number of covariates that the smoother is a function of, *d*, and the order of the smoothing penalty, *m*. If *m* does not satisfy *2m>d* then the smallest possible dimension for the null space is found given *d* and the requirement that the smooth should be visually smooth.
### Usage
```
null.space.dimension(d,m)
```
### Arguments
| | |
| --- | --- |
| `d` | is a positive integer - the number of variables of which the t.p.s. is a function. |
| `m` | a non-negative integer giving the order of the penalty functional, or signalling that the default order should be used. |
### Details
Thin plate splines are only visually smooth if the order of the wiggliness penalty, *m*, satisfies *2m > d+1*. If *2m<d+1* then this routine finds the smallest *m* giving visual smoothness for the given *d*, otherwise the supplied *m* is used. The null space dimension is given by:
*M=(m+d-1)!/(d!(m-1)!)*
which is the value returned.
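Since this expression is just the binomial coefficient *choose(m+d-1, d)*, the returned value can be checked against it whenever *2m > d+1* holds, so that the supplied *m* is actually used. A quick sketch:

```r
require(mgcv)
d <- 2; m <- 2              ## 2m > d+1 holds, so m is used as supplied
null.space.dimension(d, m)  ## 3: null space spanned by 1, x, z
choose(m + d - 1, d)        ## 3: same value from the formula above
```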
### Value
An integer (array), the null space dimension *M*.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114
<https://www.maths.ed.ac.uk/~swood34/>
### See Also
`[tprs](smooth.construct.tp.smooth.spec)`
### Examples
```
require(mgcv)
null.space.dimension(2,0)
```
r None
`gammals` Gamma location-scale model family
--------------------------------------------
### Description
The `gammals` family implements gamma location scale additive models in which the log of the mean and the log of the scale parameter (see details) can depend on additive smooth predictors. Useable only with `<gam>`, the linear predictors are specified via a list of formulae.
### Usage
```
gammals(link=list("identity","log"),b=-7)
```
### Arguments
| | |
| --- | --- |
| `link` | two item list specifying the link for the mean and the standard deviation. See details for meaning which may not be intuitive. |
| `b` | The minimum log scale parameter. |
### Details
Used with `<gam>` to fit gamma location - scale models parameterized in terms of the log mean and the log scale parameter (the response variance is the squared mean multiplied by the scale parameter). Note that `identity` links mean that the linear predictors give the log mean and log scale directly. By default the `log` link for the scale parameter simply forces the log scale parameter to have a lower limit given by argument `b`: if *l* is the linear predictor for the log scale parameter, *s*, then *log(s) = b + log(1+e^l)*.
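The lower-bounded `log` link for the scale parameter can be examined directly: the mapping *log(s) = b + log(1+e^l)* behaves like the identity for large *l* but flattens out at *b* as *l* decreases. A sketch of this (using the default `b = -7`):

```r
b <- -7                          ## default lower limit on the log scale
l <- seq(-10, 5, length = 100)   ## linear predictor values
log.s <- b + log(1 + exp(l))     ## softplus-style mapping to log scale
plot(l, log.s, type = "l")       ## log(s) -> b as l -> -Inf, ~ l + b for large l
```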
`gam` is called with a list containing 2 formulae, the first specifies the response on the left hand side and the structure of the linear predictor for the log mean on the right hand side. The second is one sided, specifying the linear predictor for the log scale on the right hand side.
The fitted values for this family will be a two column matrix. The first column is the mean (on original, not log, scale), and the second column is the log scale. Predictions using `<predict.gam>` will also produce 2 column matrices for `type` `"link"` and `"response"`. The first column is on the original data scale when `type="response"` and on the log mean scale of the linear predictor when `type="link"`. The second column when `type="response"` is again the log scale parameter, but is on the linear predictor when `type="link"`.
The null deviance reported for this family is computed by setting the fitted values to the mean response, but using the model estimated scale.
### Value
An object inheriting from class `general.family`.
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### Examples
```
library(mgcv)
## simulate some data
f0 <- function(x) 2 * sin(pi * x)
f1 <- function(x) exp(2 * x)
f2 <- function(x) 0.2 * x^11 * (10 * (1 - x))^6 + 10 *
(10 * x)^3 * (1 - x)^10
f3 <- function(x) 0 * x
n <- 400;set.seed(9)
x0 <- runif(n);x1 <- runif(n);
x2 <- runif(n);x3 <- runif(n);
mu <- exp((f0(x0)+f2(x2))/5)
th <- exp(f1(x1)/2-2)
y <- rgamma(n,shape=1/th,scale=mu*th)
b1 <- gam(list(y~s(x0)+s(x2),~s(x1)+s(x3)),family=gammals)
plot(b1,pages=1)
summary(b1)
gam.check(b1)
plot(mu,fitted(b1)[,1]);abline(0,1,col=2)
plot(log(th),fitted(b1)[,2]);abline(0,1,col=2)
```
r None
`Rrank` Find rank of upper triangular matrix
---------------------------------------------
### Description
Finds rank of upper triangular matrix R, by estimating condition number of upper `rank` by `rank` block, and reducing `rank` until this is acceptably low. Assumes R has been computed by a method that uses pivoting, usually pivoted QR or Choleski.
### Usage
```
Rrank(R,tol=.Machine$double.eps^.9)
```
### Arguments
| | |
| --- | --- |
| `R` | An upper triangular matrix, obtained by pivoted QR or pivoted Choleski. |
| `tol` | the tolerance to use for judging rank. |
### Details
The method is based on Cline et al. (1979) as described in Golub and van Loan (1996).
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Cline, A.K., C.B. Moler, G.W. Stewart and J.H. Wilkinson (1979) An estimate for the condition number of a matrix. SIAM J. Num. Anal. 16, 368-375
Golub, G.H, and C.F. van Loan (1996) Matrix Computations 3rd ed. Johns Hopkins University Press, Baltimore.
### Examples
```
set.seed(0)
n <- 10;p <- 5
X <- matrix(runif(n*(p-1)),n,p)
qrx <- qr(X,LAPACK=TRUE)
Rrank(qr.R(qrx))
```
r None
`twlss` Tweedie location scale family
--------------------------------------
### Description
Tweedie family in which the mean, power and scale parameters can all depend on smooth linear predictors. Restricted to estimation via the extended Fellner Schall method of Wood and Fasiolo (2017). Only usable with `<gam>`. Tweedie distributions are exponential family with variance given by *s\*m^p* where *s* is a scale parameter, *p* a parameter (here between 1 and 2) and *m* is the mean.
### Usage
```
twlss(link=list("log","identity","identity"),a=1.01,b=1.99)
```
### Arguments
| | |
| --- | --- |
| `link` | The link function list: currently no choice. |
| `a` | lower limit on the power parameter relating variance to mean. |
| `b` | upper limit on power parameter. |
### Details
A Tweedie random variable with 1<p<2 is a sum of `N` gamma random variables where `N` has a Poisson distribution. The p=1 case is a generalization of a Poisson distribution and is a discrete distribution supported on integer multiples of the scale parameter. For 1<p<2 the distribution is supported on the positive reals with a point mass at zero. p=2 is a gamma distribution. As p gets very close to 1 the continuous distribution begins to converge on the discretely supported limit at p=1, and is therefore highly multimodal. See `[ldTweedie](ldtweedie)` for more on this behaviour.
The Tweedie density involves a normalizing constant with no closed form, so this is evaluated using the series evaluation method of Dunn and Smyth (2005), with extensions to also compute the derivatives w.r.t. `p` and the scale parameter. Without restricting `p` to (1,2) the calculation of Tweedie densities is more difficult, and there does not currently seem to be an implementation which offers any benefit over `[quasi](../../stats/html/family)`. If you need this case then the `tweedie` package is the place to start.
### Value
An object inheriting from class `general.family`.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected]).
### References
Dunn, P.K. and G.K. Smyth (2005) Series evaluation of Tweedie exponential dispersion model densities. Statistics and Computing 15:267-280
Tweedie, M. C. K. (1984). An index which distinguishes between some important exponential families. Statistics: Applications and New Directions. Proceedings of the Indian Statistical Institute Golden Jubilee International Conference (Eds. J. K. Ghosh and J. Roy), pp. 579-604. Calcutta: Indian Statistical Institute.
Wood, S.N. and Fasiolo, M., (2017). A generalized Fellner-Schall method for smoothing parameter optimization with application to Tweedie location, scale and shape models. Biometrics, 73(4), pp.1071-1081. <https://onlinelibrary.wiley.com/doi/full/10.1111/biom.12666>
Wood, S.N., N. Pya and B. Saefken (2016). Smoothing parameter and model selection for general smooth models. Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`[Tweedie](tweedie)`, `[ldTweedie](ldtweedie)`, `[rTweedie](rtweedie)`
### Examples
```
library(mgcv)
set.seed(3)
n<-400
## Simulate data...
dat <- gamSim(1,n=n,dist="poisson",scale=.2)
dat$y <- rTweedie(exp(dat$f),p=1.3,phi=.5) ## Tweedie response
## Fit a fixed p Tweedie, with wrong link ...
b <- gam(list(y~s(x0)+s(x1)+s(x2)+s(x3),~1,~1),family=twlss(),
data=dat)
plot(b,pages=1)
print(b)
rm(dat)
```
r None
`sp.vcov` Extract smoothing parameter estimator covariance matrix from (RE)ML GAM fit
--------------------------------------------------------------------------------------
### Description
Extracts the estimated covariance matrix for the log smoothing parameter estimates from a (RE)ML estimated `gam` object, provided the fit was with a method that evaluated the required Hessian.
### Usage
```
sp.vcov(x,edge.correct=TRUE,reg=1e-3)
```
### Arguments
| | |
| --- | --- |
| `x` | a fitted model object of class `gam` as produced by `gam()`. |
| `edge.correct` | if the model was fitted with `edge.correct=TRUE` (see `<gam.control>`), then the returned covariance matrix will be for the edge corrected log smoothing parameters. |
| `reg` | regularizer for Hessian - default is equivalent to prior variance of 1000 on log smoothing parameters. |
### Details
Just extracts the inverse of the Hessian matrix of the negative (restricted) log likelihood w.r.t. the log smoothing parameters, if this has been obtained as part of fitting.
### Value
A matrix corresponding to the estimated covariance matrix of the log smoothing parameter estimators, if this can be extracted, otherwise `NULL`. If the scale parameter has been (RE)ML estimated (i.e. if the method was `"ML"` or `"REML"` and the scale parameter was unknown) then the last row and column relate to the log scale parameter. If `edge.correct=TRUE` and this was used in fitting then the edge corrected smoothing parameters are in attribute `lsp` of the returned matrix.
### Author(s)
Simon N. Wood [[email protected]](mailto:[email protected])
### References
Wood, S.N., N. Pya and B. Saefken (2016), Smoothing parameter and model selection for general smooth models (with discussion). Journal of the American Statistical Association 111, 1548-1575 doi: [10.1080/01621459.2016.1180986](https://doi.org/10.1080/01621459.2016.1180986)
### See Also
`<gam>`, `<gam.vcomp>`
### Examples
```
require(mgcv)
n <- 100
x <- runif(n);z <- runif(n)
y <- sin(x*2*pi) + rnorm(n)*.2
mod <- gam(y~s(x,bs="cc",k=10)+s(z),knots=list(x=seq(0,1,length=10)),
method="REML")
sp.vcov(mod)
```
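The returned covariance matrix can be used to form rough Wald-type intervals for the log smoothing parameters. A hedged sketch, refitting a model like the one above (the 1.96 multiplier and the use of `mod$sp` for the point estimates are illustrative assumptions, not part of `sp.vcov` itself):

```r
require(mgcv)
set.seed(1)
n <- 100
x <- runif(n); z <- runif(n)
y <- sin(x*2*pi) + rnorm(n)*.2
mod <- gam(y ~ s(x, bs = "cc", k = 10) + s(z),
           knots = list(x = seq(0, 1, length = 10)), method = "REML")
V <- sp.vcov(mod)           # covariance of the log smoothing parameter estimates
lsp <- log(mod$sp)          # the estimates themselves
## if the scale parameter was REML estimated, the last row/column of V
## relates to the log scale parameter, so keep only the first entries...
se <- sqrt(diag(V))[seq_along(lsp)]
cbind(lower = lsp - 1.96 * se, est = lsp, upper = lsp + 1.96 * se)
```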
r None
`showTree` Print Lisp-Style Representation of R Expression
-----------------------------------------------------------
### Description
Prints a Lisp-style representation of R expression. This can be useful for understanding how some things are parsed.
### Usage
```
showTree(e, write = cat)
```
### Arguments
| | |
| --- | --- |
| `e` | R expression. |
| `write` | function of one argument to write the result. |
### Author(s)
Luke Tierney
### Examples
```
showTree(quote(-3))
showTree(quote("x"<-1))
showTree(quote("f"(x)))
```
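Because `showTree` prints the parse tree, it is also a quick way to check how operator precedence and associativity were resolved; a small illustrative sketch:

```r
library(codetools)
## Multiplication binds tighter than addition, so `b * c` nests inside `+`:
showTree(quote(a + b * c))    # (+ a (* b c))
## Exponentiation is right-associative:
showTree(quote(a ^ b ^ c))    # (^ a (^ b c))
```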
r None
`codetools` Low Level Code Analysis Tools for R
------------------------------------------------
### Description
These functions provide some tools for analysing R code. Mainly intended to support the other tools in this package and byte code compilation.
### Usage
```
collectLocals(e, collect)
collectUsage(fun, name = "<anonymous>", ...)
constantFold(e, env = NULL, fail = NULL)
findFuncLocals(formals, body)
findLocals(e, envir = .BaseEnv)
findLocalsList(elist, envir = .BaseEnv)
flattenAssignment(e)
getAssignedVar(e)
isConstantValue(v, w)
makeCodeWalker(..., handler, call, leaf)
makeLocalsCollector(..., leaf, handler, isLocal, exit, collect)
makeUsageCollector(fun, ..., name, enterLocal, enterGlobal, enterInternal,
startCollectLocals, finishCollectLocals, warn,
signal)
walkCode(e, w = makeCodeWalker())
```
### Arguments
| | |
| --- | --- |
| `e` | R expression. |
| `elist` | list of R expressions. |
| `v` | R object. |
| `fun` | closure. |
| `formals` | formal arguments of a closure. |
| `body` | body of a closure. |
| `name` | character. |
| `env` | character. |
| `envir` | environment. |
| `w` | code walker. |
| `...` | extra elements for code walker. |
| `collect` | function. |
| `fail` | function. |
| `handler` | function. |
| `call` | function. |
| `leaf` | function. |
| `isLocal` | function. |
| `exit` | function. |
| `enterLocal` | function. |
| `enterGlobal` | function. |
| `enterInternal` | function. |
| `startCollectLocals` | function. |
| `finishCollectLocals` | function. |
| `warn` | function. |
| `signal` | function. |
### Author(s)
Luke Tierney
r None
`findGlobals` Find Global Functions and Variables Used by a Closure
--------------------------------------------------------------------
### Description
Finds global functions and variables used by a closure.
### Usage
```
findGlobals(fun, merge = TRUE)
```
### Arguments
| | |
| --- | --- |
| `fun` | function object; usually a closure. |
| `merge` | logical |
### Details
The result is an approximation. R semantics only allow variables that might be local to be identified (and even that assumes no use of `assign` and `rm`).
### Value
Character vector if `merge` is true; otherwise, a list with `functions` and `variables` character vector components. Character vectors are of length zero for non-closures.
### Author(s)
Luke Tierney
### Examples
```
findGlobals(findGlobals)
findGlobals(findGlobals, merge = FALSE)
```
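A small sketch of the approximation in practice: a variable that is never assigned locally is reported as global, even though it need not exist at run time:

```r
library(codetools)
f <- function() {
  x <- 1
  x + y       # `y` is not assigned anywhere in f, so it is treated as global
}
findGlobals(f, merge = FALSE)$variables   # "y"
```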
r None
`checkUsage` Check R Code for Possible Problems
------------------------------------------------
### Description
Check R code for possible problems.
### Usage
```
checkUsage(fun, name = "<anonymous>", report = cat, all = FALSE,
suppressLocal = FALSE, suppressParamAssigns = !all,
suppressParamUnused = !all, suppressFundefMismatch = FALSE,
suppressLocalUnused = FALSE, suppressNoLocalFun = !all,
skipWith = FALSE, suppressUndefined = dfltSuppressUndefined,
suppressPartialMatchArgs = TRUE)
checkUsageEnv(env, ...)
checkUsagePackage(pack, ...)
```
### Arguments
| | |
| --- | --- |
| `fun` | closure. |
| `name` | character; name of closure. |
| `env` | environment containing closures to check. |
| `pack` | character naming package to check. |
| `...` | options to be passed to `checkUsage`. |
| `report` | function to use to report possible problems. |
| `all` | logical; report all possible problems if TRUE. |
| `suppressLocal` | suppress all local variable warnings. |
| `suppressParamAssigns` | suppress warnings about assignments to formal parameters. |
| `suppressParamUnused` | suppress warnings about unused formal parameters. |
| `suppressFundefMismatch` | suppress warnings about multiple local function definitions with different formal argument lists |
| `suppressLocalUnused` | suppress warnings about unused local variables |
| `suppressNoLocalFun` | suppress warnings about using local variables as functions with no apparent local function definition |
| `skipWith` | logical; if true, do not examine code portion of `with` expressions. |
| `suppressUndefined` | suppress warnings about undefined global functions and variables. |
| `suppressPartialMatchArgs` | suppress warnings about partial argument matching |
### Details
`checkUsage` checks a single R closure. Options control which possible problems to report. The default settings are moderately verbose. A first pass might use `suppressLocal=TRUE` to suppress all information related to local variable usage. The `suppressXYZ` values can either be scalar logicals or character vectors; when they are character vectors they only suppress problem reports for the variables with names in the vector.
`checkUsageEnv` and `checkUsagePackage` are convenience functions that apply `checkUsage` to all closures in an environment or a package. `checkUsagePackage` requires that the package be loaded. If the package has a name space then the internal name space frame is checked.
### Author(s)
Luke Tierney
### Examples
```
checkUsage(checkUsage)
checkUsagePackage("codetools",all=TRUE)
## Not run: checkUsagePackage("base",suppressLocal=TRUE)
```
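For instance, a hedged sketch of the kinds of reports produced for a single closure (the exact wording of the messages may vary between R versions):

```r
library(codetools)
f <- function(a) {
  b <- 1      # local variable assigned but never used
  a + bb      # `bb` has no visible binding
}
checkUsage(f, name = "f", all = TRUE)
```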
r None
`ppinit` Read a Point Process Object from a File
-------------------------------------------------
### Description
Read a file in standard format and create a point process object.
### Usage
```
ppinit(file)
```
### Arguments
| | |
| --- | --- |
| `file` | string giving file name |
### Details
The file should contain:
- the number of points
- a header line (ignored)
- `xl xu yl yu scale`
- `x y` (repeated `n` times)
### Value
class `"pp"` object with components `x`, `y`, `xl`, `xu`, `yl`, `yu`
### Side Effects
Calls `ppregion` to set the domain.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<ppregion>`
### Examples
```
towns <- ppinit("towns.dat")
par(pty="s")
plot(Kfn(towns, 10), type="b", xlab="distance", ylab="L(t)")
```
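The format described in Details can be illustrated by writing a small file and reading it back; this is a sketch with made-up coordinates, using a scale factor of 1 so the values are used as-is:

```r
library(spatial)
tf <- tempfile(fileext = ".dat")
writeLines(c(
  "3",              # number of points
  "toy pattern",    # header line (ignored)
  "0 1 0 1 1",      # xl xu yl yu scale
  "0.2 0.3",
  "0.5 0.7",
  "0.8 0.4"), tf)
pp <- ppinit(tf)
pp$x; pp$y          # the three points on the unit square
```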
r None
`ppgetregion` Get Domain for Spatial Point Pattern Analyses
------------------------------------------------------------
### Description
Retrieves the rectangular domain `(xl, xu)` *x* `(yl, yu)` from the underlying `C` code.
### Usage
```
ppgetregion()
```
### Value
A vector of length four with names `c("xl", "xu", "yl", "yu")`.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<ppregion>`
r None
`variogram` Compute Spatial Variogram
--------------------------------------
### Description
Compute spatial (semi-)variogram of spatial data or residuals.
### Usage
```
variogram(krig, nint, plotit = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `krig` | trend-surface or kriging object with columns `x`, `y`, and `z` |
| `nint` | number of bins used |
| `plotit` | logical for plotting |
| `...` | parameters for the plot |
### Details
Divides range of data into `nint` bins, and computes the average squared difference for pairs with separation in each bin. Returns results for bins with 6 or more pairs.
### Value
`x` and `y` coordinates of the variogram and `cnt`, the number of pairs averaged per bin.
### Side Effects
Plots the variogram if `plotit = TRUE`
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<correlogram>`
### Examples
```
data(topo, package="MASS")
topo.kr <- surf.ls(2, topo)
variogram(topo.kr, 25)
```
r None
`Psim` Simulate Binomial Spatial Point Process
-----------------------------------------------
### Description
Simulate Binomial spatial point process.
### Usage
```
Psim(n)
```
### Arguments
| | |
| --- | --- |
| `n` | number of points |
### Details
relies on the region being set by `ppinit` or `ppregion`.
### Value
list of vectors of `x` and `y` coordinates.
### Side Effects
uses the random number generator.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[SSI](ssi)`, `[Strauss](strauss)`
### Examples
```
towns <- ppinit("towns.dat")
par(pty="s")
plot(Kfn(towns, 10), type="s", xlab="distance", ylab="L(t)")
for(i in 1:10) lines(Kfn(Psim(69), 10))
```
r None
`expcov` Spatial Covariance Functions
--------------------------------------
### Description
Spatial covariance functions for use with `surf.gls`.
### Usage
```
expcov(r, d, alpha = 0, se = 1)
gaucov(r, d, alpha = 0, se = 1)
sphercov(r, d, alpha = 0, se = 1, D = 2)
```
### Arguments
| | |
| --- | --- |
| `r` | vector of distances at which to evaluate the covariance |
| `d` | range parameter |
| `alpha` | proportion of nugget effect |
| `se` | standard deviation at distance zero |
| `D` | dimension of spheres. |
### Value
vector of covariance values.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.gls>`
### Examples
```
data(topo, package="MASS")
topo.kr <- surf.ls(2, topo)
correlogram(topo.kr, 25)
d <- seq(0, 7, 0.1)
lines(d, expcov(d, 0.7))
```
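The three families can also be compared directly by evaluating them on a common grid of distances; a small sketch (the range parameter 0.7 is chosen arbitrarily for illustration):

```r
library(spatial)
r <- seq(0, 3, 0.05)
plot(r, expcov(r, 0.7), type = "l", ylab = "covariance", ylim = c(0, 1))
lines(r, gaucov(r, 0.7), lty = 2)
lines(r, sphercov(r, 0.7), lty = 3)
legend("topright", c("exponential", "Gaussian", "spherical"), lty = 1:3)
```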
r None
`Kaver` Average K-functions from Simulations
---------------------------------------------
### Description
Forms the average of a series of (usually simulated) K-functions.
### Usage
```
Kaver(fs, nsim, ...)
```
### Arguments
| | |
| --- | --- |
| `fs` | full scale for K-fn |
| `nsim` | number of simulations |
| `...` | arguments to simulate one point process object |
### Value
list with components `x` and `y` of the average K-fn on L-scale.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[Kfn](kfn)`, `[Kenvl](kenvl)`
### Examples
```
towns <- ppinit("towns.dat")
par(pty="s")
plot(Kfn(towns, 40), type="b")
plot(Kfn(towns, 10), type="b", xlab="distance", ylab="L(t)")
for(i in 1:10) lines(Kfn(Psim(69), 10))
lims <- Kenvl(10,100,Psim(69))
lines(lims$x,lims$lower, lty=2, col="green")
lines(lims$x,lims$upper, lty=2, col="green")
lines(Kaver(10,25,Strauss(69,0.5,3.5)), col="red")
```
r None
`Kfn` Compute K-fn of a Point Pattern
--------------------------------------
### Description
Actually computes *L = sqrt(K/pi)*.
### Usage
```
Kfn(pp, fs, k=100)
```
### Arguments
| | |
| --- | --- |
| `pp` | a list such as a pp object, including components `x` and `y` |
| `fs` | full scale of the plot |
| `k` | number of regularly spaced distances in (0, `fs`) |
### Details
relies on the domain D having been set by `ppinit` or `ppregion`.
### Value
A list with components
| | |
| --- | --- |
| `x` | vector of distances |
| `y` | vector of L-fn values |
| `k` | number of distances returned – may be less than `k` if `fs` is too large |
| `dmin` | minimum distance between pair of points |
| `lm` | maximum deviation from L(t) = t |
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<ppinit>`, `<ppregion>`, `[Kaver](kaver)`, `[Kenvl](kenvl)`
### Examples
```
towns <- ppinit("towns.dat")
par(pty="s")
plot(Kfn(towns, 10), type="s", xlab="distance", ylab="L(t)")
```
r None
`Kenvl` Compute Envelope and Average of Simulations of K-fns
-------------------------------------------------------------
### Description
Computes envelope (upper and lower limits) and average of simulations of K-fns
### Usage
```
Kenvl(fs, nsim, ...)
```
### Arguments
| | |
| --- | --- |
| `fs` | full scale for K-fn |
| `nsim` | number of simulations |
| `...` | arguments to produce one simulation |
### Value
list with components
| | |
| --- | --- |
| `x` | distances |
| `lower` | min of K-fns |
| `upper` | max of K-fns |
| `aver` | average of K-fns |
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[Kfn](kfn)`, `[Kaver](kaver)`
### Examples
```
towns <- ppinit("towns.dat")
par(pty="s")
plot(Kfn(towns, 40), type="b")
plot(Kfn(towns, 10), type="b", xlab="distance", ylab="L(t)")
for(i in 1:10) lines(Kfn(Psim(69), 10))
lims <- Kenvl(10,100,Psim(69))
lines(lims$x,lims$lower, lty=2, col="green")
lines(lims$x,lims$upper, lty=2, col="green")
lines(Kaver(10,25,Strauss(69,0.5,3.5)), col="red")
```
r None
`surf.gls` Fits a Trend Surface by Generalized Least-squares
-------------------------------------------------------------
### Description
Fits a trend surface by generalized least-squares.
### Usage
```
surf.gls(np, covmod, x, y, z, nx = 1000, ...)
```
### Arguments
| | |
| --- | --- |
| `np` | degree of polynomial surface |
| `covmod` | function to evaluate covariance or correlation function |
| `x` | x coordinates or a data frame with columns `x`, `y`, `z` |
| `y` | y coordinates |
| `z` | z coordinates. Will supersede `x$z` |
| `nx` | Number of bins for the table of the covariance. Increasing `nx` adds accuracy and increases the size of the fitted object. |
| `...` | parameters for `covmod` |
### Value
list with components
| | |
| --- | --- |
| `beta` | the coefficients |
| `x` | |
| `y` | |
| `z` | and others for internal use only. |
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<trmat>`, `<surf.ls>`, `<prmat>`, `<semat>`, `<expcov>`, `[gaucov](expcov)`, `[sphercov](expcov)`
### Examples
```
library(MASS) # for eqscplot
data(topo, package="MASS")
topo.kr <- surf.gls(2, expcov, topo, d=0.7)
trsurf <- trmat(topo.kr, 0, 6.5, 0, 6.5, 50)
eqscplot(trsurf, type = "n")
contour(trsurf, add = TRUE)
prsurf <- prmat(topo.kr, 0, 6.5, 0, 6.5, 50)
contour(prsurf, levels=seq(700, 925, 25))
sesurf <- semat(topo.kr, 0, 6.5, 0, 6.5, 30)
eqscplot(sesurf, type = "n")
contour(sesurf, levels = c(22, 25), add = TRUE)
```
r None
`predict.trls` Predict method for trend surface fits
-----------------------------------------------------
### Description
Predicted values based on trend surface model object
### Usage
```
## S3 method for class 'trls'
predict(object, x, y, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | Fitted trend surface model object returned by `surf.ls` |
| `x` | Vector of prediction location eastings (x coordinates) |
| `y` | Vector of prediction location northings (y coordinates) |
| `...` | further arguments passed to or from other methods. |
### Value
`predict.trls` produces a vector of predictions corresponding to the prediction locations. To display the output with `image` or `contour`, use `trmat` or convert the returned vector to matrix form.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.ls>`, `<trmat>`
### Examples
```
data(topo, package="MASS")
topo2 <- surf.ls(2, topo)
topo4 <- surf.ls(4, topo)
x <- c(1.78, 2.21)
y <- c(6.15, 6.15)
z2 <- predict(topo2, x, y)
z4 <- predict(topo4, x, y)
cat("2nd order predictions:", z2, "\n4th order predictions:", z4, "\n")
```
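As the Value section notes, displaying predictions with `image` or `contour` needs a matrix; a sketch of reshaping grid predictions by hand (in practice `trmat` does this directly):

```r
library(spatial)
data(topo, package = "MASS")
topo2 <- surf.ls(2, topo)
gx <- seq(0, 6.5, length.out = 30)
gy <- seq(0, 6.5, length.out = 30)
g <- expand.grid(x = gx, y = gy)   # the x index varies fastest
z <- matrix(predict(topo2, g$x, g$y), length(gx), length(gy))
contour(gx, gy, z)
```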
r None
`semat` Evaluate Kriging Standard Error of Prediction over a Grid
------------------------------------------------------------------
### Description
Evaluate Kriging standard error of prediction over a grid.
### Usage
```
semat(obj, xl, xu, yl, yu, n, se)
```
### Arguments
| | |
| --- | --- |
| `obj` | object returned by `surf.gls` |
| `xl` | limits of the rectangle for grid |
| `xu` | |
| `yl` | |
| `yu` | |
| `n` | use `n` x `n` grid within the rectangle |
| `se` | standard error at distance zero, as a multiple of the supplied covariance. If omitted it is estimated, and it is then assumed that a correlation function was supplied. |
### Value
list with components x, y and z suitable for `contour` and `image`.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.gls>`, `<trmat>`, `<prmat>`
### Examples
```
data(topo, package="MASS")
topo.kr <- surf.gls(2, expcov, topo, d=0.7)
prsurf <- prmat(topo.kr, 0, 6.5, 0, 6.5, 50)
contour(prsurf, levels=seq(700, 925, 25))
sesurf <- semat(topo.kr, 0, 6.5, 0, 6.5, 30)
contour(sesurf, levels=c(22,25))
```
r None
`trmat` Evaluate Trend Surface over a Grid
-------------------------------------------
### Description
Evaluate trend surface over a grid.
### Usage
```
trmat(obj, xl, xu, yl, yu, n)
```
### Arguments
| | |
| --- | --- |
| `obj` | object returned by `surf.ls` or `surf.gls` |
| `xl` | limits of the rectangle for grid |
| `xu` | |
| `yl` | |
| `yu` | |
| `n` | use `n` x `n` grid within the rectangle |
### Value
list with components `x`, `y` and `z` suitable for `contour` and `image`.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.ls>`, `<surf.gls>`
### Examples
```
data(topo, package="MASS")
topo.kr <- surf.ls(2, topo)
trsurf <- trmat(topo.kr, 0, 6.5, 0, 6.5, 50)
```
r None
`trls.influence` Regression diagnostics for trend surfaces
-----------------------------------------------------------
### Description
This function provides the basic quantities which are used in forming a variety of diagnostics for checking the quality of regression fits for trend surfaces calculated by `surf.ls`.
### Usage
```
trls.influence(object)
## S3 method for class 'trls'
plot(x, border = "red", col = NA, pch = 4, cex = 0.6,
add = FALSE, div = 8, ...)
```
### Arguments
| | |
| --- | --- |
| `object, x` | Fitted trend surface model from `surf.ls` |
| `div` | scaling factor for influence circle radii in `plot.trls` |
| `add` | add influence plot to existing graphics if `TRUE` |
| `border, col, pch, cex, ...` | additional graphical parameters |
### Value
`trls.influence` returns a list with components:
| | |
| --- | --- |
| `r` | raw residuals as given by `residuals.trls` |
| `hii` | diagonal elements of the Hat matrix |
| `stresid` | standardised residuals |
| `Di` | Cook's statistic |
### References
Unwin, D. J., Wrigley, N. (1987) Towards a general-theory of control point distribution effects in trend surface models. *Computers and Geosciences,* **13**, 351–355.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.ls>`, `[influence.measures](../../stats/html/influence.measures)`, `[plot.lm](../../stats/html/plot.lm)`
### Examples
```
library(MASS) # for eqscplot
data(topo, package = "MASS")
topo2 <- surf.ls(2, topo)
infl.topo2 <- trls.influence(topo2)
(cand <- as.data.frame(infl.topo2)[abs(infl.topo2$stresid) > 1.5, ])
cand.xy <- topo[as.integer(rownames(cand)), c("x", "y")]
trsurf <- trmat(topo2, 0, 6.5, 0, 6.5, 50)
eqscplot(trsurf, type = "n")
contour(trsurf, add = TRUE, col = "grey")
plot(topo2, add = TRUE, div = 3)
points(cand.xy, pch = 16, col = "orange")
text(cand.xy, labels = rownames(cand.xy), pos = 4, offset = 0.5)
```
r None
`surf.ls` Fits a Trend Surface by Least-squares
------------------------------------------------
### Description
Fits a trend surface by least-squares.
### Usage
```
surf.ls(np, x, y, z)
```
### Arguments
| | |
| --- | --- |
| `np` | degree of polynomial surface |
| `x` | x coordinates or a data frame with columns `x`, `y`, `z` |
| `y` | y coordinates |
| `z` | z coordinates. Will supersede `x$z` |
### Value
list with components
| | |
| --- | --- |
| `beta` | the coefficients |
| `x` | |
| `y` | |
| `z` | and others for internal use only. |
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<trmat>`, `<surf.gls>`
### Examples
```
library(MASS) # for eqscplot
data(topo, package="MASS")
topo.kr <- surf.ls(2, topo)
trsurf <- trmat(topo.kr, 0, 6.5, 0, 6.5, 50)
eqscplot(trsurf, type = "n")
contour(trsurf, add = TRUE)
points(topo)
eqscplot(trsurf, type = "n")
contour(trsurf, add = TRUE)
plot(topo.kr, add = TRUE)
title(xlab= "Circle radius proportional to Cook's influence statistic")
```
r None
`Strauss` Simulates Strauss Spatial Point Process
--------------------------------------------------
### Description
Simulates Strauss spatial point process.
### Usage
```
Strauss(n, c=0, r)
```
### Arguments
| | |
| --- | --- |
| `n` | number of points |
| `c` | parameter `c` in *[0, 1]*. `c = 0` corresponds to complete inhibition at distances up to `r`. |
| `r` | inhibition distance |
### Details
Uses a spatial birth-and-death process for 4`n` steps, or for 40`n` steps starting from a binomial pattern on the first call from another function. Uses the region set by `ppinit` or `ppregion`.
### Value
list of vectors of *x* and *y* coordinates
### Side Effects
uses the random number generator
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[Psim](psim)`, `[SSI](ssi)`
### Examples
```
towns <- ppinit("towns.dat")
par(pty="s")
plot(Kfn(towns, 10), type="b", xlab="distance", ylab="L(t)")
lines(Kaver(10, 25, Strauss(69,0.5,3.5)))
```
r None
`correlogram` Compute Spatial Correlograms
-------------------------------------------
### Description
Compute spatial correlograms of spatial data or residuals.
### Usage
```
correlogram(krig, nint, plotit = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `krig` | trend-surface or kriging object with columns `x`, `y`, and `z` |
| `nint` | number of bins used |
| `plotit` | logical for plotting |
| `...` | parameters for the plot |
### Details
Divides range of data into `nint` bins, and computes the covariance for pairs with separation in each bin, then divides by the variance. Returns results for bins with 6 or more pairs.
### Value
`x` and `y` coordinates of the correlogram, and `cnt`, the number of pairs averaged per bin.
### Side Effects
Plots the correlogram if `plotit = TRUE`.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<variogram>`
### Examples
```
data(topo, package="MASS")
topo.kr <- surf.ls(2, topo)
correlogram(topo.kr, 25)
d <- seq(0, 7, 0.1)
lines(d, expcov(d, 0.7))
```
r None
`ppregion` Set Domain for Spatial Point Pattern Analyses
---------------------------------------------------------
### Description
Sets the rectangular domain `(xl, xu)` *x* `(yl, yu)`.
### Usage
```
ppregion(xl = 0, xu = 1, yl = 0, yu = 1)
```
### Arguments
| | |
| --- | --- |
| `xl` | Either `xl` or a list containing components `xl`, `xu`, `yl`, `yu` (such as a point-process object) |
| `xu` | |
| `yl` | |
| `yu` | |
### Value
none
### Side Effects
initializes variables in the `C` subroutines.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<ppinit>`, `<ppgetregion>`
r None
`anova.trls` Anova tables for fitted trend surface objects
-----------------------------------------------------------
### Description
Compute analysis of variance tables for one or more fitted trend surface model objects; where `anova.trls` is called with multiple objects, it passes on the arguments to `anovalist.trls`.
### Usage
```
## S3 method for class 'trls'
anova(object, ...)
anovalist.trls(object, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | A fitted trend surface model object from `surf.ls` |
| `...` | Further objects of the same kind |
### Value
`anova.trls` and `anovalist.trls` return objects corresponding to their printed tabular output.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.ls>`
### Examples
```
library(stats)
data(topo, package="MASS")
topo0 <- surf.ls(0, topo)
topo1 <- surf.ls(1, topo)
topo2 <- surf.ls(2, topo)
topo3 <- surf.ls(3, topo)
topo4 <- surf.ls(4, topo)
anova(topo0, topo1, topo2, topo3, topo4)
summary(topo4)
```
r None
`prmat` Evaluate Kriging Surface over a Grid
---------------------------------------------
### Description
Evaluate Kriging surface over a grid.
### Usage
```
prmat(obj, xl, xu, yl, yu, n)
```
### Arguments
| | |
| --- | --- |
| `obj` | object returned by `surf.gls` |
| `xl` | limits of the rectangle for grid |
| `xu` | |
| `yl` | |
| `yu` | |
| `n` | use `n` x `n` grid within the rectangle |
### Value
list with components `x`, `y` and `z` suitable for `contour` and `image`.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`<surf.gls>`, `<trmat>`, `<semat>`
### Examples
```
data(topo, package="MASS")
topo.kr <- surf.gls(2, expcov, topo, d=0.7)
prsurf <- prmat(topo.kr, 0, 6.5, 0, 6.5, 50)
contour(prsurf, levels=seq(700, 925, 25))
```
r None
`pplik` Pseudo-likelihood Estimation of a Strauss Spatial Point Process
------------------------------------------------------------------------
### Description
Pseudo-likelihood estimation of a Strauss spatial point process.
### Usage
```
pplik(pp, R, ng=50, trace=FALSE)
```
### Arguments
| | |
| --- | --- |
| `pp` | a pp object |
| `R` | the fixed parameter `R` |
| `ng` | use a `ng` x `ng` grid with border `R` in the domain for numerical integration. |
| `trace` | logical: should function evaluations be printed? |
### Value
estimate for `c` in the interval *[0, 1]*.
### References
Ripley, B. D. (1988) *Statistical Inference for Spatial Processes.* Cambridge.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[Strauss](strauss)`
### Examples
```
pines <- ppinit("pines.dat")
pplik(pines, 0.7)
```
r None
`SSI` Simulates Sequential Spatial Inhibition Point Process
------------------------------------------------------------
### Description
Simulates SSI (sequential spatial inhibition) point process.
### Usage
```
SSI(n, r)
```
### Arguments
| | |
| --- | --- |
| `n` | number of points |
| `r` | inhibition distance |
### Details
uses the region set by `ppinit` or `ppregion`.
### Value
list of vectors of `x` and `y` coordinates
### Side Effects
uses the random number generator.
### Warnings
will never return if `r` is too large and it cannot place `n` points.
### References
Ripley, B. D. (1981) *Spatial Statistics.* Wiley.
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer.
### See Also
`[Psim](psim)`, `[Strauss](strauss)`
### Examples
```
towns <- ppinit("towns.dat")
par(pty = "s")
plot(Kfn(towns, 10), type = "b", xlab = "distance", ylab = "L(t)")
lines(Kaver(10, 25, SSI(69, 1.2)))
```
r None
`symbols` Draw Symbols (Circles, Squares, Stars, Thermometers, Boxplots)
-------------------------------------------------------------------------
### Description
This function draws symbols on a plot. One of six symbols (*circles*, *squares*, *rectangles*, *stars*, *thermometers*, and *boxplots*) can be plotted at a specified set of x and y coordinates. Specific aspects of the symbols, such as relative size, can be customized by additional parameters.
### Usage
```
symbols(x, y = NULL, circles, squares, rectangles, stars,
thermometers, boxplots, inches = TRUE, add = FALSE,
fg = par("col"), bg = NA,
xlab = NULL, ylab = NULL, main = NULL,
xlim = NULL, ylim = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | the x and y co-ordinates for the centres of the symbols. They can be specified in any way which is accepted by `[xy.coords](../../grdevices/html/xy.coords)`. |
| `circles` | a vector giving the radii of the circles. |
| `squares` | a vector giving the length of the sides of the squares. |
| `rectangles` | a matrix with two columns. The first column gives widths and the second the heights of rectangles. |
| `stars` | a matrix with three or more columns giving the lengths of the rays from the center of the stars. `NA` values are replaced by zeroes. |
| `thermometers` | a matrix with three or four columns. The first two columns give the width and height of the thermometer symbols. If there are three columns, the third is taken as a proportion: the thermometers are filled (using colour `fg`) from their base to this proportion of their height. If there are four columns, the third and fourth columns are taken as proportions and the thermometers are filled between these two proportions of their heights. The part of the box not filled in `fg` will be filled in the background colour (default transparent) given by `bg`. |
| `boxplots` | a matrix with five columns. The first two columns give the width and height of the boxes, the next two columns give the lengths of the lower and upper whiskers and the fifth the proportion (with a warning if not in [0,1]) of the way up the box that the median line is drawn. |
| `inches` | `TRUE`, `FALSE` or a positive number. See ‘Details’. |
| `add` | if `add` is `TRUE`, the symbols are added to an existing plot, otherwise a new plot is created. |
| `fg` | colour(s) the symbols are to be drawn in. |
| `bg` | if specified, the symbols are filled with colour(s), the vector `bg` being recycled to the number of symbols. The default is to leave the symbols unfilled. |
| `xlab` | the x label of the plot if `add` is not true. Defaults to the `[deparse](../../base/html/deparse)`d expression used for `x`. |
| `ylab` | the y label of the plot. Unused if `add = TRUE`. |
| `main` | a main title for the plot. Unused if `add = TRUE`. |
| `xlim` | numeric vector of length 2 giving the x limits for the plot. Unused if `add = TRUE`. |
| `ylim` | numeric vector of length 2 giving the y limits for the plot. Unused if `add = TRUE`. |
| `...` | graphics parameters can also be passed to this function, as can the plot aspect ratio `asp` (see `<plot.window>`). |
### Details
Observations which have missing coordinates or missing size parameters are not plotted. The exception to this is *stars*. In that case, the length of any ray which is `NA` is reset to zero.
Argument `inches` controls the sizes of the symbols. If `TRUE` (the default), the symbols are scaled so that the largest dimension of any symbol is one inch. If a positive number is given the symbols are scaled to make largest dimension this size in inches (so `TRUE` and `1` are equivalent). If `inches` is `FALSE`, the units are taken to be those of the appropriate axes. (For circles, squares and stars the units of the x axis are used. For boxplots, the lengths of the whiskers are regarded as dimensions alongside width and height when scaling by `inches`, and are otherwise interpreted in the units of the y axis.)
Circles of radius zero are plotted at radius one pixel (which is device-dependent). Circles of a very small non-zero radius may or may not be visible, and may be smaller than circles of radius zero. On `windows` devices circles are plotted at radius at least one pixel as some Windows versions omit smaller circles.
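As a quick illustration of the `inches` scaling rules described above, the same circles can be drawn under all three modes (a minimal sketch with made-up data):

```
## Sketch: the same circles under the three 'inches' modes
x <- 1:5; y <- rep(1, 5); r <- (1:5)/10
op <- par(mfrow = c(3, 1), mar = c(2, 2, 2, 1))
symbols(x, y, circles = r, inches = TRUE,
        main = "inches = TRUE: largest radius scaled to 1 inch")
symbols(x, y, circles = r, inches = 0.3,
        main = "inches = 0.3: largest radius scaled to 0.3 inch")
symbols(x, y, circles = r, inches = FALSE,
        main = "inches = FALSE: radii in x-axis units")
par(op)
```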
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
W. S. Cleveland (1985) *The Elements of Graphing Data.* Monterey, California: Wadsworth.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<stars>` for drawing *stars* with a bit more flexibility.
If you are thinking about doing ‘bubble plots’ by `symbols(*, circles=*)`, you should *really* consider using `<sunflowerplot>` instead.
### Examples
```
require(stats); require(grDevices)
x <- 1:10
y <- sort(10*runif(10))
z <- runif(10)
z3 <- cbind(z, 2*runif(10), runif(10))
symbols(x, y, thermometers = cbind(.5, 1, z), inches = .5, fg = 1:10)
symbols(x, y, thermometers = z3, inches = FALSE)
text(x, y, apply(format(round(z3, digits = 2)), 1, paste, collapse = ","),
adj = c(-.2,0), cex = .75, col = "purple", xpd = NA)
## Note that example(trees) shows more sensible plots!
N <- nrow(trees)
with(trees, {
## Girth is diameter in inches
symbols(Height, Volume, circles = Girth/24, inches = FALSE,
main = "Trees' Girth") # xlab and ylab automatically
## Colours too:
op <- palette(rainbow(N, end = 0.9))
symbols(Height, Volume, circles = Girth/16, inches = FALSE, bg = 1:N,
fg = "gray30", main = "symbols(*, circles = Girth/16, bg = 1:N)")
palette(op)
})
```

`polygon` Polygon Drawing
--------------------------
### Description
`polygon` draws the polygons whose vertices are given in `x` and `y`.
### Usage
```
polygon(x, y = NULL, density = NULL, angle = 45,
border = NULL, col = NA, lty = par("lty"),
..., fillOddEven = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x, y` | vectors containing the coordinates of the vertices of the polygon. |
| `density` | the density of shading lines, in lines per inch. The default value of `NULL` means that no shading lines are drawn. A zero value of `density` means no shading nor filling whereas negative values and `NA` suppress shading (and so allow color filling). |
| `angle` | the slope of shading lines, given as an angle in degrees (counter-clockwise). |
| `col` | the color for filling the polygon. The default, `NA`, is to leave polygons unfilled, unless `density` is specified. (For back-compatibility, `NULL` is equivalent to `NA`.) If `density` is specified with a positive value this gives the color of the shading lines. |
| `border` | the color to draw the border. The default, `NULL`, means to use `<par>("fg")`. Use `border = NA` to omit borders. For compatibility with S, `border` can also be logical, in which case `FALSE` is equivalent to `NA` (borders omitted) and `TRUE` is equivalent to `NULL` (use the foreground colour). |
| `lty` | the line type to be used, as in `<par>`. |
| `...` | graphical parameters such as `xpd`, `lend`, `ljoin` and `lmitre` can be given as arguments. |
| `fillOddEven` | logical controlling the polygon shading mode: see below for details. Default `FALSE`. |
### Details
The coordinates can be passed in a plotting structure (a list with `x` and `y` components), a two-column matrix, .... See `[xy.coords](../../grdevices/html/xy.coords)`.
It is assumed that the polygon is to be closed by joining the last point to the first point.
The coordinates can contain missing values. The behaviour is similar to that of `<lines>`, except that instead of breaking a line into several lines, `NA` values break the polygon into several complete polygons (including closing the last point to the first point). See the examples below.
When multiple polygons are produced, the values of `density`, `angle`, `col`, `border`, and `lty` are recycled in the usual manner.
Shading of polygons is only implemented for linear plots: if either axis is on log scale then shading is omitted, with a warning.
### Bugs
Self-intersecting polygons may be filled using either the “odd-even” or “non-zero” rule. These fill a region if the polygon border encircles it an odd or non-zero number of times, respectively. Shading lines are handled internally by **R** according to the `fillOddEven` argument, but device-based solid fills depend on the graphics device. The `windows`, `[pdf](../../grdevices/html/pdf)` and `[postscript](../../grdevices/html/postscript)` devices have their own `fillOddEven` argument to control this.
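The difference between the two fill rules is easiest to see with a self-intersecting pentagram, whose centre is encircled twice: the non-zero rule shades it, the odd-even rule does not. A minimal sketch using shading lines (which are what `polygon`'s own `fillOddEven` argument controls):

```
## Pentagram: vertices visited every second step around a circle
th <- pi/2 + 2*pi * (0:4) * 2/5
x <- cos(th); y <- sin(th)
op <- par(mfrow = c(1, 2), mar = rep(1, 4))
plot(x, y, type = "n", axes = FALSE, ann = FALSE)
polygon(x, y, density = 20, fillOddEven = FALSE)  # non-zero rule: centre shaded
title("fillOddEven = FALSE")
plot(x, y, type = "n", axes = FALSE, ann = FALSE)
polygon(x, y, density = 20, fillOddEven = TRUE)   # odd-even rule: centre left unshaded
title("fillOddEven = TRUE")
par(op)
```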
### Author(s)
The code implementing polygon shading was donated by Kevin Buhr [[email protected]](mailto:[email protected]).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<segments>` for even more flexibility, `<lines>`, `<rect>`, `<box>`, `<abline>`.
`<par>` for how to specify colors.
### Examples
```
x <- c(1:9, 8:1)
y <- c(1, 2*(5:3), 2, -1, 17, 9, 8, 2:9)
op <- par(mfcol = c(3, 1))
for(xpd in c(FALSE, TRUE, NA)) {
plot(1:10, main = paste("xpd =", xpd))
box("figure", col = "pink", lwd = 3)
polygon(x, y, xpd = xpd, col = "orange", lty = 2, lwd = 2, border = "red")
}
par(op)
n <- 100
xx <- c(0:n, n:0)
yy <- c(c(0, cumsum(stats::rnorm(n))), rev(c(0, cumsum(stats::rnorm(n)))))
plot (xx, yy, type = "n", xlab = "Time", ylab = "Distance")
polygon(xx, yy, col = "gray", border = "red")
title("Distance Between Brownian Motions")
# Multiple polygons from NA values
# and recycling of col, border, and lty
op <- par(mfrow = c(2, 1))
plot(c(1, 9), 1:2, type = "n")
polygon(1:9, c(2,1,2,1,1,2,1,2,1),
col = c("red", "blue"),
border = c("green", "yellow"),
lwd = 3, lty = c("dashed", "solid"))
plot(c(1, 9), 1:2, type = "n")
polygon(1:9, c(2,1,2,1,NA,2,1,2,1),
col = c("red", "blue"),
border = c("green", "yellow"),
lwd = 3, lty = c("dashed", "solid"))
par(op)
# Line-shaded polygons
plot(c(1, 9), 1:2, type = "n")
polygon(1:9, c(2,1,2,1,NA,2,1,2,1),
density = c(10, 20), angle = c(-45, 45))
```
`legend` Add Legends to Plots
------------------------------
### Description
This function can be used to add legends to plots. Note that a call to the function `<locator>(1)` can be used in place of the `x` and `y` arguments.
### Usage
```
legend(x, y = NULL, legend, fill = NULL, col = par("col"),
border = "black", lty, lwd, pch,
angle = 45, density = NULL, bty = "o", bg = par("bg"),
box.lwd = par("lwd"), box.lty = par("lty"), box.col = par("fg"),
pt.bg = NA, cex = 1, pt.cex = cex, pt.lwd = lwd,
xjust = 0, yjust = 1, x.intersp = 1, y.intersp = 1,
adj = c(0, 0.5), text.width = NULL, text.col = par("col"),
text.font = NULL, merge = do.lines && has.pch, trace = FALSE,
plot = TRUE, ncol = 1, horiz = FALSE, title = NULL,
inset = 0, xpd, title.col = text.col, title.adj = 0.5,
seg.len = 2)
```
### Arguments
| | |
| --- | --- |
| `x, y` | the x and y co-ordinates to be used to position the legend. They can be specified by keyword or in any way which is accepted by `[xy.coords](../../grdevices/html/xy.coords)`: See ‘Details’. |
| `legend` | a character or [expression](../../base/html/expression) vector of length *≥ 1* to appear in the legend. Other objects will be coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`. |
| `fill` | if specified, this argument will cause boxes filled with the specified colors (or shaded in the specified colors) to appear beside the legend text. |
| `col` | the color of points or lines appearing in the legend. |
| `border` | the border color for the boxes (used only if `fill` is specified). |
| `lty, lwd` | the line types and widths for lines appearing in the legend. One of these two *must* be specified for line drawing. |
| `pch` | the plotting symbols appearing in the legend, as numeric vector or a vector of 1-character strings (see `<points>`). Unlike `points`, this can all be specified as a single multi-character string. *Must* be specified for symbol drawing. |
| `angle` | angle of shading lines. |
| `density` | the density of shading lines, if numeric and positive. If `NULL` or negative or `NA` color filling is assumed. |
| `bty` | the type of box to be drawn around the legend. The allowed values are `"o"` (the default) and `"n"`. |
| `bg` | the background color for the legend box. (Note that this is only used if `bty != "n"`.) |
| `box.lty, box.lwd, box.col` | the line type, width and color for the legend box (if `bty = "o"`). |
| `pt.bg` | the background color for the `<points>`, corresponding to its argument `bg`. |
| `cex` | character expansion factor **relative** to current `par("cex")`. Used for text, and provides the default for `pt.cex`. |
| `pt.cex` | expansion factor(s) for the points. |
| `pt.lwd` | line width for the points, defaults to the one for lines, or if that is not set, to `par("lwd")`. |
| `xjust` | how the legend is to be justified relative to the legend x location. A value of 0 means left justified, 0.5 means centered and 1 means right justified. |
| `yjust` | the same as `xjust` for the legend y location. |
| `x.intersp` | character interspacing factor for horizontal (x) spacing. |
| `y.intersp` | the same for vertical (y) line distances. |
| `adj` | numeric of length 1 or 2; the string adjustment for legend text. Useful for y-adjustment when `labels` are [plotmath](../../grdevices/html/plotmath) expressions. |
| `text.width` | the width of the legend text in x (`"user"`) coordinates. (Should be a single positive number even for a reversed x axis.) Defaults to the proper value computed by `<strwidth>(legend)`. |
| `text.col` | the color used for the legend text. |
| `text.font` | the font used for the legend text, see `<text>`. |
| `merge` | logical; if `TRUE`, merge points and lines but not filled boxes. Defaults to `TRUE` if there are points and lines. |
| `trace` | logical; if `TRUE`, shows how `legend` does all its magical computations. |
| `plot` | logical. If `FALSE`, nothing is plotted but the sizes are returned. |
| `ncol` | the number of columns in which to set the legend items (default is 1, a vertical legend). |
| `horiz` | logical; if `TRUE`, set the legend horizontally rather than vertically (specifying `horiz` overrides the `ncol` specification). |
| `title` | a character string or length-one expression giving a title to be placed at the top of the legend. Other objects will be coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`. |
| `inset` | inset distance(s) from the margins as a fraction of the plot region when legend is placed by keyword. |
| `xpd` | if supplied, a value of the [graphical parameter](par) `xpd` to be used while the legend is being drawn. |
| `title.col` | color for `title`. |
| `title.adj` | horizontal adjustment for `title`: see the help for `<par>("adj")`. |
| `seg.len` | the length of lines drawn to illustrate `lty` and/or `lwd` (in units of character widths). |
### Details
Arguments `x`, `y`, `legend` are interpreted in a non-standard way to allow the coordinates to be specified *via* one or two arguments. If `legend` is missing and `y` is not numeric, it is assumed that the second argument is intended to be `legend` and that the first argument specifies the coordinates.
The coordinates can be specified in any way which is accepted by `[xy.coords](../../grdevices/html/xy.coords)`. If this gives the coordinates of one point, it is used as the top-left coordinate of the rectangle containing the legend. If it gives the coordinates of two points, these specify opposite corners of the rectangle (either pair of corners, in any order).
The location may also be specified by setting `x` to a single keyword from the list `"bottomright"`, `"bottom"`, `"bottomleft"`, `"left"`, `"topleft"`, `"top"`, `"topright"`, `"right"` and `"center"`. This places the legend on the inside of the plot frame at the given location. Partial argument matching is used. The optional `inset` argument specifies how far the legend is inset from the plot margins. If a single value is given, it is used for both margins; if two values are given, the first is used for `x`- distance, the second for `y`-distance.
Attribute arguments such as `col`, `pch`, `lty`, etc, are recycled if necessary: `merge` is not. Set entries of `lty` to `0` or set entries of `lwd` to `NA` to suppress lines in corresponding legend entries; set `pch` values to `NA` to suppress points.
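For example, a legend can mix line-only and point-only entries by suppressing the unwanted element in each row, as described above (a minimal sketch):

```
plot(1:10, type = "n")
lines(1:10, col = "blue")
points(1:10, (1:10)/2, pch = 19, col = "red")
## lty = 0 suppresses the line in the second entry,
## pch = NA suppresses the point in the first.
legend("topleft", c("line only", "points only"),
       col = c("blue", "red"), lty = c(1, 0), pch = c(NA, 19))
```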
Points are drawn *after* lines in order that they can cover the line with their background color `pt.bg`, if applicable.
See the examples for how to right-justify labels.
Since they are not used for Unicode code points, `pch` values `-31:-1` are silently omitted, as are `NA` and `""` values.
### Value
A list with list components
| | |
| --- | --- |
| `rect` | a list with components `w`, `h` (positive numbers giving the **w**idth and **h**eight of the legend's box) and `left`, `top` (the x and y coordinates of the upper left corner of the box). |
| `text` | a list with components `x`, `y`, numeric vectors of length `length(legend)` giving the x and y coordinates of the legend's text(s). |
returned invisibly.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`[plot](plot.default)`, `<barplot>` which uses `legend()`, and `<text>` for more examples of math expressions.
### Examples
```
## Run the example in '?matplot' or the following:
leg.txt <- c("Setosa Petals", "Setosa Sepals",
"Versicolor Petals", "Versicolor Sepals")
y.leg <- c(4.5, 3, 2.1, 1.4, .7)
cexv <- c(1.2, 1, 4/5, 2/3, 1/2)
matplot(c(1, 8), c(0, 4.5), type = "n", xlab = "Length", ylab = "Width",
main = "Petal and Sepal Dimensions in Iris Blossoms")
for (i in seq(cexv)) {
text (1, y.leg[i] - 0.1, paste("cex=", formatC(cexv[i])), cex = 0.8, adj = 0)
legend(3, y.leg[i], leg.txt, pch = "sSvV", col = c(1, 3), cex = cexv[i])
}
## cex *vector* [in R <= 3.5.1 has 'if(xc < 0)' w/ length(xc) == 2]
legend(6,1, leg.txt, pch = "sSvV", col = c(1, 3), cex = 1+(-1:2)/8)
## 'merge = TRUE' for merging lines & points:
x <- seq(-pi, pi, length.out = 65)
plot(x, sin(x), type = "l", ylim = c(-1.2, 1.8), col = 3, lty = 2)
points(x, cos(x), pch = 3, col = 4)
lines(x, tan(x), type = "b", lty = 1, pch = 4, col = 6)
title("legend(..., lty = c(2, -1, 1), pch = c(NA, 3, 4), merge = TRUE)",
cex.main = 1.1)
legend(-1, 1.9, c("sin", "cos", "tan"), col = c(3, 4, 6),
text.col = "green4", lty = c(2, -1, 1), pch = c(NA, 3, 4),
merge = TRUE, bg = "gray90")
## right-justifying a set of labels: thanks to Uwe Ligges
x <- 1:5; y1 <- 1/x; y2 <- 2/x
plot(rep(x, 2), c(y1, y2), type = "n", xlab = "x", ylab = "y")
lines(x, y1); lines(x, y2, lty = 2)
temp <- legend("topright", legend = c(" ", " "),
text.width = strwidth("1,000,000"),
lty = 1:2, xjust = 1, yjust = 1,
title = "Line Types")
text(temp$rect$left + temp$rect$w, temp$text$y,
c("1,000", "1,000,000"), pos = 2)
##--- log scaled Examples ------------------------------
leg.txt <- c("a one", "a two")
par(mfrow = c(2, 2))
for(ll in c("","x","y","xy")) {
plot(2:10, log = ll, main = paste0("log = '", ll, "'"))
abline(1, 1)
lines(2:3, 3:4, col = 2)
points(2, 2, col = 3)
rect(2, 3, 3, 2, col = 4)
text(c(3,3), 2:3, c("rect(2,3,3,2, col=4)",
"text(c(3,3),2:3,\"c(rect(...)\")"), adj = c(0, 0.3))
legend(list(x = 2,y = 8), legend = leg.txt, col = 2:3, pch = 1:2,
lty = 1, merge = TRUE) #, trace = TRUE)
}
par(mfrow = c(1,1))
##-- Math expressions: ------------------------------
x <- seq(-pi, pi, length.out = 65)
plot(x, sin(x), type = "l", col = 2, xlab = expression(phi),
ylab = expression(f(phi)))
abline(h = -1:1, v = pi/2*(-6:6), col = "gray90")
lines(x, cos(x), col = 3, lty = 2)
ex.cs1 <- expression(plain(sin) * phi, paste("cos", phi)) # 2 ways
utils::str(legend(-3, .9, ex.cs1, lty = 1:2, plot = FALSE,
adj = c(0, 0.6))) # adj y !
legend(-3, 0.9, ex.cs1, lty = 1:2, col = 2:3, adj = c(0, 0.6))
require(stats)
x <- rexp(100, rate = .5)
hist(x, main = "Mean and Median of a Skewed Distribution")
abline(v = mean(x), col = 2, lty = 2, lwd = 2)
abline(v = median(x), col = 3, lty = 3, lwd = 2)
ex12 <- expression(bar(x) == sum(over(x[i], n), i == 1, n),
hat(x) == median(x[i], i == 1, n))
utils::str(legend(4.1, 30, ex12, col = 2:3, lty = 2:3, lwd = 2))
## 'Filled' boxes -- for more, see example(plot.factor)
op <- par(bg = "white") # to get an opaque box for the legend
plot(cut(weight, 3) ~ group, data = PlantGrowth, col = NULL,
density = 16*(1:3))
par(op)
## Using 'ncol' :
x <- 0:64/64
matplot(x, outer(x, 1:7, function(x, k) sin(k * pi * x)),
type = "o", col = 1:7, ylim = c(-1, 1.5), pch = "*")
op <- par(bg = "antiquewhite1")
legend(0, 1.5, paste("sin(", 1:7, "pi * x)"), col = 1:7, lty = 1:7,
pch = "*", ncol = 4, cex = 0.8)
legend(.8,1.2, paste("sin(", 1:7, "pi * x)"), col = 1:7, lty = 1:7,
pch = "*", cex = 0.8)
legend(0, -.1, paste("sin(", 1:4, "pi * x)"), col = 1:4, lty = 1:4,
ncol = 2, cex = 0.8)
legend(0, -.4, paste("sin(", 5:7, "pi * x)"), col = 4:6, pch = 24,
ncol = 2, cex = 1.5, lwd = 2, pt.bg = "pink", pt.cex = 1:3)
par(op)
## point covering line :
y <- sin(3*pi*x)
plot(x, y, type = "l", col = "blue",
main = "points with bg & legend(*, pt.bg)")
points(x, y, pch = 21, bg = "white")
legend(.4,1, "sin(c x)", pch = 21, pt.bg = "white", lty = 1, col = "blue")
## legends with titles at different locations
plot(x, y, type = "n")
legend("bottomright", "(x,y)", pch=1, title= "bottomright")
legend("bottom", "(x,y)", pch=1, title= "bottom")
legend("bottomleft", "(x,y)", pch=1, title= "bottomleft")
legend("left", "(x,y)", pch=1, title= "left")
legend("topleft", "(x,y)", pch=1, title= "topleft, inset = .05", inset = .05)
legend("top", "(x,y)", pch=1, title= "top")
legend("topright", "(x,y)", pch=1, title= "topright, inset = .02",inset = .02)
legend("right", "(x,y)", pch=1, title= "right")
legend("center", "(x,y)", pch=1, title= "center")
# using text.font (and text.col):
op <- par(mfrow = c(2, 2), mar = rep(2.1, 4))
c6 <- terrain.colors(10)[1:6]
for(i in 1:4) {
plot(1, type = "n", axes = FALSE, ann = FALSE); title(paste("text.font =",i))
legend("top", legend = LETTERS[1:6], col = c6,
ncol = 2, cex = 2, lwd = 3, text.font = i, text.col = c6)
}
par(op)
```
`stars` Star (Spider/Radar) Plots and Segment Diagrams
-------------------------------------------------------
### Description
Draw star plots or segment diagrams of a multivariate data set. With one single location, also draws ‘spider’ (or ‘radar’) plots.
### Usage
```
stars(x, full = TRUE, scale = TRUE, radius = TRUE,
labels = dimnames(x)[[1]], locations = NULL,
nrow = NULL, ncol = NULL, len = 1,
key.loc = NULL, key.labels = dimnames(x)[[2]],
key.xpd = TRUE,
xlim = NULL, ylim = NULL, flip.labels = NULL,
draw.segments = FALSE,
col.segments = 1:n.seg, col.stars = NA, col.lines = NA,
axes = FALSE, frame.plot = axes,
main = NULL, sub = NULL, xlab = "", ylab = "",
cex = 0.8, lwd = 0.25, lty = par("lty"), xpd = FALSE,
mar = pmin(par("mar"),
1.1+ c(2*axes+ (xlab != ""),
2*axes+ (ylab != ""), 1, 0)),
add = FALSE, plot = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | matrix or data frame of data. One star or segment plot will be produced for each row of `x`. Missing values (`NA`) are allowed, but they are treated as if they were 0 (after scaling, if relevant). |
| `full` | logical flag: if `TRUE`, the segment plots will occupy a full circle. Otherwise, they occupy the (upper) semicircle only. |
| `scale` | logical flag: if `TRUE`, the columns of the data matrix are scaled independently so that the maximum value in each column is 1 and the minimum is 0. If `FALSE`, the presumption is that the data have been scaled by some other algorithm to the range *[0, 1]*. |
| `radius` | logical flag: if `TRUE`, the radii corresponding to each variable in the data will be drawn. |
| `labels` | vector of character strings for labeling the plots. Unlike the S function `stars`, no attempt is made to construct labels if `labels = NULL`. |
| `locations` | Either two column matrix with the x and y coordinates used to place each of the segment plots; or numeric of length 2 when all plots should be superimposed (for a ‘spider plot’). By default, `locations = NULL`, the segment plots will be placed in a rectangular grid. |
| `nrow, ncol` | integers giving the number of rows and columns to use when `locations` is `NULL`. By default `nrow == ncol`, so a square layout will be used. |
| `len` | scale factor for the length of radii or segments. |
| `key.loc` | vector with x and y coordinates of the unit key. |
| `key.labels` | vector of character strings for labeling the segments of the unit key. If omitted, the second component of `dimnames(x)` is used, if available. |
| `key.xpd` | clipping switch for the unit key (drawing and labeling), see `<par>("xpd")`. |
| `xlim` | vector with the range of x coordinates to plot. |
| `ylim` | vector with the range of y coordinates to plot. |
| `flip.labels` | logical indicating if the label locations should flip up and down from diagram to diagram. Defaults to a somewhat smart heuristic. |
| `draw.segments` | logical. If `TRUE` draw a segment diagram. |
| `col.segments` | color vector (integer or character, see `<par>`), each specifying a color for one of the segments (variables). Ignored if `draw.segments = FALSE`. |
| `col.stars` | color vector (integer or character, see `<par>`), each specifying a color for one of the stars (cases). Ignored if `draw.segments = TRUE`. |
| `col.lines` | color vector (integer or character, see `<par>`), each specifying a color for one of the lines (cases). Ignored if `draw.segments = TRUE`. |
| `axes` | logical flag: if `TRUE` axes are added to the plot. |
| `frame.plot` | logical flag: if `TRUE`, the plot region is framed. |
| `main` | a main title for the plot. |
| `sub` | a sub title for the plot. |
| `xlab` | a label for the x axis. |
| `ylab` | a label for the y axis. |
| `cex` | character expansion factor for the labels. |
| `lwd` | line width used for drawing. |
| `lty` | line type used for drawing. |
| `xpd` | logical or NA indicating if clipping should be done, see `<par>(xpd = .)`. |
| `mar` | argument to `<par>(mar = *)`, typically choosing smaller margins than by default. |
| `...` | further arguments, passed to the first call of `plot()`, see `<plot.default>` and to `<box>()` if `frame.plot` is true. |
| `add` | logical, if `TRUE` *add* stars to current plot. |
| `plot` | logical, if `FALSE`, nothing is plotted. |
### Details
Missing values are treated as 0.
Each star plot or segment diagram represents one row of the input `x`. Variables (columns) start on the right and wind counterclockwise around the circle. The size of the (scaled) column is shown by the distance from the center to the point on the star or the radius of the segment representing the variable.
Only one page of output is produced.
### Value
Returns the locations of the plots in a two column matrix, invisibly when `plot = TRUE`.
### Note
This code started life as spatial star plots by David A. Andrews.
Prior to **R** 1.4.1, scaling only shifted the maximum to 1, although documented as here.
### Author(s)
Thomas S. Dye
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<symbols>` for another way to draw stars and other symbols.
### Examples
```
require(grDevices)
stars(mtcars[, 1:7], key.loc = c(14, 2),
main = "Motor Trend Cars : stars(*, full = F)", full = FALSE)
stars(mtcars[, 1:7], key.loc = c(14, 1.5),
main = "Motor Trend Cars : full stars()", flip.labels = FALSE)
## 'Spider' or 'Radar' plot:
stars(mtcars[, 1:7], locations = c(0, 0), radius = FALSE,
key.loc = c(0, 0), main = "Motor Trend Cars", lty = 2)
## Segment Diagrams:
palette(rainbow(12, s = 0.6, v = 0.75))
stars(mtcars[, 1:7], len = 0.8, key.loc = c(12, 1.5),
main = "Motor Trend Cars", draw.segments = TRUE)
stars(mtcars[, 1:7], len = 0.6, key.loc = c(1.5, 0),
main = "Motor Trend Cars", draw.segments = TRUE,
frame.plot = TRUE, nrow = 4, cex = .7)
## scale linearly (not affinely) to [0, 1]
USJudge <- apply(USJudgeRatings, 2, function(x) x/max(x))
Jnam <- row.names(USJudgeRatings)
Snam <- abbreviate(substring(Jnam, 1, regexpr("[,.]",Jnam) - 1), 7)
stars(USJudge, labels = Jnam, scale = FALSE,
key.loc = c(13, 1.5), main = "Judge not ...", len = 0.8)
stars(USJudge, labels = Snam, scale = FALSE,
key.loc = c(13, 1.5), radius = FALSE)
loc <- stars(USJudge, labels = NULL, scale = FALSE,
radius = FALSE, frame.plot = TRUE,
key.loc = c(13, 1.5), main = "Judge not ...", len = 1.2)
text(loc, Snam, col = "blue", cex = 0.8, xpd = TRUE)
## 'Segments':
stars(USJudge, draw.segments = TRUE, scale = FALSE, key.loc = c(13,1.5))
## 'Spider':
stars(USJudgeRatings, locations = c(0, 0), scale = FALSE, radius = FALSE,
col.stars = 1:10, key.loc = c(0, 0), main = "US Judges rated")
## Same as above, but with colored lines instead of filled polygons.
stars(USJudgeRatings, locations = c(0, 0), scale = FALSE, radius = FALSE,
col.lines = 1:10, key.loc = c(0, 0), main = "US Judges rated")
## 'Radar-Segments'
stars(USJudgeRatings[1:10,], locations = 0:1, scale = FALSE,
draw.segments = TRUE, col.segments = 0, col.stars = 1:10, key.loc = 0:1,
main = "US Judges 1-10 ")
palette("default")
stars(cbind(1:16, 10*(16:1)), draw.segments = TRUE,
main = "A Joke -- do *not* use symbols on 2D data!")
```
`xspline` Draw an X-spline
---------------------------
### Description
Draw an X-spline, a curve drawn relative to control points.
### Usage
```
xspline(x, y = NULL, shape = 0, open = TRUE, repEnds = TRUE,
draw = TRUE, border = par("fg"), col = NA, ...)
```
### Arguments
| | |
| --- | --- |
| `x,y` | vectors containing the coordinates of the vertices of the polygon. See `[xy.coords](../../grdevices/html/xy.coords)` for alternatives. |
| `shape` | A numeric vector of values between -1 and 1, which control the shape of the spline relative to the control points. |
| `open` | A logical value indicating whether the spline is an open or a closed shape. |
| `repEnds` | For open X-splines, a logical value indicating whether the first and last control points should be replicated for drawing the curve. Ignored for closed X-splines. |
| `draw` | logical: should the X-spline be drawn? If false, a set of line segments to draw the curve is returned, and nothing is drawn. |
| `border` | the color to draw the curve. Use `border = NA` to omit borders. |
| `col` | the color for filling the shape. The default, `NA`, is to leave unfilled. |
| `...` | [graphical parameters](par) such as `lty`, `xpd`, `lend`, `ljoin` and `lmitre` can be given as arguments. |
### Details
An X-spline is a line drawn relative to control points. For each control point, the line may pass through (interpolate) the control point or it may only approach (approximate) the control point; the behaviour is determined by a shape parameter for each control point.
If the shape parameter is greater than zero, the spline approximates the control points (and is very similar to a cubic B-spline when the shape is 1). If the shape parameter is less than zero, the spline interpolates the control points (and is very similar to a Catmull-Rom spline when the shape is -1). If the shape parameter is 0, the spline forms a sharp corner at that control point.
For open X-splines, the start and end control points must have a shape of 0 (and non-zero values are silently converted to zero).
For open X-splines, by default the start and end control points are replicated before the curve is drawn. A curve is drawn between (interpolating or approximating) the second and third of each set of four control points, so this default behaviour ensures that the resulting curve starts at the first control point you have specified and ends at the last control point. The default behaviour can be turned off via the `repEnds` argument.
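The effect of `repEnds` can be seen by drawing the same open spline with and without end replication (a minimal sketch):

```
x <- c(1, 2, 3, 4)/5; y <- c(1, 4, 4, 1)/5
plot(c(0, 1), c(0, 1), type = "n", axes = FALSE, ann = FALSE)
points(x, y, pch = 19)
xspline(x, y, shape = c(0, 1, 1, 0))     # reaches the first and last control points
xspline(x, y, shape = c(0, 1, 1, 0),
        repEnds = FALSE, border = "red") # curve only spans the inner control points
```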
### Value
If `draw = TRUE`, `NULL`; otherwise, a list with elements `x` and `y` which could be passed to `<lines>`, `<polygon>` and so on.
Invisible in both cases.
### Note
Two-dimensional splines need to be created in an isotropic coordinate system. Device coordinates are used (with an anisotropy correction if needed).
### References
Blanc, C. and Schlick, C. (1995), *X-splines : A Spline Model Designed for the End User*, in *Proceedings of SIGGRAPH 95*, pp. 377–386. <https://dept-info.labri.fr/~schlick/DOC/sig1.html>
### See Also
`<polygon>`.
`<par>` for how to specify colors.
### Examples
```
## based on examples in ?grid.xspline
xsplineTest <- function(s, open = TRUE,
x = c(1,1,3,3)/4,
y = c(1,3,3,1)/4, ...) {
plot(c(0,1), c(0,1), type = "n", axes = FALSE, xlab = "", ylab = "")
points(x, y, pch = 19)
xspline(x, y, s, open, ...)
text(x+0.05*c(-1,-1,1,1), y+0.05*c(-1,1,1,-1), s)
}
op <- par(mfrow = c(3,3), mar = rep(0,4), oma = c(0,0,2,0))
xsplineTest(c(0, -1, -1, 0))
xsplineTest(c(0, -1, 0, 0))
xsplineTest(c(0, -1, 1, 0))
xsplineTest(c(0, 0, -1, 0))
xsplineTest(c(0, 0, 0, 0))
xsplineTest(c(0, 0, 1, 0))
xsplineTest(c(0, 1, -1, 0))
xsplineTest(c(0, 1, 0, 0))
xsplineTest(c(0, 1, 1, 0))
title("Open X-splines", outer = TRUE)
par(mfrow = c(3,3), mar = rep(0,4), oma = c(0,0,2,0))
xsplineTest(c(0, -1, -1, 0), FALSE, col = "grey80")
xsplineTest(c(0, -1, 0, 0), FALSE, col = "grey80")
xsplineTest(c(0, -1, 1, 0), FALSE, col = "grey80")
xsplineTest(c(0, 0, -1, 0), FALSE, col = "grey80")
xsplineTest(c(0, 0, 0, 0), FALSE, col = "grey80")
xsplineTest(c(0, 0, 1, 0), FALSE, col = "grey80")
xsplineTest(c(0, 1, -1, 0), FALSE, col = "grey80")
xsplineTest(c(0, 1, 0, 0), FALSE, col = "grey80")
xsplineTest(c(0, 1, 1, 0), FALSE, col = "grey80")
title("Closed X-splines", outer = TRUE)
par(op)
x <- sort(stats::rnorm(5))
y <- sort(stats::rnorm(5))
plot(x, y, pch = 19)
res <- xspline(x, y, 1, draw = FALSE)
lines(res)
## the end points may be very close together,
## so use last few for direction
nr <- length(res$x)
arrows(res$x[1], res$y[1], res$x[4], res$y[4], code = 1, length = 0.1)
arrows(res$x[nr-3], res$y[nr-3], res$x[nr], res$y[nr], code = 2, length = 0.1)
```
`arrows` Add Arrows to a Plot
------------------------------
### Description
Draw arrows between pairs of points.
### Usage
```
arrows(x0, y0, x1 = x0, y1 = y0, length = 0.25, angle = 30,
code = 2, col = par("fg"), lty = par("lty"),
lwd = par("lwd"), ...)
```
### Arguments
| | |
| --- | --- |
| `x0, y0` | coordinates of points **from** which to draw. |
| `x1, y1` | coordinates of points **to** which to draw. At least one must be supplied. |
| `length` | length of the edges of the arrow head (in inches). |
| `angle` | angle from the shaft of the arrow to the edge of the arrow head. |
| `code` | integer code, determining *kind* of arrows to be drawn. |
| `col, lty, lwd` | [graphical parameters](par), possible vectors. `NA` values in `col` cause the arrow to be omitted. |
| `...` | [graphical parameters](par) such as `xpd` and the line characteristics `lend`, `ljoin` and `lmitre`: see `<par>`. |
### Details
For each `i`, an arrow is drawn between the point `(x0[i], y0[i])` and the point `(x1[i], y1[i])`. The coordinate vectors will be recycled to the length of the longest.
If `code = 1` an arrowhead is drawn at `(x0[i], y0[i])` and if `code = 2` an arrowhead is drawn at `(x1[i], y1[i])`. If `code = 3` a head is drawn at both ends of the arrow, unless `length = 0`, in which case no heads are drawn.
The [graphical parameters](par) `col`, `lty` and `lwd` can be vectors of length greater than one and will be recycled if necessary.
The direction of a zero-length arrow is indeterminate, and hence so is the direction of the arrowheads. To allow for rounding error, arrowheads are omitted (with a warning) on any arrow of length less than 1/1000 inch.
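A minimal sketch (not from the help page) contrasting the three `code` values, and showing that `col` is recycled over the arrows as described above:

```r
## Each 'code' value on its own row; heads at the from-end, to-end, or both.
plot(c(0, 4), c(0, 4), type = "n", xlab = "", ylab = "")
arrows(0, 1, 4, 1, code = 1)  # head at the 'from' end (x0, y0)
arrows(0, 2, 4, 2, code = 2)  # head at the 'to' end (x1, y1) -- the default
arrows(0, 3, 4, 3, code = 3)  # heads at both ends
text(2, 1:3 + 0.15, paste("code =", 1:3))
## col (like lty and lwd) may be a vector and is recycled over the arrows:
arrows(0:3, 0, 0:3, 0.5, col = c("red", "blue"))
```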
### Note
The first four arguments in the comparable S function are named `x1, y1, x2, y2`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<segments>` to draw segments.
### Examples
```
x <- stats::runif(12); y <- stats::rnorm(12)
i <- order(x, y); x <- x[i]; y <- y[i]
plot(x,y, main = "arrows(.) and segments(.)")
## draw arrows from point to point :
s <- seq(length(x)-1) # one shorter than data
arrows(x[s], y[s], x[s+1], y[s+1], col = 1:3)
s <- s[-length(s)]
segments(x[s], y[s], x[s+2], y[s+2], col = "pink")
```
r None
`persp` Perspective Plots
--------------------------
### Description
This function draws perspective plots of a surface over the x–y plane. `persp` is a generic function.
### Usage
```
persp(x, ...)
## Default S3 method:
persp(x = seq(0, 1, length.out = nrow(z)),
y = seq(0, 1, length.out = ncol(z)),
z, xlim = range(x), ylim = range(y),
zlim = range(z, na.rm = TRUE),
xlab = NULL, ylab = NULL, zlab = NULL,
main = NULL, sub = NULL,
theta = 0, phi = 15, r = sqrt(3), d = 1,
scale = TRUE, expand = 1,
col = "white", border = NULL, ltheta = -135, lphi = 0,
shade = NA, box = TRUE, axes = TRUE, nticks = 5,
ticktype = "simple", ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | locations of grid lines at which the values in `z` are measured. These must be in ascending order. By default, equally spaced values from 0 to 1 are used. If `x` is a `list`, its components `x$x` and `x$y` are used for `x` and `y`, respectively. |
| `z` | a matrix containing the values to be plotted (`NA`s are allowed). Note that `x` can be used instead of `z` for convenience. |
| `xlim, ylim, zlim` | x-, y- and z-limits. These should be chosen to cover the range of values of the surface: see ‘Details’. |
| `xlab, ylab, zlab` | titles for the axes. N.B. These must be character strings; expressions are not accepted. Numbers will be coerced to character strings. |
| `main, sub` | main and sub title, as for `<title>`. |
| `theta, phi` | angles defining the viewing direction. `theta` gives the azimuthal direction and `phi` the colatitude. |
| `r` | the distance of the eyepoint from the centre of the plotting box. |
| `d` | a value which can be used to vary the strength of the perspective transformation. Values of `d` greater than 1 will lessen the perspective effect and values less than 1 will exaggerate it. |
| `scale` | before viewing the x, y and z coordinates of the points defining the surface are transformed to the interval [0,1]. If `scale` is `TRUE` the x, y and z coordinates are transformed separately. If `scale` is `FALSE` the coordinates are scaled so that aspect ratios are retained. This is useful for rendering things like DEM information. |
| `expand` | an expansion factor applied to the `z` coordinates. Often used with `0 < expand < 1` to shrink the plotting box in the `z` direction. |
| `col` | the color(s) of the surface facets. Transparent colours are ignored. This is recycled to the *(nx-1)(ny-1)* facets. |
| `border` | the color of the line drawn around the surface facets. The default, `NULL`, corresponds to `par("fg")`. A value of `NA` will disable the drawing of borders: this is sometimes useful when the surface is shaded. |
| `ltheta, lphi` | if finite values are specified for `ltheta` and `lphi`, the surface is shaded as though it was being illuminated from the direction specified by azimuth `ltheta` and colatitude `lphi`. |
| `shade` | the shade at a surface facet is computed as `((1+d)/2)^shade`, where `d` is the dot product of a unit vector normal to the facet and a unit vector in the direction of a light source. Values of `shade` close to one yield shading similar to a point light source model and values close to zero produce no shading. Values in the range 0.5 to 0.75 provide an approximation to daylight illumination. |
| `box` | should the bounding box for the surface be displayed. The default is `TRUE`. |
| `axes` | should ticks and labels be added to the box. The default is `TRUE`. If `box` is `FALSE` then no ticks or labels are drawn. |
| `ticktype` | character: `"simple"` draws just an arrow parallel to the axis to indicate direction of increase; `"detailed"` draws normal ticks as per 2D plots. |
| `nticks` | the (approximate) number of tick marks to draw on the axes. Has no effect if `ticktype` is `"simple"`. |
| `...` | additional [graphical parameters](par) (see `<par>`). |
### Details
The plots are produced by first transforming the (x,y,z) coordinates to the interval [0,1] using the limits supplied or computed from the range of the data. The surface is then viewed by looking at the origin from a direction defined by `theta` and `phi`. If `theta` and `phi` are both zero the viewing direction is directly down the negative y axis. Changing `theta` will vary the azimuth and changing `phi` the colatitude.
There is a hook called `"persp"` (see `[setHook](../../base/html/userhooks)`) called after the plot is completed, which is used in the testing code to annotate the plot page. The hook function(s) are called with no argument.
Notice that `persp` interprets the `z` matrix as a table of `f(x[i], y[j])` values, so that the x axis corresponds to row number and the y axis to column number, with column 1 at the bottom, so that with the standard rotation angles, the top left corner of the matrix is displayed at the left hand side, closest to the user.
The sizes and fonts of the axis labels and the annotations for `ticktype = "detailed"` are controlled by graphics parameters `"cex.lab"`/`"font.lab"` and `"cex.axis"`/`"font.axis"` respectively.
The bounding box is drawn with edges of faces facing away from the viewer (and hence at the back of the box) with solid lines and other edges dashed and on top of the surface. This (and the plotting of the axes) assumes that the axis limits are chosen so that the surface is within the box, and the function will warn if this is not the case.
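The matrix orientation and the returned viewing matrix can be illustrated with a small sketch (not from the help page; the corner marked is arbitrary):

```r
## persp() reads z as f(x[i], y[j]): rows of z run along the x axis,
## columns along the y axis.
z <- outer(1:4, 1:3)                    # 4 x-values (rows) by 3 y-values (columns)
vt <- persp(x = 1:4, y = 1:3, z, theta = 30, phi = 30)
## vt is the 4 x 4 viewing transformation; project further 3D points
## onto the plot with grDevices::trans3d(), e.g. mark the corner (1, 1):
points(trans3d(1, 1, z[1, 1], pmat = vt), pch = 19, col = "red")
```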
### Value
`persp()` returns the *viewing transformation matrix*, say `VT`, a *4 x 4* matrix suitable for projecting 3D coordinates *(x,y,z)* into the 2D plane using homogeneous 4D coordinates *(x,y,z,t)*. It can be used to superimpose additional graphical elements on the 3D plot, by `<lines>()` or `<points>()`, using the function `[trans3d](../../grdevices/html/trans3d)()`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<contour>` and `<image>`; `[trans3d](../../grdevices/html/trans3d)`.
Rotatable 3D plots can be produced by package [rgl](https://CRAN.R-project.org/package=rgl): other ways to produce static perspective plots are available in packages [lattice](https://CRAN.R-project.org/package=lattice) and [scatterplot3d](https://CRAN.R-project.org/package=scatterplot3d).
### Examples
```
require(grDevices) # for trans3d
## More examples in demo(persp) !!
## -----------
# (1) The Obligatory Mathematical surface.
# Rotated sinc function.
x <- seq(-10, 10, length.out = 30)
y <- x
f <- function(x, y) { r <- sqrt(x^2+y^2); 10 * sin(r)/r }
z <- outer(x, y, f)
z[is.na(z)] <- 1
op <- par(bg = "white")
persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = "lightblue")
persp(x, y, z, theta = 30, phi = 30, expand = 0.5, col = "lightblue",
ltheta = 120, shade = 0.75, ticktype = "detailed",
xlab = "X", ylab = "Y", zlab = "Sinc( r )"
) -> res
round(res, 3)
# (2) Add to existing persp plot - using trans3d() :
xE <- c(-10,10); xy <- expand.grid(xE, xE)
points(trans3d(xy[,1], xy[,2], 6, pmat = res), col = 2, pch = 16)
lines (trans3d(x, y = 10, z = 6 + sin(x), pmat = res), col = 3)
phi <- seq(0, 2*pi, length.out = 201)
r1 <- 7.725 # radius of 2nd maximum
xr <- r1 * cos(phi)
yr <- r1 * sin(phi)
lines(trans3d(xr,yr, f(xr,yr), res), col = "pink", lwd = 2)
## (no hidden lines)
# (3) Visualizing a simple DEM model
z <- 2 * volcano # Exaggerate the relief
x <- 10 * (1:nrow(z)) # 10 meter spacing (S to N)
y <- 10 * (1:ncol(z)) # 10 meter spacing (E to W)
## Don't draw the grid lines : border = NA
par(bg = "slategray")
persp(x, y, z, theta = 135, phi = 30, col = "green3", scale = FALSE,
ltheta = -120, shade = 0.75, border = NA, box = FALSE)
# (4) Surface colours corresponding to z-values
par(bg = "white")
x <- seq(-1.95, 1.95, length.out = 30)
y <- seq(-1.95, 1.95, length.out = 35)
z <- outer(x, y, function(a, b) a*b^2)
nrz <- nrow(z)
ncz <- ncol(z)
# Create a function interpolating colors in the range of specified colors
jet.colors <- colorRampPalette( c("blue", "green") )
# Generate the desired number of colors from this palette
nbcol <- 100
color <- jet.colors(nbcol)
# Compute the z-value at the facet centres
zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]
# Recode facet z-values into color indices
facetcol <- cut(zfacet, nbcol)
persp(x, y, z, col = color[facetcol], phi = 30, theta = -30)
par(op)
```
r None
`text` Add Text to a Plot
--------------------------
### Description
`text` draws the strings given in the vector `labels` at the coordinates given by `x` and `y`. `y` may be missing since `[xy.coords](../../grdevices/html/xy.coords)(x, y)` is used for construction of the coordinates.
### Usage
```
text(x, ...)
## Default S3 method:
text(x, y = NULL, labels = seq_along(x$x), adj = NULL,
pos = NULL, offset = 0.5, vfont = NULL,
cex = 1, col = NULL, font = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric vectors of coordinates where the text `labels` should be written. If the length of `x` and `y` differs, the shorter one is recycled. |
| `labels` | a character vector or [expression](../../base/html/expression) specifying the *text* to be written. An attempt is made to coerce other language objects (names and calls) to expressions, and vectors and other classed objects to character vectors by `[as.character](../../base/html/character)`. If `labels` is longer than `x` and `y`, the coordinates are recycled to the length of `labels`. |
| `adj` | one or two values in *[0, 1]* which specify the x (and optionally y) adjustment (‘justification’) of the labels, with 0 for left/bottom, 1 for right/top, and 0.5 for centered. On most devices values outside *[0, 1]* will also work. See below. |
| `pos` | a position specifier for the text. If specified this overrides any `adj` value given. Values of `1`, `2`, `3` and `4`, respectively indicate positions below, to the left of, above and to the right of the specified `(x,y)` coordinates. |
| `offset` | when `pos` is specified, this value controls the distance (‘offset’) of the text label from the specified coordinate in fractions of a character width. |
| `vfont` | `NULL` for the current font family, or a character vector of length 2 for Hershey vector fonts. The first element of the vector selects a typeface and the second element selects a style. Ignored if `labels` is an expression. |
| `cex` | numeric **c**haracter **ex**pansion factor; multiplied by `<par>("cex")` yields the final character size. `NULL` and `NA` are equivalent to `1.0`. |
| `col, font` | the color and (if `vfont = NULL`) font to be used, possibly vectors. These default to the values of the global [graphical parameters](par) in `<par>()`. |
| `...` | further [graphical parameters](par) (from `<par>`), such as `srt`, `family` and `xpd`. |
### Details
`labels` must be of type `[character](../../base/html/character)` or `[expression](../../base/html/expression)` (or be coercible to such a type). In the latter case, quite a bit of mathematical notation is available such as sub- and superscripts, greek letters, fractions, etc.
`adj` allows *adj*ustment of the text position with respect to `(x, y)`. Values of 0, 0.5, and 1 specify that `(x, y)` should align with the left/bottom, middle and right/top of the text, respectively. The default is for centered text, i.e., `adj = c(0.5, NA)`. Accurate vertical centering needs character metric information on individual characters which is only available on some devices. Vertical alignment is done slightly differently for character strings and for expressions: `adj = c(0,0)` means to left-justify and to align on the baseline for strings but on the bottom of the bounding box for expressions. This also affects vertical centering: for strings the centering excludes any descenders whereas for expressions it includes them. Using `NA` for strings centers them, including descenders.
The `pos` and `offset` arguments can be used in conjunction with values returned by `identify` to recreate an interactively labelled plot.
Text can be rotated by using [graphical parameters](par) `srt` (see `<par>`). When `adj` is specified, a non-zero `srt` rotates the label about `(x, y)`. If `pos` is specified, `srt` rotates the text about the point on its bounding box which is closest to `(x, y)`: top center for `pos = 1`, right center for `pos = 2`, bottom center for `pos = 3`, and left center for `pos = 4`. The `pos` interface is not as useful for rotated text because the result is no longer centered vertically or horizontally with respect to `(x, y)`. At present there is no interface in the graphics package for directly rotating text about its center, although this can be achieved by fiddling with `adj` and `srt` simultaneously.
Graphical parameters `col`, `cex` and `font` can be vectors and will then be applied cyclically to the `labels` (and extra values will be ignored). `NA` values of `font` are replaced by `par("font")`, and similarly for `col`.
Labels whose `x`, `y` or `labels` value is `NA` are omitted from the plot.
What happens when `font = 5` (the symbol font) is selected can be both device- and locale-dependent. Most often `labels` will be interpreted in the Adobe symbol encoding, so e.g. `"d"` is delta, and `"\300"` is aleph.
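A short sketch (not from the help page) contrasting `adj` and `pos` placement around the same point, and `srt` rotation about it:

```r
## adj aligns the label on (x, y); pos places it beside (x, y) with an offset.
plot(1, 1, xlim = c(0, 2), ylim = c(0, 2), pch = 3, cex = 2)
text(1, 1, "adj = c(0, 0)", adj = c(0, 0))       # left-justified, baseline at (1, 1)
text(1, 1, "pos = 2", pos = 2)                   # left of the point, default offset 0.5
text(1, 1, "srt = 45", adj = c(0, 1), srt = 45)  # rotated about (1, 1)
```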
### Euro symbol
The Euro symbol may not be available in older fonts. In current versions of Adobe symbol fonts it is character 160, so `text(x,
y, "\xA0", font = 5)` may work. People using Western European locales on Unix-alikes can probably select ISO 8859-15 (Latin-9) which has the Euro as character 164: this can also be used for `[postscript](../../grdevices/html/postscript)` and `[pdf](../../grdevices/html/pdf)`. It is \u20ac in Unicode, which can be used in UTF-8 locales.
The Euro should be rendered correctly by `[X11](../../grdevices/html/x11)` in UTF-8 locales, but the corresponding single-byte encoding in `[postscript](../../grdevices/html/postscript)` and `[pdf](../../grdevices/html/pdf)` will need to be selected as `ISOLatin9.enc` (and the font will need to contain the Euro glyph, which for example older printers may not).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`[text.formula](plot.formula)` for the formula method; `<mtext>`, `<title>`, `[Hershey](../../grdevices/html/hershey)` for details on Hershey vector fonts, `[plotmath](../../grdevices/html/plotmath)` for details and more examples on mathematical annotation.
### Examples
```
plot(-1:1, -1:1, type = "n", xlab = "Re", ylab = "Im")
K <- 16; text(exp(1i * 2 * pi * (1:K) / K), col = 2)
## The following two examples use latin1 characters: these may not
## appear correctly (or be omitted entirely).
plot(1:10, 1:10, main = "text(...) examples\n~~~~~~~~~~~~~~",
sub = "R is GNU ©, but not ® ...")
mtext("«Latin-1 accented chars»: éè øØ å<Å æ<Æ", side = 3)
points(c(6,2), c(2,1), pch = 3, cex = 4, col = "red")
text(6, 2, "the text is CENTERED around (x,y) = (6,2) by default",
cex = .8)
text(2, 1, "or Left/Bottom - JUSTIFIED at (2,1) by 'adj = c(0,0)'",
adj = c(0,0))
text(4, 9, expression(hat(beta) == (X^t * X)^{-1} * X^t * y))
text(4, 8.4, "expression(hat(beta) == (X^t * X)^{-1} * X^t * y)",
cex = .75)
text(4, 7, expression(bar(x) == sum(frac(x[i], n), i==1, n)))
## Two more latin1 examples
text(5, 10.2,
"Le français, c'est façile: Règles, Liberté, Egalité, Fraternité...")
text(5, 9.8,
"Jetz no chli züritüütsch: (noch ein bißchen Zürcher deutsch)")
```
r None
`fourfoldplot` Fourfold Plots
------------------------------
### Description
Creates a fourfold display of a 2 by 2 by *k* contingency table on the current graphics device, allowing for the visual inspection of the association between two dichotomous variables in one or several populations (strata).
### Usage
```
fourfoldplot(x, color = c("#99CCFF", "#6699CC"),
conf.level = 0.95,
std = c("margins", "ind.max", "all.max"),
margin = c(1, 2), space = 0.2, main = NULL,
mfrow = NULL, mfcol = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | a 2 by 2 by *k* contingency table in array form, or as a 2 by 2 matrix if *k* is 1. |
| `color` | a vector of length 2 specifying the colors to use for the smaller and larger diagonals of each 2 by 2 table. |
| `conf.level` | confidence level used for the confidence rings on the odds ratios. Must be a single nonnegative number less than 1; if set to 0, confidence rings are suppressed. |
| `std` | a character string specifying how to standardize the table. Must match one of `"margins"`, `"ind.max"`, or `"all.max"`, and can be abbreviated to the initial letter. If set to `"margins"`, each 2 by 2 table is standardized to equate the margins specified by `margin` while preserving the odds ratio. If `"ind.max"` or `"all.max"`, the tables are either individually or simultaneously standardized to a maximal cell frequency of 1. |
| `margin` | a numeric vector with the margins to equate. Must be one of `1`, `2`, or `c(1, 2)` (the default), which corresponds to standardizing the row, column, or both margins in each 2 by 2 table. Only used if `std` equals `"margins"`. |
| `space` | the amount of space (as a fraction of the maximal radius of the quarter circles) used for the row and column labels. |
| `main` | character string for the fourfold title. |
| `mfrow` | a numeric vector of the form `c(nr, nc)`, indicating that the displays for the 2 by 2 tables should be arranged in an `nr` by `nc` layout, filled by rows. |
| `mfcol` | a numeric vector of the form `c(nr, nc)`, indicating that the displays for the 2 by 2 tables should be arranged in an `nr` by `nc` layout, filled by columns. |
### Details
The fourfold display is designed for the display of 2 by 2 by *k* tables.
Following suitable standardization, the cell frequencies *f[i,j]* of each 2 by 2 table are shown as a quarter circle whose radius is proportional to *sqrt(f[i,j])* so that its area is proportional to the cell frequency. An association (odds ratio different from 1) between the binary row and column variables is indicated by the tendency of diagonally opposite cells in one direction to differ in size from those in the other direction; color is used to show this direction. Confidence rings for the odds ratio allow a visual test of the null of no association; the rings for adjacent quadrants overlap if and only if the observed counts are consistent with the null hypothesis.
Typically, the number *k* corresponds to the number of levels of a stratifying variable, and it is of interest to see whether the association is homogeneous across strata. The fourfold display visualizes the pattern of association. Note that the confidence rings for the individual odds ratios are not adjusted for multiple testing.
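A minimal sketch for the *k = 1* case, using a hypothetical table (the `Exposed`/`Outcome` labels and counts are invented for illustration); the quarter-circle areas are proportional to the standardized cell frequencies and the colours show the direction of the association:

```r
tab <- matrix(c(30, 10, 15, 45), nrow = 2,
              dimnames = list(Exposed = c("yes", "no"),
                              Outcome = c("yes", "no")))
fourfoldplot(tab)
## The sample odds ratio the display depicts:
(tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1])  # 9
```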
### References
Friendly, M. (1994). A fourfold display for 2 by 2 by *k* tables. Technical Report 217, York University, Psychology Department. <http://datavis.ca/papers/4fold/4fold.pdf>
### See Also
`<mosaicplot>`
### Examples
```
## Use the Berkeley admission data as in Friendly (1994).
x <- aperm(UCBAdmissions, c(2, 1, 3))
dimnames(x)[[2]] <- c("Yes", "No")
names(dimnames(x)) <- c("Sex", "Admit?", "Department")
stats::ftable(x)
## Fourfold display of data aggregated over departments, with
## frequencies standardized to equate the margins for admission
## and sex.
## Figure 1 in Friendly (1994).
fourfoldplot(marginSums(x, c(1, 2)))
## Fourfold display of x, with frequencies in each table
## standardized to equate the margins for admission and sex.
## Figure 2 in Friendly (1994).
fourfoldplot(x)
## Fourfold display of x, with frequencies in each table
## standardized to equate the margins for admission, but not
## for sex.
## Figure 3 in Friendly (1994).
fourfoldplot(x, margin = 2)
```
r None
`image` Display a Color Image
------------------------------
### Description
Creates a grid of colored or gray-scale rectangles with colors corresponding to the values in `z`. This can be used to display three-dimensional or spatial data aka *images*. This is a generic function.
*NOTE:* the grid is drawn as a set of rectangles by default; see the `useRaster` argument to draw the grid as a raster image.
The function `[hcl.colors](../../grdevices/html/palettes)` provides a broad range of sequential color palettes that are suitable for displaying ordered data, with `n` giving the number of colors desired.
### Usage
```
image(x, ...)
## Default S3 method:
image(x, y, z, zlim, xlim, ylim,
col = hcl.colors(12, "YlOrRd", rev = TRUE),
add = FALSE, xaxs = "i", yaxs = "i", xlab, ylab,
breaks, oldstyle = FALSE, useRaster, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | locations of grid lines at which the values in `z` are measured. These must be finite, non-missing and in (strictly) ascending order. By default, equally spaced values from 0 to 1 are used. If `x` is a `list`, its components `x$x` and `x$y` are used for `x` and `y`, respectively. If the list has component `z` this is used for `z`. |
| `z` | a numeric or logical matrix containing the values to be plotted (`NA`s are allowed). Note that `x` can be used instead of `z` for convenience. |
| `zlim` | the minimum and maximum `z` values for which colors should be plotted, defaulting to the range of the finite values of `z`. Each of the given colors will be used to color an equispaced interval of this range. The *midpoints* of the intervals cover the range, so that values just outside the range will be plotted. |
| `xlim, ylim` | ranges for the plotted `x` and `y` values, defaulting to the ranges of `x` and `y`. |
| `col` | a list of colors such as that generated by `[hcl.colors](../../grdevices/html/palettes)`, `[gray.colors](../../grdevices/html/gray.colors)` or similar functions. |
| `add` | logical; if `TRUE`, add to current plot (and disregard the following four arguments). This is rarely useful because `image` ‘paints’ over existing graphics. |
| `xaxs, yaxs` | style of x and y axis. The default `"i"` is appropriate for images. See `<par>`. |
| `xlab, ylab` | each a character string giving the labels for the x and y axis. Default to the ‘call names’ of `x` or `y`, or to `""` if these were unspecified. |
| `breaks` | a set of finite numeric breakpoints for the colours: must have one more breakpoint than colour and be in increasing order. Unsorted vectors will be sorted, with a warning. |
| `oldstyle` | logical. If true the midpoints of the colour intervals are equally spaced, and `zlim[1]` and `zlim[2]` were taken to be midpoints. The default is to have colour intervals of equal lengths between the limits. |
| `useRaster` | logical; if `TRUE` a bitmap raster is used to plot the image instead of polygons. The grid must be regular in that case, otherwise an error is raised. For the behaviour when this is not specified, see ‘Details’. |
| `...` | [graphical parameters](par) for `[plot](plot.default)` may also be passed as arguments to this function, as can the plot aspect ratio `asp` and `axes` (see `<plot.window>`). |
### Details
The length of `x` should be equal to `nrow(z)+1` or `nrow(z)`. In the first case `x` specifies the boundaries between the cells: in the second case `x` specifies the midpoints of the cells. Similar reasoning applies to `y`. It probably only makes sense to specify the midpoints of an equally-spaced grid. If you specify just one row or column and a length-one `x` or `y`, the whole user area in the corresponding direction is filled. For logarithmic `x` or `y` axes the boundaries between cells must be specified.
Rectangles corresponding to missing values are not plotted (and so are transparent and (unless `add = TRUE`) the default background painted in `par("bg")` will show through and if that is transparent, the canvas colour will be seen).
If `breaks` is specified then `zlim` is unused and the algorithm used follows `[cut](../../base/html/cut)`, so intervals are closed on the right and open on the left except for the lowest interval which is closed at both ends.
The axes (where plotted) make use of the classes of `xlim` and `ylim` (and hence by default the classes of `x` and `y`): this will mean that for example dates are labelled as such.
Notice that `image` interprets the `z` matrix as a table of `f(x[i], y[j])` values, so that the x axis corresponds to row number and the y axis to column number, with column 1 at the bottom, i.e. a 90 degree counter-clockwise rotation of the conventional printed layout of a matrix.
Images for large `z` on a regular grid are rendered more efficiently with `useRaster = TRUE` and can prevent rare anti-aliasing artifacts, but may not be supported by all graphics devices. Some devices (such as `postscript` and `X11(type =
"Xlib")`) which do not support semi-transparent colours may emit missing values as white rather than transparent, and there may be limitations on the size of a raster image. (Problems with the rendering of raster images have been reported by users of `windows()` devices under Remote Desktop, at least under its default settings.)
The graphics files in PDF and PostScript can be much smaller under this option.
If `useRaster` is not specified, raster images are used when the `[getOption](../../base/html/options)("preferRaster")` is true, the grid is regular and either `[dev.capabilities](../../grdevices/html/dev.capabilities)("rasterImage")$rasterImage` is `"yes"` or it is `"non-missing"` and there are no missing values.
### Note
Originally based on a function by Thomas Lumley.
### See Also
`<filled.contour>` or `[heatmap](../../stats/html/heatmap)` which can look nicer (but are less modular), `<contour>`; The [lattice](https://CRAN.R-project.org/package=lattice) equivalent of `image` is `[levelplot](../../lattice/html/levelplot)`.
`[hcl.colors](../../grdevices/html/palettes)`, `[gray.colors](../../grdevices/html/gray.colors)`, `[hcl](../../grdevices/html/hcl)`, `[hsv](../../grdevices/html/hsv)`, `<par>`.
`[dev.capabilities](../../grdevices/html/dev.capabilities)` to see if `useRaster = TRUE` is supported on the current device.
### Examples
```
require("grDevices") # for colours
x <- y <- seq(-4*pi, 4*pi, length.out = 27)
r <- sqrt(outer(x^2, y^2, "+"))
image(z = z <- cos(r^2)*exp(-r/6), col = gray.colors(33))
image(z, axes = FALSE, main = "Math can be beautiful ...",
xlab = expression(cos(r^2) * e^{-r/6}))
contour(z, add = TRUE, drawlabels = FALSE)
# Volcano data visualized as matrix. Need to transpose and flip
# matrix horizontally.
image(t(volcano)[ncol(volcano):1,])
# A prettier display of the volcano
x <- 10*(1:nrow(volcano))
y <- 10*(1:ncol(volcano))
image(x, y, volcano, col = hcl.colors(100, "terrain"), axes = FALSE)
contour(x, y, volcano, levels = seq(90, 200, by = 5),
add = TRUE, col = "brown")
axis(1, at = seq(100, 800, by = 100))
axis(2, at = seq(100, 600, by = 100))
box()
title(main = "Maunga Whau Volcano", font.main = 4)
```
r None
`stripchart` 1-D Scatter Plots
-------------------------------
### Description
`stripchart` produces one dimensional scatter plots (or dot plots) of the given data. These plots are a good alternative to `<boxplot>`s when sample sizes are small.
### Usage
```
stripchart(x, ...)
## S3 method for class 'formula'
stripchart(x, data = NULL, dlab = NULL, ...,
subset, na.action = NULL)
## Default S3 method:
stripchart(x, method = "overplot", jitter = 0.1, offset = 1/3,
vertical = FALSE, group.names, add = FALSE,
at = NULL, xlim = NULL, ylim = NULL,
ylab = NULL, xlab = NULL, dlab = "", glab = "",
log = "", pch = 0, col = par("fg"), cex = par("cex"),
axes = TRUE, frame.plot = axes, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | the data from which the plots are to be produced. In the default method the data can be specified as a single numeric vector, or as list of numeric vectors, each corresponding to a component plot. In the `formula` method, a symbolic specification of the form `y ~ g` can be given, indicating the observations in the vector `y` are to be grouped according to the levels of the factor `g`. `NA`s are allowed in the data. |
| `data` | a data.frame (or list) from which the variables in `x` should be taken. |
| `subset` | an optional vector specifying a subset of observations to be used for plotting. |
| `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is to ignore missing values in either the response or the group. |
| `...` | additional parameters passed to the default method, or by it to `<plot.window>`, `<points>`, `<axis>` and `<title>` to control the appearance of the plot. |
| `method` | the method to be used to separate coincident points. The default method `"overplot"` causes such points to be overplotted, but it is also possible to specify `"jitter"` to jitter the points, or `"stack"` to have coincident points stacked. The last method only makes sense for very granular data. |
| `jitter` | when `method = "jitter"` is used, `jitter` gives the amount of jittering applied. |
| `offset` | when stacking is used, points are stacked this many line-heights (symbol widths) apart. |
| `vertical` | when vertical is `TRUE` the plots are drawn vertically rather than the default horizontal. |
| `group.names` | group labels which will be printed alongside (or underneath) each plot. |
| `add` | logical, if true *add* the chart to the current plot. |
| `at` | numeric vector giving the locations where the charts should be drawn, particularly when `add = TRUE`; defaults to `1:n` where `n` is the number of boxes. |
| `ylab, xlab` | labels: see `<title>`. |
| `dlab, glab` | alternate way to specify axis labels: see ‘Details’. |
| `xlim, ylim` | plot limits: see `<plot.window>`. |
| `log` | on which axes to use a log scale: see `<plot.default>` |
| `pch, col, cex` | Graphical parameters: see `<par>`. |
| `axes, frame.plot` | Axis control: see `<plot.default>`. |
### Details
Extensive examples of the use of this kind of plot can be found in Box, Hunter and Hunter or Seber and Wild.
The `dlab` and `glab` labels may be used instead of `xlab` and `ylab` if those are not specified. `dlab` applies to the continuous data axis (the X axis unless `vertical` is `TRUE`), `glab` to the group axis.
### Examples
```
x <- stats::rnorm(50)
xr <- round(x, 1)
stripchart(x) ; m <- mean(par("usr")[1:2])
text(m, 1.04, "stripchart(x, \"overplot\")")
stripchart(xr, method = "stack", add = TRUE, at = 1.2)
text(m, 1.35, "stripchart(round(x,1), \"stack\")")
stripchart(xr, method = "jitter", add = TRUE, at = 0.7)
text(m, 0.85, "stripchart(round(x,1), \"jitter\")")
stripchart(decrease ~ treatment,
main = "stripchart(OrchardSprays)",
vertical = TRUE, log = "y", data = OrchardSprays)
stripchart(decrease ~ treatment, at = c(1:8)^2,
main = "stripchart(OrchardSprays)",
vertical = TRUE, log = "y", data = OrchardSprays)
```
r None
`boxplot.matrix` Draw a Boxplot for each Column (Row) of a Matrix
------------------------------------------------------------------
### Description
Interpreting the columns (or rows) of a matrix as different groups, draw a boxplot for each.
### Usage
```
## S3 method for class 'matrix'
boxplot(x, use.cols = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric matrix. |
| `use.cols` | logical indicating if columns (by default) or rows (`use.cols = FALSE`) should be plotted. |
| `...` | Further arguments to `<boxplot>`. |
### Value
A list as for `<boxplot>`.
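As a sketch, `use.cols = FALSE` summarises the rows instead (using an arbitrary random matrix):

```
m <- matrix(rnorm(40), nrow = 4,
            dimnames = list(paste0("row", 1:4), NULL))
bx <- boxplot(m, use.cols = FALSE)  # one box per row
bx$names                            # the row names, as for boxplot()
```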
### Author(s)
Martin Maechler, 1995, for S+, then **R** package [sfsmisc](https://CRAN.R-project.org/package=sfsmisc).
### See Also
`[boxplot.default](boxplot)` which already works nowadays with data.frames; `[boxplot.formula](boxplot)`, `<plot.factor>` which work with (the more general concept) of a grouping factor.
### Examples
```
## Very similar to the example in ?boxplot
mat <- cbind(Uni05 = (1:100)/21, Norm = rnorm(100),
T5 = rt(100, df = 5), Gam2 = rgamma(100, shape = 2))
boxplot(mat, main = "boxplot.matrix(...., main = ...)",
notch = TRUE, col = 1:4)
```
`plot.xy` Basic Internal Plot Function
---------------------------------------
### Description
This is *the* internal function that does the basic plotting of points and lines. Usually, one should rather use the higher level functions instead and refer to their help pages for explanation of the arguments.
### Usage
```
plot.xy(xy, type, pch = par("pch"), lty = par("lty"),
col = par("col"), bg = NA,
cex = 1, lwd = par("lwd"), ...)
```
### Arguments
| | |
| --- | --- |
| `xy` | A four-element list as results from `[xy.coords](../../grdevices/html/xy.coords)`. |
| `type` | 1 character code: see `<plot.default>`. `NULL` is accepted as a synonym for `"p"`. |
| `pch` | character or integer code for kind of points, see `[points.default](points)`. |
| `lty` | line type code, see `<lines>`. |
| `col` | color code or name, see `[colors](../../grdevices/html/colors)`, `[palette](../../grdevices/html/palette)`. Here `NULL` means colour 0. |
| `bg` | background (fill) color for the open plot symbols 21:25: see `[points.default](points)`. |
| `cex` | character expansion. |
| `lwd` | line width, also used for (non-filled) plot symbols, see `<lines>` and `<points>`. |
| `...` | further [graphical parameters](par) such as `xpd`, `lend`, `ljoin` and `lmitre`. |
### Details
The arguments `pch`, `col`, `bg`, `cex`, `lwd` may be vectors and may be recycled, depending on `type`: see `<points>` and `<lines>` for specifics. In particular note that `lwd` is treated as a vector for points and as a single (first) value for lines.
`cex` is a numeric factor in addition to `par("cex")` which affects symbols and characters as drawn by `type` `"p"`, `"o"`, `"b"` and `"c"`.
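A minimal sketch of these recycling rules (one would normally call `points()` rather than `plot.xy` directly):

```
plot.new()
plot.window(xlim = c(0, 5), ylim = c(0, 5))
xy <- grDevices::xy.coords(1:4, 1:4)
## pch and col are recycled over the four points;
## for type = "l" only the first lwd value would be used
plot.xy(xy, type = "p", pch = c(1, 19), col = c("red", "blue"), cex = 2)
```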
### See Also
`[plot](plot.default)`, `<plot.default>`, `<points>`, `<lines>`.
### Examples
```
points.default # to see how it calls "plot.xy(xy.coords(x, y), ...)"
```
`pie` Pie Charts
-----------------
### Description
Draw a pie chart.
### Usage
```
pie(x, labels = names(x), edges = 200, radius = 0.8,
clockwise = FALSE, init.angle = if(clockwise) 90 else 0,
density = NULL, angle = 45, col = NULL, border = NULL,
lty = NULL, main = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a vector of non-negative numerical quantities. The values in `x` are displayed as the areas of pie slices. |
| `labels` | one or more expressions or character strings giving names for the slices. Other objects are coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`. For empty or `NA` (after coercion to character) labels, no label nor pointing line is drawn. |
| `edges` | the circular outline of the pie is approximated by a polygon with this many edges. |
| `radius` | the pie is drawn centered in a square box whose sides range from *-1* to *1*. If the character strings labeling the slices are long it may be necessary to use a smaller radius. |
| `clockwise` | logical indicating if slices are drawn clockwise or counter clockwise (i.e., mathematically positive direction), the latter is default. |
| `init.angle` | number specifying the *starting angle* (in degrees) for the slices. Defaults to 0 (i.e., ‘3 o'clock’) unless `clockwise` is true where `init.angle` defaults to 90 (degrees), (i.e., ‘12 o'clock’). |
| `density` | the density of shading lines, in lines per inch. The default value of `NULL` means that no shading lines are drawn. Non-positive values of `density` also inhibit the drawing of shading lines. |
| `angle` | the slope of shading lines, given as an angle in degrees (counter-clockwise). |
| `col` | a vector of colors to be used in filling or shading the slices. If missing a set of 6 pastel colours is used, unless `density` is specified when `par("fg")` is used. |
| `border, lty` | (possibly vectors) arguments passed to `<polygon>` which draws each slice. |
| `main` | an overall title for the plot. |
| `...` | [graphical parameters](par) can be given as arguments to `pie`. They will affect the main title and labels only. |
### Note
Pie charts are a very bad way of displaying information. The eye is good at judging linear measures and bad at judging relative areas. A bar chart or dot chart is a preferable way of displaying this type of data.
Cleveland (1985), page 264: “Data that can be shown by pie charts always can be shown by a dot chart. This means that judgements of position along a common scale can be made instead of the less accurate angle judgements.” This statement is based on the empirical investigations of Cleveland and McGill as well as investigations by perceptual psychologists.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Cleveland, W. S. (1985) *The Elements of Graphing Data*. Wadsworth: Monterey, CA, USA.
### See Also
`<dotchart>`.
### Examples
```
require(grDevices)
pie(rep(1, 24), col = rainbow(24), radius = 0.9)
pie.sales <- c(0.12, 0.3, 0.26, 0.16, 0.04, 0.12)
names(pie.sales) <- c("Blueberry", "Cherry",
"Apple", "Boston Cream", "Other", "Vanilla Cream")
pie(pie.sales) # default colours
pie(pie.sales, col = c("purple", "violetred1", "green3",
"cornsilk", "cyan", "white"))
pie(pie.sales, col = gray(seq(0.4, 1.0, length.out = 6)))
pie(pie.sales, density = 10, angle = 15 + 10 * 1:6)
pie(pie.sales, clockwise = TRUE, main = "pie(*, clockwise = TRUE)")
segments(0, 0, 0, 1, col = "red", lwd = 2)
text(0, 1, "init.angle = 90", col = "red")
n <- 200
pie(rep(1, n), labels = "", col = rainbow(n), border = NA,
main = "pie(*, labels=\"\", col=rainbow(n), border=NA,..")
## Another case showing pie() is rather fun than science:
## (original by FinalBackwardsGlance on http://imgur.com/gallery/wWrpU4X)
pie(c(Sky = 78, "Sunny side of pyramid" = 17, "Shady side of pyramid" = 5),
init.angle = 315, col = c("deepskyblue", "yellow", "yellow3"), border = FALSE)
```
`plot.formula` Formula Notation for Scatterplots
-------------------------------------------------
### Description
Specify a scatterplot or add points, lines, or text via a formula.
### Usage
```
## S3 method for class 'formula'
plot(formula, data = parent.frame(), ..., subset,
ylab = varnames[response], ask = dev.interactive())
## S3 method for class 'formula'
points(formula, data = parent.frame(), ..., subset)
## S3 method for class 'formula'
lines(formula, data = parent.frame(), ..., subset)
## S3 method for class 'formula'
text(formula, data = parent.frame(), ..., subset)
```
### Arguments
| | |
| --- | --- |
| `formula` | a `[formula](../../stats/html/formula)`, such as `y ~ x`. |
| `data` | a data.frame (or list) from which the variables in `formula` should be taken. A matrix is converted to a data frame. |
| `...` | Arguments to be passed to or from other methods. `horizontal = TRUE` is also accepted. |
| `subset` | an optional vector specifying a subset of observations to be used in the fitting process. |
| `ylab` | the y label of the plot(s). |
| `ask` | logical, see `<par>`. |
### Details
For the `lines`, `points` and `text` methods the formula should be of the form `y ~ x` or `y ~ 1` with a left-hand side and a single term on the right-hand side. The `plot` method accepts other forms discussed later in this section.
Both the terms in the formula and the `...` arguments are evaluated in `data` enclosed in `parent.frame()` if `data` is a list or a data frame. The terms of the formula and those arguments in `...` that are of the same length as `data` are subjected to the subsetting specified in `subset`. A plot against the running index can be specified as `plot(y ~ 1)`.
If the formula in the `plot` method contains more than one term on the right-hand side, a series of plots is produced of the response against each non-response term.
For the `plot` method the formula can be of the form `~ x + y + z`: the variables specified on the right-hand side are collected into a data frame, subsetted if specified, and displayed by `[plot.data.frame](plot.dataframe)`.
Missing values are not considered in these methods, and in particular cases with missing values are not removed.
If `y` is an object (i.e., has a `[class](../../base/html/class)` attribute) then `plot.formula` looks for a plot method for that class first. Otherwise, the class of `x` will determine the type of the plot. For factors this will be a parallel boxplot, and argument `horizontal = TRUE` can be specified (see `<boxplot>`).
Note that some arguments will need to be protected from premature evaluation by enclosing them in `[quote](../../base/html/substitute)`: currently this is done automatically for `main`, `sub` and `xlab`. For example, it is needed for the `panel.first` and `panel.last` arguments passed to `<plot.default>`.
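For example, `panel.first` can be protected from premature evaluation as described above (a sketch using the `airquality` data set):

```
## grid() must not be evaluated before plot.default() sets up the axes
plot(Ozone ~ Wind, data = airquality,
     panel.first = quote(grid(col = "grey80")))
```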
### Value
These functions are invoked for their side effect of drawing on the active graphics device.
### See Also
`<plot.default>`, `<points>`, `<lines>`, `<plot.factor>`.
### Examples
```
op <- par(mfrow = c(2,1))
plot(Ozone ~ Wind, data = airquality, pch = as.character(Month))
plot(Ozone ~ Wind, data = airquality, pch = as.character(Month),
subset = Month != 7)
par(op)
## text.formula() can be very natural:
wb <- within(warpbreaks, {
time <- seq_along(breaks); W.T <- wool:tension })
plot(breaks ~ time, data = wb, type = "b")
text(breaks ~ time, data = wb, labels = W.T, col = 1+as.integer(wool))
```
`par` Set or Query Graphical Parameters
----------------------------------------
### Description
`par` can be used to set or query graphical parameters. Parameters can be set by specifying them as arguments to `par` in `tag = value` form, or by passing them as a list of tagged values.
### Usage
```
par(..., no.readonly = FALSE)
<highlevel plot> (..., <tag> = <value>)
```
### Arguments
| | |
| --- | --- |
| `...` | arguments in `tag = value` form, or a list of tagged values. The tags must come from the names of graphical parameters described in the ‘Graphical Parameters’ section. |
| `no.readonly` | logical; if `TRUE` and there are no other arguments, only parameters are returned which can be set by a subsequent `par()` call *on the same device*. |
### Details
Each device has its own set of graphical parameters. If the current device is the null device, `par` will open a new device before querying/setting parameters. (What device is controlled by `[options](../../base/html/options)("device")`.)
Parameters are queried by giving one or more character vectors of parameter names to `par`.
`par()` (no arguments) or `par(no.readonly = TRUE)` is used to get *all* the graphical parameters (as a named list). Their names are currently taken from the unexported variable `graphics:::.Pars`.
***R.O.*** indicates ***read-only arguments***: These may only be used in queries and cannot be set. (`"cin"`, `"cra"`, `"csi"`, `"cxy"`, `"din"` and `"page"` are always read-only.)
Several parameters can only be set by a call to `par()`:
* `"ask"`,
* `"fig"`, `"fin"`,
* `"lheight"`,
* `"mai"`, `"mar"`, `"mex"`, `"mfcol"`, `"mfrow"`, `"mfg"`,
* `"new"`,
* `"oma"`, `"omd"`, `"omi"`,
* `"pin"`, `"plt"`, `"ps"`, `"pty"`,
* `"usr"`,
* `"xlog"`, `"ylog"`,
* `"ylbias"`
The remaining parameters can also be set as arguments (often via `...`) to high-level plot functions such as `<plot.default>`, `<plot.window>`, `<points>`, `<lines>`, `<abline>`, `<axis>`, `<title>`, `<text>`, `<mtext>`, `<segments>`, `<symbols>`, `<arrows>`, `<polygon>`, `<rect>`, `<box>`, `<contour>`, `<filled.contour>` and `<image>`. Such settings will be active during the execution of the function, only. However, see the comments on `bg`, `cex`, `col`, `lty`, `lwd` and `pch` which may be taken as *arguments* to certain plot functions rather than as graphical parameters.
The meaning of ‘character size’ is not well-defined: this is set up for the device taking `pointsize` into account but often not the actual font family in use. Internally the corresponding pars (`cra`, `cin`, `cxy` and `csi`) are used only to set the inter-line spacing used to convert `mar` and `oma` to physical margins. (The same inter-line spacing multiplied by `lheight` is used for multi-line strings in `text` and `strheight`.)
Note that graphical parameters are suggestions: plotting functions and devices need not make use of them (and this is particularly true of non-default methods for e.g. `plot`).
### Value
When parameters are set, their previous values are returned in an invisible named list. Such a list can be passed as an argument to `par` to restore the parameter values. Use `par(no.readonly = TRUE)` for the full list of parameters that can be restored. However, restoring all of these is not wise: see the ‘Note’ section.
When just one parameter is queried, the value of that parameter is returned as (atomic) vector. When two or more parameters are queried, their values are returned in a list, with the list names giving the parameters.
Note the inconsistency: setting one parameter returns a list, but querying one parameter returns a vector.
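A sketch of the set/query behaviour described above:

```
op <- par(mfrow = c(2, 2), mar = c(4, 4, 2, 1))  # previous values, invisibly
plot(1:10)
par(op)                  # restore the previous settings

par("lwd")               # one parameter queried: an atomic vector
par("lwd", "lty")        # two parameters queried: a named list
```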
### Graphical Parameters
`adj`
The value of `adj` determines the way in which text strings are justified in `<text>`, `<mtext>` and `<title>`. A value of `0` produces left-justified text, `0.5` (the default) centered text and `1` right-justified text. (Any value in *[0, 1]* is allowed, and on most devices values outside that interval will also work.)
Note that the `adj` *argument* of `<text>` also allows `adj = c(x, y)` for different adjustment in x- and y- directions. Note that whereas for `text` it refers to positioning of text about a point, for `mtext` and `title` it controls placement within the plot or device region.
`ann`
If set to `FALSE`, high-level plotting functions calling `<plot.default>` do not annotate the plots they produce with axis titles and overall titles. The default is to do annotation.
`ask`
logical. If `TRUE` (and the **R** session is interactive) the user is asked for input, before a new figure is drawn. As this applies to the device, it also affects output by packages grid and [lattice](https://CRAN.R-project.org/package=lattice). It can be set even on non-screen devices but may have no effect there.
This is not really a graphics parameter, and its use is deprecated in favour of `[devAskNewPage](../../grdevices/html/devasknewpage)`.
`bg`
The color to be used for the background of the device region. When called from `par()` it also sets `new = FALSE`. See section ‘Color Specification’ for suitable values. For many devices the initial value is set from the `bg` argument of the device, and for the rest it is normally `"white"`.
Note that some graphics functions such as `<plot.default>` and `<points>` have an *argument* of this name with a different meaning.
`bty`
A character string which determines the type of `<box>` which is drawn about plots. If `bty` is one of `"o"` (the default), `"l"`, `"7"`, `"c"`, `"u"`, or `"]"` the resulting box resembles the corresponding upper case letter. A value of `"n"` suppresses the box.
`cex`
A numerical value giving the amount by which plotting text and symbols should be magnified relative to the default. This starts as `1` when a device is opened, and is reset when the layout is changed, e.g. by setting `mfrow`.
Note that some graphics functions such as `<plot.default>` have an *argument* of this name which *multiplies* this graphical parameter, and some functions such as `<points>` and `<text>` accept a vector of values which are recycled.
`cex.axis`
The magnification to be used for axis annotation relative to the current setting of `cex`.
`cex.lab`
The magnification to be used for x and y labels relative to the current setting of `cex`.
`cex.main`
The magnification to be used for main titles relative to the current setting of `cex`.
`cex.sub`
The magnification to be used for sub-titles relative to the current setting of `cex`.
`cin`
***R.O.***; character size `(width, height)` in inches. These are the same measurements as `cra`, expressed in different units.
`col`
A specification for the default plotting color. See section ‘Color Specification’.
Some functions such as `<lines>` and `<text>` accept a vector of values which are recycled and may be interpreted slightly differently.
`col.axis`
The color to be used for axis annotation. Defaults to `"black"`.
`col.lab`
The color to be used for x and y labels. Defaults to `"black"`.
`col.main`
The color to be used for plot main titles. Defaults to `"black"`.
`col.sub`
The color to be used for plot sub-titles. Defaults to `"black"`.
`cra`
***R.O.***; size of default character `(width, height)` in ‘rasters’ (pixels). Some devices have no concept of pixels and so assume an arbitrary pixel size, usually 1/72 inch. These are the same measurements as `cin`, expressed in different units.
`crt`
A numerical value specifying (in degrees) how single characters should be rotated. It is unwise to expect values other than multiples of 90 to work. Compare with `srt` which does string rotation.
`csi`
***R.O.***; height of (default-sized) characters in inches. The same as `par("cin")[2]`.
`cxy`
***R.O.***; size of default character `(width, height)` in user coordinate units. `par("cxy")` is `par("cin")/par("pin")` scaled to user coordinates. Note that `c(<strwidth>(ch), [strheight](strwidth)(ch))` for a given string `ch` is usually much more precise.
`din`
***R.O.***; the device dimensions, `(width, height)`, in inches. See also `[dev.size](../../grdevices/html/dev.size)`, which is updated immediately when an on-screen device window is re-sized.
`err`
(*Unimplemented*; **R** is silent when points outside the plot region are *not* plotted.) The degree of error reporting desired.
`family`
The name of a font family for drawing text. The maximum allowed length is 200 bytes. This name gets mapped by each graphics device to a device-specific font description. The default value is `""` which means that the default device fonts will be used (and what those are should be listed on the help page for the device). Standard values are `"serif"`, `"sans"` and `"mono"`, and the [Hershey](../../grdevices/html/hershey) font families are also available. (Devices may define others, and some devices will ignore this setting completely. Names starting with `"Hershey"` are treated specially and should only be used for the built-in Hershey font families.) This can be specified inline for `<text>`.
`fg`
The color to be used for the foreground of plots. This is the default color used for things like axes and boxes around plots. When called from `par()` this also sets parameter `col` to the same value. See section ‘Color Specification’. A few devices have an argument to set the initial value, which is otherwise `"black"`.
`fig`
A numerical vector of the form `c(x1, x2, y1, y2)` which gives the (NDC) coordinates of the figure region in the display region of the device. If you set this, unlike S, you start a new plot, so to add to an existing plot use `new = TRUE` as well.
`fin`
The figure region dimensions, `(width, height)`, in inches. If you set this, unlike S, you start a new plot.
`font`
An integer which specifies which font to use for text. If possible, device drivers arrange so that 1 corresponds to plain text (the default), 2 to bold face, 3 to italic and 4 to bold italic. Also, font 5 is expected to be the symbol font, in Adobe symbol encoding. On some devices font families can be selected by `family` to choose different sets of 5 fonts.
`font.axis`
The font to be used for axis annotation.
`font.lab`
The font to be used for x and y labels.
`font.main`
The font to be used for plot main titles.
`font.sub`
The font to be used for plot sub-titles.
`lab`
A numerical vector of the form `c(x, y, len)` which modifies the default way that axes are annotated. The values of `x` and `y` give the (approximate) number of tickmarks on the x and y axes and `len` specifies the label length. The default is `c(5, 5, 7)`. Note that this only affects the way the parameters `xaxp` and `yaxp` are set when the user coordinate system is set up, and is not consulted when axes are drawn. `len` *is unimplemented* in **R**.
`las`
numeric in {0,1,2,3}; the style of axis labels.
0:
always parallel to the axis [*default*],
1:
always horizontal,
2:
always perpendicular to the axis,
3:
always vertical.
Also supported by `<mtext>`. Note that string/character rotation *via* argument `srt` to `par` does *not* affect the axis labels.
`lend`
The line end style. This can be specified as an integer or string:
`0`
and `"round"` mean rounded line caps [*default*];
`1`
and `"butt"` mean butt line caps;
`2`
and `"square"` mean square line caps.
`lheight`
The line height multiplier. The height of a line of text (used to vertically space multi-line text) is found by multiplying the character height both by the current character expansion and by the line height multiplier. Default value is 1. Used in `<text>` and `[strheight](strwidth)`.
`ljoin`
The line join style. This can be specified as an integer or string:
`0`
and `"round"` mean rounded line joins [*default*];
`1`
and `"mitre"` mean mitred line joins;
`2`
and `"bevel"` mean bevelled line joins.
`lmitre`
The line mitre limit. This controls when mitred line joins are automatically converted into bevelled line joins. The value must be larger than 1 and the default is 10. Not all devices will honour this setting.
`lty`
The line type. Line types can either be specified as an integer (0=blank, 1=solid (default), 2=dashed, 3=dotted, 4=dotdash, 5=longdash, 6=twodash) or as one of the character strings `"blank"`, `"solid"`, `"dashed"`, `"dotted"`, `"dotdash"`, `"longdash"`, or `"twodash"`, where `"blank"` uses ‘invisible lines’ (i.e., does not draw them).
Alternatively, a string of up to 8 characters (from `c(1:9, "A":"F")`) may be given, giving the length of line segments which are alternatively drawn and skipped. See section ‘Line Type Specification’.
Functions such as `<lines>` and `<segments>` accept a vector of values which are recycled.
`lwd`
The line width, a *positive* number, defaulting to `1`. The interpretation is device-specific, and some devices do not implement line widths less than one. (See the help on the device for details of the interpretation.)
Functions such as `<lines>` and `<segments>` accept a vector of values which are recycled: in such uses lines corresponding to values `NA` or `NaN` are omitted. The interpretation of `0` is device-specific.
`mai`
A numerical vector of the form `c(bottom, left, top, right)` which gives the margin size specified in inches.
`mar`
A numerical vector of the form `c(bottom, left, top, right)` which gives the number of lines of margin to be specified on the four sides of the plot. The default is `c(5, 4, 4, 2) + 0.1`.
`mex`
`mex` is a character size expansion factor which is used to describe coordinates in the margins of plots. Note that this does not change the font size, rather specifies the size of font (as a multiple of `csi`) used to convert between `mar` and `mai`, and between `oma` and `omi`.
This starts as `1` when the device is opened, and is reset when the layout is changed (alongside resetting `cex`).
`mfcol, mfrow`
A vector of the form `c(nr, nc)`. Subsequent figures will be drawn in an `nr`-by-`nc` array on the device by *columns* (`mfcol`), or *rows* (`mfrow`), respectively.
In a layout with exactly two rows and columns the base value of `"cex"` is reduced by a factor of 0.83: if there are three or more of either rows or columns, the reduction factor is 0.66.
Setting a layout resets the base value of `cex` and that of `mex` to `1`.
If either of these is queried it will give the current layout, so querying cannot tell you the order in which the array will be filled.
Consider the alternatives, `<layout>` and `[split.screen](screen)`.
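The effect on the base `"cex"` can be sketched as:

```
par(mfrow = c(2, 2)); par("cex")  # reduced (0.83) in a 2-by-2 layout
par(mfrow = c(3, 1)); par("cex")  # reduced further (0.66) with three rows
par(mfrow = c(1, 1)); par("cex")  # back to 1
```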
`mfg`
A numerical vector of the form `c(i, j)` where `i` and `j` indicate which figure in an array of figures is to be drawn next (if setting) or is being drawn (if enquiring). The array must already have been set by `mfcol` or `mfrow`.
For compatibility with S, the form `c(i, j, nr, nc)` is also accepted, when `nr` and `nc` should be the current number of rows and number of columns. Mismatches will be ignored, with a warning.
`mgp`
The margin line (in `mex` units) for the axis title, axis labels and axis line. Note that `mgp[1]` affects `<title>` whereas `mgp[2:3]` affect `<axis>`. The default is `c(3, 1, 0)`.
`mkh`
The height in inches of symbols to be drawn when the value of `pch` is an integer. *Completely ignored in **R***.
`new`
logical, defaulting to `FALSE`. If set to `TRUE`, the next high-level plotting command (actually `[plot.new](frame)`) should *not clean* the frame before drawing *as if it were on a ***new*** device*. It is an error (ignored with a warning) to try to use `new = TRUE` on a device that does not currently contain a high-level plot.
`oma`
A vector of the form `c(bottom, left, top, right)` giving the size of the outer margins in lines of text.
`omd`
A vector of the form `c(x1, x2, y1, y2)` giving the region *inside* outer margins in NDC (= normalized device coordinates), i.e., as a fraction (in *[0, 1]*) of the device region.
`omi`
A vector of the form `c(bottom, left, top, right)` giving the size of the outer margins in inches.
`page`
***R.O.***; A boolean value indicating whether the next call to `[plot.new](frame)` is going to start a new page. This value may be `FALSE` if there are multiple figures on the page.
`pch`
Either an integer specifying a symbol or a single character to be used as the default in plotting points. See `<points>` for possible values and their interpretation. Note that only integers and single-character strings can be set as a graphics parameter (and not `NA` nor `NULL`).
Some functions such as `<points>` accept a vector of values which are recycled.
`pin`
The current plot dimensions, `(width, height)`, in inches.
`plt`
A vector of the form `c(x1, x2, y1, y2)` giving the coordinates of the plot region as fractions of the current figure region.
`ps`
integer; the point size of text (but not symbols). Unlike the `pointsize` argument of most devices, this does not change the relationship between `mar` and `mai` (nor `oma` and `omi`).
What is meant by ‘point size’ is device-specific, but most devices mean a multiple of 1bp, that is 1/72 of an inch.
`pty`
A character specifying the type of plot region to be used; `"s"` generates a square plotting region and `"m"` generates the maximal plotting region.
`smo`
(*Unimplemented*) a value which indicates how smooth circles and circular arcs should be.
`srt`
The string rotation in degrees. See the comment about `crt`. Only supported by `<text>`.
`tck`
The length of tick marks as a fraction of the smaller of the width or height of the plotting region. If `tck >= 0.5` it is interpreted as a fraction of the relevant side, so if `tck = 1` grid lines are drawn. The default setting (`tck = NA`) is to use `tcl = -0.5`.
`tcl`
The length of tick marks as a fraction of the height of a line of text. The default value is `-0.5`; setting `tcl = NA` sets `tck = -0.01`, which is S's default.
`usr`
A vector of the form `c(x1, x2, y1, y2)` giving the extremes of the user coordinates of the plotting region. When a logarithmic scale is in use (i.e., `par("xlog")` is true, see below), then the x-limits will be `10 ^ par("usr")[1:2]`. Similarly for the y-axis.
`xaxp`
A vector of the form `c(x1, x2, n)` giving the coordinates of the extreme tick marks and the number of intervals between tick-marks when `par("xlog")` is false. Otherwise, when *log* coordinates are active, the three values have a different meaning: For a small range, `n` is *negative*, and the ticks are as in the linear case, otherwise, `n` is in `1:3`, specifying a case number, and `x1` and `x2` are the lowest and highest power of 10 inside the user coordinates, `10 ^ par("usr")[1:2]`. (The `"usr"` coordinates are log10-transformed here!)
n = 1
will produce tick marks at *10^j* for integer *j*,
n = 2
gives marks *k 10^j* with *k in {1,5}*,
n = 3
gives marks *k 10^j* with *k in {1,2,5}*.
See `[axTicks](axticks)()` for a pure **R** implementation of this.
This parameter is reset when a user coordinate system is set up, for example by starting a new page or by calling `<plot.window>` or setting `par("usr")`: `n` is taken from `par("lab")`. It affects the default behaviour of subsequent calls to `<axis>` for sides 1 or 3.
It is only relevant to default numeric axis systems, and not for example to dates.
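These log-scale conventions can be inspected directly (a sketch):

```
plot(2:1000, log = "x")
par("xlog")            # TRUE: log x coordinates are active
10^par("usr")[1:2]     # the user x-limits in data units
par("xaxp")            # x1, x2 are powers of 10; n is the case number
axTicks(1)             # the tick locations derived from xaxp
```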
`xaxs`
The style of axis interval calculation to be used for the x-axis. Possible values are `"r"`, `"i"`, `"e"`, `"s"`, `"d"`. The styles are generally controlled by the range of data or `xlim`, if given.
Style `"r"` (regular) first extends the data range by 4 percent at each end and then finds an axis with pretty labels that fits within the extended range.
Style `"i"` (internal) just finds an axis with pretty labels that fits within the original data range.
Style `"s"` (standard) finds an axis with pretty labels within which the original data range fits.
Style `"e"` (extended) is like style `"s"`, except that it is also ensures that there is room for plotting symbols within the bounding box.
Style `"d"` (direct) specifies that the current axis should be used on subsequent plots.
(*Only `"r"` and `"i"` styles have been implemented in **R**.*)
`xaxt`
A character which specifies the x axis type. Specifying `"n"` suppresses plotting of the axis. The standard value is `"s"`: for compatibility with S values `"l"` and `"t"` are accepted but are equivalent to `"s"`: any value other than `"n"` implies plotting.
`xlog`
A logical value (see `log` in `<plot.default>`). If `TRUE`, a logarithmic scale is in use (e.g., after `plot(*, log = "x")`). For a new device, it defaults to `FALSE`, i.e., linear scale.
`xpd`
A logical value or `NA`. If `FALSE`, all plotting is clipped to the plot region, if `TRUE`, all plotting is clipped to the figure region, and if `NA`, all plotting is clipped to the device region. See also `<clip>`.
`yaxp`
A vector of the form `c(y1, y2, n)` giving the coordinates of the extreme tick marks and the number of intervals between tick-marks unless for log coordinates, see `xaxp` above.
`yaxs`
The style of axis interval calculation to be used for the y-axis. See `xaxs` above.
`yaxt`
A character which specifies the y axis type. Specifying `"n"` suppresses plotting.
`ylbias`
A positive real value used in the positioning of text in the margins by `<axis>` and `<mtext>`. The default is in principle device-specific, but currently `0.2` for all of **R**'s own devices. Set this to `0.2` for compatibility with **R** < 2.14.0 on `x11` and `windows()` devices.
`ylog`
A logical value; see `xlog` above.
### Color Specification
Colors can be specified in several different ways. The simplest way is with a character string giving the color name (e.g., `"red"`). A list of the possible colors can be obtained with the function `[colors](../../grdevices/html/colors)`. Alternatively, colors can be specified directly in terms of their RGB components with a string of the form `"#RRGGBB"` where each of the pairs `RR`, `GG`, `BB` consist of two hexadecimal digits giving a value in the range `00` to `FF`. Colors can also be specified by giving an index into a small table of colors, the `[palette](../../grdevices/html/palette)`: indices wrap round so with the default palette of size 8, `10` is the same as `2`. This provides compatibility with S. Index `0` corresponds to the background color. Note that the palette (apart from `0` which is per-device) is a per-session setting.
Negative integer colours are errors.
Additionally, `"transparent"` is *transparent*, useful for filled areas (such as the background!), and just invisible for things like lines or text. In most circumstances (integer) `NA` is equivalent to `"transparent"` (but not for `<text>` and `<mtext>`).
Semi-transparent colors are available for use on devices that support them.
The functions `[rgb](../../grdevices/html/rgb)`, `[hsv](../../grdevices/html/hsv)`, `[hcl](../../grdevices/html/hcl)`, `[gray](../../grdevices/html/gray)` and `[rainbow](../../grdevices/html/palettes)` provide additional ways of generating colors.
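A minimal sketch comparing the specification forms described above (the colours and the plotting call are illustrative only):

```r
## Equivalent ways of asking for red, plus a transparent point:
plot(1:5, pch = 16, cex = 3,
     col = c("red",          # colour name, see colors()
             "#FF0000",      # "#RRGGBB" hexadecimal string
             rgb(1, 0, 0),   # rgb() builds the same hex string
             2,              # palette index (red-ish in the default palette)
             "transparent")) # invisible for a filled symbol
```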
### Line Type Specification
Line types can either be specified by giving an index into a small built-in table of line types (1 = solid, 2 = dashed, etc, see `lty` above) or directly as the lengths of on/off stretches of line. This is done with a string of an even number (up to eight) of characters, namely *non-zero* (hexadecimal) digits which give the lengths in consecutive positions in the string. For example, the string `"33"` specifies three units on followed by three off and `"3313"` specifies three units on followed by three off followed by one on and finally three off. The ‘units’ here are (on most devices) proportional to `lwd`, and with `lwd = 1` are in pixels or points or 1/96 inch.
The five standard dash-dot line types (`lty = 2:6`) correspond to `c("44", "13", "1343", "73", "2262")`.
Note that `NA` is not a valid value for `lty`.
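Since `lty = 2:6` correspond to the strings listed above, an index and its string draw the same dash pattern; a small sketch:

```r
plot(0:1, 0:1, type = "n", axes = FALSE, ann = FALSE)
abline(h = 0.8, lty = 2)       # built-in "dashed"
abline(h = 0.6, lty = "44")    # the same on/off pattern as lty = 2
abline(h = 0.4, lty = 4)       # built-in "dotdash"
abline(h = 0.2, lty = "1343")  # the same pattern as lty = 4
```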
### Note
The effect of restoring all the (settable) graphics parameters as in the examples is hard to predict if the device has been resized. Several of them are attempting to set the same things in different ways, and those last in the alphabet will win. In particular, the settings of `mai`, `mar`, `pin`, `plt` and `pty` interact, as do the outer margin settings, the figure layout and figure region size.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<plot.default>` for some high-level plotting parameters; `[colors](../../grdevices/html/colors)`; `<clip>`; `[options](../../base/html/options)` for other setup parameters; graphic devices `[x11](../../grdevices/html/x11)`, `[postscript](../../grdevices/html/postscript)` and setting up device regions by `<layout>` and `[split.screen](screen)`.
### Examples
```
op <- par(mfrow = c(2, 2), # 2 x 2 pictures on one plot
pty = "s") # square plotting region,
# independent of device size
## At end of plotting, reset to previous settings:
par(op)
## Alternatively,
op <- par(no.readonly = TRUE) # the whole list of settable par's.
## do lots of plotting and par(.) calls, then reset:
par(op)
## Note this is not in general good practice
par("ylog") # FALSE
plot(1 : 12, log = "y")
par("ylog") # TRUE
plot(1:2, xaxs = "i") # 'inner axis' w/o extra space
par(c("usr", "xaxp"))
( nr.prof <-
c(prof.pilots = 16, lawyers = 11, farmers = 10, salesmen = 9, physicians = 9,
mechanics = 6, policemen = 6, managers = 6, engineers = 5, teachers = 4,
housewives = 3, students = 3, armed.forces = 1))
par(las = 3)
barplot(rbind(nr.prof)) # R 0.63.2: shows alignment problem
par(las = 0) # reset to default
require(grDevices) # for gray
## 'fg' use:
plot(1:12, type = "b", main = "'fg' : axes, ticks and box in gray",
fg = gray(0.7), bty = "7" , sub = R.version.string)
ex <- function() {
old.par <- par(no.readonly = TRUE) # all par settings which
# could be changed.
on.exit(par(old.par))
## ...
## ... do lots of par() settings and plots
## ...
invisible() #-- now, par(old.par) will be executed
}
ex()
## Line types
showLty <- function(ltys, xoff = 0, ...) {
stopifnot((n <- length(ltys)) >= 1)
op <- par(mar = rep(.5,4)); on.exit(par(op))
plot(0:1, 0:1, type = "n", axes = FALSE, ann = FALSE)
y <- (n:1)/(n+1)
clty <- as.character(ltys)
mytext <- function(x, y, txt)
text(x, y, txt, adj = c(0, -.3), cex = 0.8, ...)
abline(h = y, lty = ltys, ...); mytext(xoff, y, clty)
y <- y - 1/(3*(n+1))
abline(h = y, lty = ltys, lwd = 2, ...)
mytext(1/8+xoff, y, paste(clty," lwd = 2"))
}
showLty(c("solid", "dashed", "dotted", "dotdash", "longdash", "twodash"))
par(new = TRUE) # the same:
showLty(c("solid", "44", "13", "1343", "73", "2262"), xoff = .2, col = 2)
showLty(c("11", "22", "33", "44", "12", "13", "14", "21", "31"))
```
`locator` Graphical Input
--------------------------
### Description
Reads the position of the graphics cursor when the (first) mouse button is pressed.
### Usage
```
locator(n = 512, type = "n", ...)
```
### Arguments
| | |
| --- | --- |
| `n` | the maximum number of points to locate. Valid values start at 1. |
| `type` | One of `"n"`, `"p"`, `"l"` or `"o"`. If `"p"` or `"o"` the points are plotted; if `"l"` or `"o"` they are joined by lines. |
| `...` | additional graphics parameters used if `type != "n"` for plotting the locations. |
### Details
`locator` is only supported on screen devices such as `X11`, `windows` and `quartz`. On other devices the call will do nothing.
Unless the process is terminated prematurely by the user (see below) at most `n` positions are determined.
For the usual `[X11](../../grdevices/html/x11)` device the identification process is terminated by pressing any mouse button other than the first. For the `[quartz](../../grdevices/html/quartz)` device the process is terminated by pressing the `ESC` key.
The current graphics parameters apply just as if `plot.default` had been called with the same value of `type`. The plotting of the points and lines is subject to clipping, but locations outside the current clipping rectangle will be returned.
On most devices which support `locator`, successful selection of a point is indicated by a bell sound unless `[options](../../base/html/options)(locatorBell = FALSE)` has been set.
If the window is resized or hidden and then exposed before the input process has terminated, any lines or points drawn by `locator` will disappear. These will reappear once the input process has terminated and the window is resized or hidden and exposed again. This is because the points and lines drawn by `locator` are not recorded in the device's display list until the input process has terminated.
### Value
A list containing `x` and `y` components which are the coordinates of the identified points in the user coordinate system, i.e., the one specified by `<par>("usr")`.
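Because `locator` needs user interaction, it can only be sketched here; guarding on `[dev.capabilities](../../grdevices/html/dev.capabilities)` is one way to check that the current device supports it:

```r
plot(cars)
if (interactive() && isTRUE(dev.capabilities()$locator)) {
  ## click up to 4 points; stop early with a non-first mouse
  ## button (X11, windows) or the ESC key (quartz)
  pts <- locator(n = 4, type = "p", pch = 4, col = "red")
  str(pts)  # list with components x and y in user coordinates
}
```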
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<identify>`. `[grid.locator](../../grid/html/grid.locator)` is the corresponding grid package function.
`[dev.capabilities](../../grdevices/html/dev.capabilities)` to see if it is supported.
`zAxis` Generic Function to Add an Axis to a Plot
--------------------------------------------------
### Description
Generic function to add a suitable axis to the current plot.
### Usage
```
Axis(x = NULL, at = NULL, ..., side, labels = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | an object which indicates the range over which an axis should be drawn. |
| `at` | the points at which tick-marks are to be drawn. |
| `side` | an integer specifying which side of the plot the axis is to be drawn on. The axis is placed as follows: 1=below, 2=left, 3=above and 4=right. |
| `labels` | this can either be a logical value specifying whether (numerical) annotations are to be made at the tickmarks, or a character or expression vector of labels to be placed at the tickpoints. If this is specified as a character or expression vector, `at` should be supplied and they should be the same length. |
| `...` | arguments to be passed to methods and perhaps then to `<axis>`. |
### Details
This is a generic function. It works in a slightly non-standard way: if `x` is supplied and non-NULL it dispatches on `x`, otherwise if `at` is supplied and non-NULL it dispatches on `at`, and the default action is to call `<axis>`, omitting argument `x`.
The idea is that for plots for which either or both of the axes are numerical but with a special interpretation, the standard plotting functions (including `<boxplot>`, `<contour>`, `<coplot>`, `<filled.contour>`, `<pairs>`, `<plot.default>`, `<rug>` and `<stripchart>`) will set up user coordinates and `Axis` will be called to label them appropriately.
There are `"Date"` and `"POSIXt"` methods which can pass an argument `format` on to the appropriate `axis` method (see `[axis.POSIXct](axis.posixct)`).
### Value
The numeric locations on the axis scale at which tick marks were drawn when the plot was first drawn (see ‘Details’).
This function is usually invoked for its side effect, which is to add an axis to an already existing plot.
### See Also
`<axis>` (which is eventually called from all `Axis()` methods) in package graphics.
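As an illustration of the dispatch described above, the `"Date"` method can be invoked explicitly to label a suppressed axis (a sketch; the data are made up):

```r
x <- as.Date("2024-01-01") + seq(0, 330, by = 30)
y <- rnorm(length(x))
plot(x, y, xaxt = "n")            # draw the plot without an x axis
Axis(x, side = 1, format = "%b")  # dispatches to the "Date" method
```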
`panel.smooth` Simple Panel Plot
---------------------------------
### Description
An example of a simple useful `panel` function to be used as argument in e.g., `<coplot>` or `<pairs>`.
### Usage
```
panel.smooth(x, y, col = par("col"), bg = NA, pch = par("pch"),
cex = 1, col.smooth = 2, span = 2/3, iter = 3,
...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric vectors of the same length |
| `col, bg, pch, cex` | numeric or character codes for the color(s), point type and size of `<points>`; see also `<par>`. |
| `col.smooth` | color to be used by `lines` for drawing the smooths. |
| `span` | smoothing parameter `f` for `[lowess](../../stats/html/lowess)`, see there. |
| `iter` | number of robustness iterations for `[lowess](../../stats/html/lowess)`. |
| `...` | further arguments to `<lines>`. |
### See Also
`<coplot>` and `<pairs>` where `panel.smooth` is typically used; `[lowess](../../stats/html/lowess)` which does the smoothing.
### Examples
```
pairs(swiss, panel = panel.smooth, pch = ".") # emphasize the smooths
pairs(swiss, panel = panel.smooth, lwd = 2, cex = 1.5, col = 4) # hmm...
```
`assocplot` Association Plots
------------------------------
### Description
Produce a Cohen-Friendly association plot indicating deviations from independence of rows and columns in a 2-dimensional contingency table.
### Usage
```
assocplot(x, col = c("black", "red"), space = 0.3,
main = NULL, xlab = NULL, ylab = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | a two-dimensional contingency table in matrix form. |
| `col` | a character vector of length two giving the colors used for drawing positive and negative Pearson residuals, respectively. |
| `space` | the amount of space (as a fraction of the average rectangle width and height) left between each rectangle. |
| `main` | overall title for the plot. |
| `xlab` | a label for the x axis. Defaults to the name (if any) of the row dimension in `x`. |
| `ylab` | a label for the y axis. Defaults to the name (if any) of the column dimension in `x`. |
### Details
For a two-way contingency table, the signed contribution to Pearson's *chi^2* for cell *i, j* is *d\_{ij} = (f\_{ij} - e\_{ij}) / sqrt(e\_{ij})*, where *f\_{ij}* and *e\_{ij}* are the observed and expected counts corresponding to the cell. In the Cohen-Friendly association plot, each cell is represented by a rectangle that has (signed) height proportional to *d\_{ij}* and width proportional to *sqrt(e\_{ij})*, so that the area of the box is proportional to the difference in observed and expected frequencies. The rectangles in each row are positioned relative to a baseline indicating independence (*d\_{ij} = 0*). If the observed frequency of a cell is greater than the expected one, the box rises above the baseline and is shaded in the color specified by the first element of `col`, which defaults to black; otherwise, the box falls below the baseline and is shaded in the color specified by the second element of `col`, which defaults to red.
A more flexible and extensible implementation of association plots written in the grid graphics system is provided in the function `[assoc](../../vcd/html/assoc)` in the contributed package [vcd](https://CRAN.R-project.org/package=vcd) (Meyer, Zeileis and Hornik, 2006).
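The *d\_{ij}* above are exactly the Pearson residuals returned by `[chisq.test](../../stats/html/chisq.test)`, which can be checked directly (a sketch using the `HairEyeColor` data as in the Examples):

```r
x <- marginSums(HairEyeColor, c(1, 2))  # hair x eye, summed over sex
e <- chisq.test(x)$expected
d <- (x - e) / sqrt(e)                  # the signed heights in assocplot()
stopifnot(all.equal(c(d), c(chisq.test(x)$residuals)))
```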
### References
Cohen, A. (1980), On the graphical display of the significant components in a two-way contingency table. *Communications in Statistics—Theory and Methods*, **9**, 1025–1041. doi: [10.1080/03610928008827940](https://doi.org/10.1080/03610928008827940).
Friendly, M. (1992), Graphical methods for categorical data. *SAS User Group International Conference Proceedings*, **17**, 190–200. <http://datavis.ca/papers/sugi/sugi17.pdf>
Meyer, D., Zeileis, A., and Hornik, K. (2006) The strucplot Framework: Visualizing Multi-Way Contingency Tables with vcd. *Journal of Statistical Software*, **17(3)**, 1–48. doi: [10.18637/jss.v017.i03](https://doi.org/10.18637/jss.v017.i03).
### See Also
`<mosaicplot>`, `[chisq.test](../../stats/html/chisq.test)`.
### Examples
```
## Aggregate over sex:
x <- marginSums(HairEyeColor, c(1, 2))
x
assocplot(x, main = "Relation between hair and eye color")
```
`abline` Add Straight Lines to a Plot
--------------------------------------
### Description
This function adds one or more straight lines through the current plot.
### Usage
```
abline(a = NULL, b = NULL, h = NULL, v = NULL, reg = NULL,
coef = NULL, untf = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `a, b` | the intercept and slope, single values. |
| `untf` | logical asking whether to *untransform*. See ‘Details’. |
| `h` | the y-value(s) for horizontal line(s). |
| `v` | the x-value(s) for vertical line(s). |
| `coef` | a vector of length two giving the intercept and slope. |
| `reg` | an object with a `[coef](../../stats/html/coef)` method. See ‘Details’. |
| `...` | [graphical parameters](par) such as `col`, `lty` and `lwd` (possibly as vectors: see ‘Details’) and `xpd` and the line characteristics `lend`, `ljoin` and `lmitre`. |
### Details
Typical usages are
```
abline(a, b, ...)
abline(h =, ...)
abline(v =, ...)
abline(coef =, ...)
abline(reg =, ...)
```
The first form specifies the line in intercept/slope form (alternatively `a` can be specified on its own and is taken to contain the intercept and slope in vector form).
The `h=` and `v=` forms draw horizontal and vertical lines at the specified coordinates.
The `coef` form specifies the line by a vector containing the intercept and slope.
`reg` is a regression object with a `[coef](../../stats/html/coef)` method. If this returns a vector of length 1 then the value is taken to be the slope of a line through the origin, otherwise, the first 2 values are taken to be the intercept and slope.
If `untf` is true, and one or both axes are log-transformed, then a curve is drawn corresponding to a line in original coordinates, otherwise a line is drawn in the transformed coordinate system. The `h` and `v` parameters always refer to original coordinates.
The [graphical parameters](par) `col`, `lty` and `lwd` can be specified; see `<par>` for details. For the `h=` and `v=` usages they can be vectors of length greater than one, recycled as necessary.
Specifying an `xpd` argument for clipping overrides the global `<par>("xpd")` setting used otherwise.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<lines>` and `<segments>` for connected and arbitrary lines given by their *endpoints*. `<par>`.
### Examples
```
## Setup up coordinate system (with x == y aspect ratio):
plot(c(-2,3), c(-1,5), type = "n", xlab = "x", ylab = "y", asp = 1)
## the x- and y-axis, and an integer grid
abline(h = 0, v = 0, col = "gray60")
text(1,0, "abline( h = 0 )", col = "gray60", adj = c(0, -.1))
abline(h = -1:5, v = -2:3, col = "lightgray", lty = 3)
abline(a = 1, b = 2, col = 2)
text(1,3, "abline( 1, 2 )", col = 2, adj = c(-.1, -.1))
## Simple Regression Lines:
require(stats)
sale5 <- c(6, 4, 9, 7, 6, 12, 8, 10, 9, 13)
plot(sale5)
abline(lsfit(1:10, sale5))
abline(lsfit(1:10, sale5, intercept = FALSE), col = 4) # less fitting
z <- lm(dist ~ speed, data = cars)
plot(cars)
abline(z) # equivalent to abline(reg = z) or
abline(coef = coef(z))
## trivial intercept model
abline(mC <- lm(dist ~ 1, data = cars)) ## the same as
abline(a = coef(mC), b = 0, col = "blue")
```
`stem` Stem-and-Leaf Plots
---------------------------
### Description
`stem` produces a stem-and-leaf plot of the values in `x`. The parameter `scale` can be used to expand the scale of the plot. A value of `scale = 2` will cause the plot to be roughly twice as long as the default.
### Usage
```
stem(x, scale = 1, width = 80, atom = 1e-08)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector. |
| `scale` | This controls the plot length. |
| `width` | The desired width of plot. |
| `atom` | a tolerance. |
### Details
Infinite and missing values in `x` are discarded.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### Examples
```
stem(islands)
stem(log10(islands))
```
`frame` Create / Start a New Plot Frame
----------------------------------------
### Description
This function (`frame` is an alias for `plot.new`) causes the completion of plotting in the current plot (if there is one) and an advance to a new graphics frame. This is used in all high-level plotting functions and also useful for skipping plots when a multi-figure region is in use.
### Usage
```
plot.new()
frame()
```
### Details
The new page is painted with the background colour (`<par>("bg")`), which is often transparent. For devices with a *canvas* colour (the on-screen devices `X11`, `windows` and `quartz`), the window is first painted with the canvas colour and then the background colour.
There are two hooks called `"before.plot.new"` and `"plot.new"` (see `[setHook](../../base/html/userhooks)`) called immediately before and after advancing the frame. The latter is used in the testing code to annotate the new page. The hook function(s) are called with no argument. (If the value is a character string, `get` is called on it from within the graphics namespace.)
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (`frame`.)
### See Also
`<plot.window>`, `<plot.default>`.
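One common use is skipping a cell in a multi-figure layout (a minimal sketch):

```r
op <- par(mfrow = c(2, 2))
plot(1:10)
plot.new()        # advance past the second cell, leaving it blank
plot(rnorm(10))
plot(10:1)
par(op)           # restore the previous layout
```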
`title` Plot Annotation
------------------------
### Description
This function can be used to add labels to a plot. Its first four principal arguments can also be used as arguments in most high-level plotting functions. They must be of type `[character](../../base/html/character)` or `[expression](../../base/html/expression)`. In the latter case, quite a bit of mathematical notation is available, such as sub- and superscripts, Greek letters, fractions, etc.: see [plotmath](../../grdevices/html/plotmath).
### Usage
```
title(main = NULL, sub = NULL, xlab = NULL, ylab = NULL,
line = NA, outer = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `main` | The main title (on top) using font, size (character expansion) and color `par(c("font.main", "cex.main", "col.main"))`. |
| `sub` | Sub-title (at bottom) using font, size and color `par(c("font.sub", "cex.sub", "col.sub"))`. |
| `xlab` | X axis label using font, size and color `par(c("font.lab", "cex.lab", "col.lab"))`. |
| `ylab` | Y axis label, same font attributes as `xlab`. |
| `line` | specifying a value for `line` overrides the default placement of labels, and places them this many lines outwards from the plot edge. |
| `outer` | a logical value. If `TRUE`, the titles are placed in the outer margins of the plot. |
| `...` | further [graphical parameters](par) from `<par>`. Use e.g., `col.main` or `cex.sub` instead of just `col` or `cex`. `adj` controls the justification of the titles. `xpd` can be used to set the clipping region: this defaults to the figure region unless `outer = TRUE`, otherwise the device region and can only be increased. `mgp` controls the default placing of the axis titles. |
### Details
The labels passed to `title` can be character strings or language objects (names, calls or expressions), or a list containing the string to be plotted, and a selection of the optional modifying [graphical parameters](par) `cex=`, `col=` and `font=`. Other objects will be coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`.
The position of `main` defaults to being vertically centered in (outer) margin 3 and justified horizontally according to `par("adj")` on the plot region (device region for `outer = TRUE`).
The positions of `xlab`, `ylab` and `sub` are `line` (default for `xlab` and `ylab` being `par("mgp")[1]` and increased by `1` for `sub`) lines (of height `par("mex")`) into the appropriate margin, justified in the text direction according to `par("adj")` on the plot/device region.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<mtext>`, `<text>`; `[plotmath](../../grdevices/html/plotmath)` for details on mathematical annotation.
### Examples
```
plot(cars, main = "") # here, could use main directly
title(main = "Stopping Distance versus Speed")
plot(cars, main = "")
title(main = list("Stopping Distance versus Speed", cex = 1.5,
col = "red", font = 3))
## Specifying "..." :
plot(1, col.axis = "sky blue", col.lab = "thistle")
title("Main Title", sub = "sub title",
cex.main = 2, font.main= 4, col.main= "blue",
cex.sub = 0.75, font.sub = 3, col.sub = "red")
x <- seq(-4, 4, length.out = 101)
y <- cbind(sin(x), cos(x))
matplot(x, y, type = "l", xaxt = "n",
main = expression(paste(plain(sin) * phi, " and ",
plain(cos) * phi)),
ylab = expression("sin" * phi, "cos" * phi), # only 1st is taken
xlab = expression(paste("Phase Angle ", phi)),
col.main = "blue")
axis(1, at = c(-pi, -pi/2, 0, pi/2, pi),
labels = expression(-pi, -pi/2, 0, pi/2, pi))
abline(h = 0, v = pi/2 * c(-1,1), lty = 2, lwd = .1, col = "gray70")
```
`lines` Add Connected Line Segments to a Plot
----------------------------------------------
### Description
A generic function taking coordinates given in various ways and joining the corresponding points with line segments.
### Usage
```
lines(x, ...)
## Default S3 method:
lines(x, y = NULL, type = "l", ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | coordinate vectors of points to join. |
| `type` | character indicating the type of plotting; actually any of the `type`s as in `<plot.default>`. |
| `...` | Further graphical parameters (see `<par>`) may also be supplied as arguments, particularly, line type, `lty`, line width, `lwd`, color, `col` and for `type = "b"`, `pch`. Also the line characteristics `lend`, `ljoin` and `lmitre`. |
### Details
The coordinates can be passed in a plotting structure (a list with `x` and `y` components), a two-column matrix, a time series, .... See `[xy.coords](../../grdevices/html/xy.coords)`. If supplied separately, they must be of the same length.
The coordinates can contain `NA` values. If a point contains `NA` in either its `x` or `y` value, it is omitted from the plot, and lines are not drawn to or from such points. Thus missing values can be used to achieve breaks in lines.
For `type = "h"`, `col` can be a vector and will be recycled as needed.
`lwd` can be a vector: its first element will apply to lines but the whole vector to symbols (recycled as necessary).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[lines.formula](plot.formula)` for the formula method; `<points>`, particularly for `type %in% c("p","b","o")`, `[plot](plot.default)`, and the workhorse function `<plot.xy>`.
`<abline>` for drawing (single) straight lines.
`<par>` for line type (`lty`) specification and how to specify colors.
### Examples
```
# draw a smooth line through a scatter plot
plot(cars, main = "Stopping Distance versus Speed")
lines(stats::lowess(cars))
```
`layout` Specifying Complex Plot Arrangements
----------------------------------------------
### Description
`layout` divides the device up into as many rows and columns as there are in matrix `mat`, with the column-widths and the row-heights specified in the respective arguments.
### Usage
```
layout(mat, widths = rep.int(1, ncol(mat)),
heights = rep.int(1, nrow(mat)), respect = FALSE)
layout.show(n = 1)
lcm(x)
```
### Arguments
| | |
| --- | --- |
| `mat` | a matrix object specifying the location of the next *N* figures on the output device. Each value in the matrix must be `0` or a positive integer. If *N* is the largest positive integer in the matrix, then the integers *{1, …, N-1}* must also appear at least once in the matrix. |
| `widths` | a vector of values for the widths of columns on the device. Relative widths are specified with numeric values. Absolute widths (in centimetres) are specified with the `lcm()` function (see examples). |
| `heights` | a vector of values for the heights of rows on the device. Relative and absolute heights can be specified, see `widths` above. |
| `respect` | either a logical value or a matrix object. If the latter, then it must have the same dimensions as `mat` and each value in the matrix must be either `0` or `1`. |
| `n` | number of figures to plot. |
| `x` | a dimension to be interpreted as a number of centimetres. |
### Details
Figure *i* is allocated a region composed from a subset of these rows and columns, based on the rows and columns in which *i* occurs in `mat`.
The `respect` argument controls whether a unit column-width is the same physical measurement on the device as a unit row-height.
There is a limit (currently 200) for the numbers of rows and columns in the layout, and also for the total number of cells (10007).
`layout.show(n)` plots (part of) the current layout, namely the outlines of the next `n` figures.
`lcm` is a trivial function, to be used as *the* interface for specifying absolute dimensions for the `widths` and `heights` arguments of `layout()`.
### Value
`layout` returns the number of figures, *N*, see above.
### Warnings
These functions are totally incompatible with the other mechanisms for arranging plots on a device: `<par>(mfrow)`, `par(mfcol)` and `[split.screen](screen)`.
### Author(s)
Paul R. Murrell
### References
Murrell, P. R. (1999). Layouts: A mechanism for arranging plots on a page. *Journal of Computational and Graphical Statistics*, **8**, 121–134. doi: [10.2307/1390924](https://doi.org/10.2307/1390924).
Chapter 5 of Paul Murrell's Ph.D. thesis.
Murrell, P. (2005). *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<par>` with arguments `mfrow`, `mfcol`, or `mfg`.
### Examples
```
def.par <- par(no.readonly = TRUE) # save default, for resetting...
## divide the device into two rows and two columns
## allocate figure 1 all of row 1
## allocate figure 2 the intersection of column 2 and row 2
layout(matrix(c(1,1,0,2), 2, 2, byrow = TRUE))
## show the regions that have been allocated to each plot
layout.show(2)
## divide device into two rows and two columns
## allocate figure 1 and figure 2 as above
## respect relations between widths and heights
nf <- layout(matrix(c(1,1,0,2), 2, 2, byrow = TRUE), respect = TRUE)
layout.show(nf)
## create single figure which is 5cm square
nf <- layout(matrix(1), widths = lcm(5), heights = lcm(5))
layout.show(nf)
##-- Create a scatterplot with marginal histograms -----
x <- pmin(3, pmax(-3, stats::rnorm(50)))
y <- pmin(3, pmax(-3, stats::rnorm(50)))
xhist <- hist(x, breaks = seq(-3,3,0.5), plot = FALSE)
yhist <- hist(y, breaks = seq(-3,3,0.5), plot = FALSE)
top <- max(c(xhist$counts, yhist$counts))
xrange <- c(-3, 3)
yrange <- c(-3, 3)
nf <- layout(matrix(c(2,0,1,3),2,2,byrow = TRUE), c(3,1), c(1,3), TRUE)
layout.show(nf)
par(mar = c(3,3,1,1))
plot(x, y, xlim = xrange, ylim = yrange, xlab = "", ylab = "")
par(mar = c(0,3,1,1))
barplot(xhist$counts, axes = FALSE, ylim = c(0, top), space = 0)
par(mar = c(3,0,1,1))
barplot(yhist$counts, axes = FALSE, xlim = c(0, top), space = 0, horiz = TRUE)
par(def.par) #- reset to default
```
`axis` Add an Axis to a Plot
-----------------------------
### Description
Adds an axis to the current plot, allowing the specification of the side, position, labels, and other options.
### Usage
```
axis(side, at = NULL, labels = TRUE, tick = TRUE, line = NA,
pos = NA, outer = FALSE, font = NA, lty = "solid",
lwd = 1, lwd.ticks = lwd, col = NULL, col.ticks = NULL,
hadj = NA, padj = NA, gap.axis = NA, ...)
```
### Arguments
| | |
| --- | --- |
| `side` | an integer specifying which side of the plot the axis is to be drawn on. The axis is placed as follows: 1=below, 2=left, 3=above and 4=right. |
| `at` | the points at which tick-marks are to be drawn. Non-finite (infinite, `NaN` or `NA`) values are omitted. By default (when `NULL`) tickmark locations are computed, see ‘Details’ below. |
| `labels` | this can either be a logical value specifying whether (numerical) annotations are to be made at the tickmarks, or a character or expression vector of labels to be placed at the tickpoints. (Other objects are coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`.) If this is not logical, `at` should also be supplied and of the same length. If `labels` is of length zero after coercion, it has the same effect as supplying `TRUE`. |
| `tick` | a logical value specifying whether tickmarks and an axis line should be drawn. |
| `line` | the number of lines into the margin at which the axis line will be drawn, if not `NA`. |
| `pos` | the coordinate at which the axis line is to be drawn: if not `NA` this overrides the value of `line`. |
| `outer` | a logical value indicating whether the axis should be drawn in the outer plot margin, rather than the standard plot margin. |
| `font` | font for text. Defaults to `par("font")`. |
| `lty` | line type for both the axis line and the tick marks. |
| `lwd, lwd.ticks` | line widths for the axis line and the tick marks. Zero or negative values will suppress the line or ticks. |
| `col, col.ticks` | colors for the axis line and the tick marks respectively. `col = NULL` means to use `par("fg")`, possibly specified inline, and `col.ticks = NULL` means to use whatever color `col` resolved to. |
| `hadj` | adjustment (see `<par>("adj")`) for all labels *parallel* (‘horizontal’) to the reading direction. If this is not a finite value, the default is used (centring for strings parallel to the axis, justification of the end nearest the axis otherwise). |
| `padj` | adjustment for each tick label *perpendicular* to the reading direction. For labels parallel to the axes, `padj = 0` means right or top alignment, and `padj = 1` means left or bottom alignment. This can be a vector given a value for each string, and will be recycled as necessary. If `padj` is not a finite value (the default), the value of `par("las")` determines the adjustment. For strings plotted perpendicular to the axis the default is to centre the string. |
| `gap.axis` | an optional (typically non-negative) numeric factor to be multiplied with the size of an ‘m’ to determine the minimal gap between labels that are drawn, see ‘Details’. The default, `NA`, corresponds to `1` for tick labels drawn *parallel* to the axis and `0.25` otherwise, i.e., the default is equivalent to
```
perpendicular <- function(side, las) {
is.x <- (side %% 2 == 1) # is horizontal x-axis
( is.x && (las %in% 2:3)) ||
(!is.x && (las %in% 1:2))
}
gap.axis <- if(perpendicular(side, las)) 0.25 else 1
```
`gap.axis` may typically be relevant when `at = ..` tick-mark positions are specified explicitly. |
| `...` | other [graphical parameters](par) may also be passed as arguments to this function, particularly, `cex.axis`, `col.axis` and `font.axis` for axis annotation, i.e. tick labels, `mgp` and `xaxp` or `yaxp` for positioning, `tck` or `tcl` for tick mark length and direction, `las` for vertical/horizontal label orientation, or `fg` instead of `col`, and `xpd` for clipping. See `<par>` on these. Parameters `xaxt` (sides 1 and 3) and `yaxt` (sides 2 and 4) control if the axis is plotted at all. Note that `lab` will partial match to argument `labels` unless the latter is also supplied. (Since the default axes have already been set up by `<plot.window>`, `lab` will not be acted on by `axis`.) |
### Details
The axis line is drawn from the lowest to the highest value of `at`, but will be clipped at the plot region. By default, only ticks which are drawn from points within the plot region (up to a tolerance for rounding error) are plotted, but the ticks and their labels may well extend outside the plot region. Use `xpd = TRUE` or `xpd = NA` to allow axes to extend further.
When `at = NULL`, pretty tick mark locations are computed internally (the same way `[axTicks](axticks)(side)` would) from `<par>("xaxp")` or `"yaxp"` and `<par>("xlog")` (or `"ylog"`). Note that these locations may change if an on-screen plot is resized (for example, if the `plot` argument `asp` (see `<plot.window>`) is set.)
If `labels` is not specified, the numeric values supplied or calculated for `at` are converted to character strings as if they were a numeric vector printed by `[print.default](../../base/html/print.default)(digits = 7)`.
The code tries hard not to draw overlapping tick labels, and so will omit labels where they would abut or overlap previously drawn labels. This can result in, for example, every other tick being labelled. The ticks are drawn left to right or bottom to top, and space at least the size of an ‘m’, multiplied by `gap.axis`, is left between labels. In previous **R** versions, this applied only for labels written *parallel* to the axis direction, hence not for e.g., `las = 2`. Using `gap.axis = -1` restores that (buggy) previous behaviour (in the perpendicular case).
If either `line` or `pos` is set, they (rather than `par("mgp")[3]`) determine the position of the axis line and tick marks, and the tick labels are placed `par("mgp")[2]` further lines into (or towards for `pos`) the margin.
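As a minimal sketch of the difference (illustrative calls, not part of this page's Examples): `line` counts margin lines outward from the plot region, while `pos` is a position in user coordinates.

```
plot(1:10, xaxt = "n")   # suppress the default x-axis
axis(1)                  # default position: par("mgp")[3] = 0 margin lines
axis(1, line = 2)        # same axis, 2 lines further into the margin

plot(1:10, axes = FALSE)
axis(1, pos = 4)         # x-axis drawn at y = 4 in user coordinates
axis(2, pos = 4)         # y-axis drawn at x = 4
```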
Several of the graphics parameters affect the way axes are drawn. The vertical (for sides 1 and 3) positions of the axis and the tick labels are controlled by `mgp[2:3]` and `mex`, the size and direction of the ticks is controlled by `tck` and `tcl` and the appearance of the tick labels by `cex.axis`, `col.axis` and `font.axis` with orientation controlled by `las` (but not `srt`, unlike S which uses `srt` if `at` is supplied and `las` if it is not). Note that `adj` is not supported and labels are always centered. See `<par>` for details.
### Value
The numeric locations on the axis scale at which tick marks were drawn when the plot was first drawn (see ‘Details’).
This function is usually invoked for its side effect, which is to add an axis to an already existing plot.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[Axis](zaxis)` for a generic interface.
`[axTicks](axticks)` returns the axis tick locations corresponding to `at = NULL`; `[pretty](../../base/html/pretty)` is more flexible for computing pretty tick coordinates and does *not* depend on (nor adapt to) the coordinate system in use.
Several graphics parameters affecting the appearance are documented in `<par>`.
### Examples
```
require(stats) # for rnorm
plot(1:4, rnorm(4), axes = FALSE)
axis(1, 1:4, LETTERS[1:4])
axis(2)
box() #- to make it look "as usual"
plot(1:7, rnorm(7), main = "axis() examples",
type = "s", xaxt = "n", frame.plot = FALSE, col = "red")
axis(1, 1:7, LETTERS[1:7], col.axis = "blue")
# unusual options:
axis(4, col = "violet", col.axis = "dark violet", lwd = 2)
axis(3, col = "gold", lty = 2, lwd = 0.5)
# one way to have a custom x axis
plot(1:10, xaxt = "n")
axis(1, xaxp = c(2, 9, 7))
## Changing default gap between labels:
plot(0:100, type="n", axes=FALSE, ann=FALSE)
title(quote("axis(1, .., gap.axis = f)," ~~ f >= 0))
axis(2, at = 5*(0:20), las = 1, gap.axis = 1/4)
gaps <- c(4, 2, 1, 1/2, 1/4, 0.1, 0)
chG <- paste0(ifelse(gaps == 1, "default: ", ""),
"gap.axis=", formatC(gaps))
jj <- seq_along(gaps)
linG <- -2.5*(jj-1)
for(j in jj) {
isD <- gaps[j] == 1 # is default
axis (1, at=5*(0:20), gap.axis = gaps[j], padj=-1, line = linG[j],
col.axis = if(isD) "forest green" else 1, font.axis= 1+isD)
}
mtext(chG, side=1, padj=-1, line = linG -1/2, cex=3/4,
col = ifelse(gaps == 1, "forest green", "blue3"))
## now shrink the window (in x- and y-direction) and observe the axis labels drawn
```
`plot.dataframe` Plot Method for Data Frames
---------------------------------------------
### Description
`plot.data.frame`, a method for the `[plot](plot.default)` generic. It is designed for a quick look at numeric data frames.
### Usage
```
## S3 method for class 'data.frame'
plot(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | object of class `data.frame`. |
| `...` | further arguments to `<stripchart>`, `<plot.default>` or `<pairs>`. |
### Details
This is intended for data frames with *numeric* columns. For more than two columns it first calls `[data.matrix](../../base/html/data.matrix)` to convert the data frame to a numeric matrix and then calls `<pairs>` to produce a scatterplot matrix. This can fail and may well be inappropriate: for example numerical conversion of dates will lose their special meaning and a warning will be given.
For a two-column data frame it plots the second column against the first by the most appropriate method for the first column.
For a single numeric column it uses `<stripchart>`, and for other single-column data frames tries to find a plot method for the single column.
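A small sketch of the three cases described above (assuming a factor first column for the two-column case, so the "most appropriate method" is a box plot):

```
df <- data.frame(g = factor(rep(c("a", "b"), each = 5)), y = rnorm(10))
plot(df)         # two columns: y against g, drawn as boxplots here
plot(df["y"])    # one numeric column: stripchart
plot(data.frame(y = df$y, y2 = df$y^2, y3 = abs(df$y)))  # > 2 columns: pairs()
```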
### See Also
`[data.frame](../../base/html/data.frame)`
### Examples
```
plot(OrchardSprays[1], method = "jitter")
plot(OrchardSprays[c(4,1)])
plot(OrchardSprays)
plot(iris)
plot(iris[5:4])
plot(women)
```
`strwidth` Plotting Dimensions of Character Strings and Math Expressions
-------------------------------------------------------------------------
### Description
These functions compute the width or height, respectively, of the given strings or mathematical expressions `s[i]` on the current plotting device in *user* coordinates, *inches* or as fraction of the figure width `par("fin")`.
### Usage
```
strwidth(s, units = "user", cex = NULL, font = NULL, vfont = NULL, ...)
strheight(s, units = "user", cex = NULL, font = NULL, vfont = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `s` | a character or [expression](../../base/html/expression) vector whose dimensions are to be determined. Other objects are coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`. |
| `units` | character indicating in which units `s` is measured; should be one of `"user"`, `"inches"`, `"figure"`; partial matching is performed. |
| `cex` | numeric **c**haracter **ex**pansion factor; multiplied by `<par>("cex")` yields the final character size; the default `NULL` is equivalent to `1`. |
| `font, vfont, ...` | additional information about the font, possibly including the graphics parameter `"family"`: see `<text>`. |
### Details
Note that the ‘height’ of a string is determined only by the number of linefeeds (`"\n"`) it contains: it is the (number of linefeeds - 1) times the line spacing plus the height of `"M"` in the selected font. For an expression it is the height of the bounding box as computed by [plotmath](../../grdevices/html/plotmath). Thus in both cases it is an estimate of how far **above** the final baseline the typeset object extends. (It may also extend below the baseline.) The inter-line spacing is controlled by `cex`, `<par>("lheight")` and the ‘point size’ (but not the actual font in use).
Measurements in `"user"` units (the default) are only available after `[plot.new](frame)` has been called – otherwise an error is thrown.
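For instance (a sketch; the exact widths depend on the device and font in use):

```
## strwidth("example")        # errors on a fresh device: no user coordinates yet
plot.new()
plot.window(xlim = c(0, 1), ylim = c(0, 1))
strwidth("example")           # now measured in user coordinates
strwidth("example", "inches") # device-absolute measurement
```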
### Value
Numeric vector with the same length as `s`, giving the estimate of width or height for each `s[i]`. `NA` strings are given width and height 0 (as they are not plotted).
### See Also
`<text>`, `[nchar](../../base/html/nchar)`
### Examples
```
str.ex <- c("W","w","I",".","WwI.")
op <- par(pty = "s"); plot(1:100, 1:100, type = "n")
sw <- strwidth(str.ex); sw
all.equal(sum(sw[1:4]), sw[5])
#- since the last string contains the others
sw.i <- strwidth(str.ex, "inches"); 25.4 * sw.i # width in [mm]
unique(sw / sw.i)
# constant factor: 1 value
mean(sw.i / strwidth(str.ex, "fig")) / par('fin')[1] # = 1: are the same
## See how letters fall in classes
## -- depending on graphics device and font!
all.lett <- c(letters, LETTERS)
shL <- strheight(all.lett, units = "inches") * 72 # 'big points'
table(shL) # all have same heights ...
mean(shL)/par("cin")[2] # around 0.6
(swL <- strwidth(all.lett, units = "inches") * 72) # 'big points'
split(all.lett, factor(round(swL, 2)))
sumex <- expression(sum(x[i], i=1,n), e^{i * pi} == -1)
strwidth(sumex)
strheight(sumex)
par(op) #- reset to previous setting
```
r None
`matplot` Plot Columns of Matrices
-----------------------------------
### Description
Plot the columns of one matrix against the columns of another (which often is just a vector treated as 1-column matrix).
### Usage
```
matplot(x, y, type = "p", lty = 1:5, lwd = 1, lend = par("lend"),
pch = NULL,
col = 1:6, cex = NULL, bg = NA,
xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL,
log = "", ..., add = FALSE, verbose = getOption("verbose"))
matpoints(x, y, type = "p", lty = 1:5, lwd = 1, pch = NULL,
col = 1:6, ...)
matlines (x, y, type = "l", lty = 1:5, lwd = 1, pch = NULL,
col = 1:6, ...)
```
### Arguments
| | |
| --- | --- |
| `x,y` | vectors or matrices of data for plotting. The number of rows should match. If one of them is missing, the other is taken as `y` and an `x` vector of `1:n` is used. Missing values (`NA`s) are allowed. Since **R** 4.0.0, `[class](../../base/html/class)(.)`es of `x` and `y` such as `"[Date](../../base/html/dates)"` are typically preserved. |
| `type` | character string (length 1 vector) or vector of 1-character strings indicating the type of plot for each column of `y`, see `[plot](plot.default)` for all possible `type`s. The first character of `type` defines the first plot, the second character the second, etc. Characters in `type` are cycled through; e.g., `"pl"` alternately plots points and lines. |
| `lty,lwd,lend` | vector of line types, widths, and end styles. The first element is for the first column, the second element for the second column, etc., even if lines are not plotted for all columns. Line types will be used cyclically until all plots are drawn. |
| `pch` | character string or vector of 1-characters or integers for plotting characters, see `<points>` for details. The first character is the plotting-character for the first plot, the second for the second, etc. The default is the digits (1 through 9, 0) then the lowercase and uppercase letters. |
| `col` | vector of colors. Colors are used cyclically. |
| `cex` | vector of character expansion sizes, used cyclically. This works as a multiple of `<par>("cex")`. `NULL` is equivalent to `1.0`. |
| `bg` | vector of background (fill) colors for the open plot symbols given by `pch = 21:25` as in `<points>`. The default `NA` corresponds to the one of the underlying function `<plot.xy>`. |
| `xlab, ylab` | titles for x and y axes, as in `[plot](plot.default)`. |
| `xlim, ylim` | ranges of x and y axes, as in `[plot](plot.default)`. |
| `log, ...` | Graphical parameters (see `<par>`) and any further arguments of `plot`, typically `<plot.default>`, may also be supplied as arguments to this function; even `panel.first` etc now work. Hence, the high-level graphics control arguments described under `<par>` and the arguments to `<title>` may be supplied to this function. |
| `add` | logical. If `TRUE`, plots are added to current one, using `<points>` and `<lines>`. |
| `verbose` | logical. If `TRUE`, write one line of what is done. |
### Details
`matplot(x,y, ..)` is basically a wrapper for
1. calling (the generic function) `[plot](plot.default)(x[,1], y[,1], ..)` for the first columns (only if `add = FALSE`).
2. calling (the generic) `<lines>(x[,j], y[,j], ..)` for subsequent columns.
Since **R** 4.0.0, care is taken to keep the `[class](../../base/html/class)(.)` of `x` and `y`, such that the corresponding `plot()` and `lines()` *methods* will be called.
Points involving missing values are not plotted.
The first column of `x` is plotted against the first column of `y`, the second column of `x` against the second column of `y`, etc. If one matrix has fewer columns, plotting will cycle back through the columns again. (In particular, either `x` or `y` may be a vector, against which all columns of the other argument will be plotted.)
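A sketch of the recycling rule, with a single `x` vector reused for every column of `y`:

```
x <- 1:10
y <- cbind(a = x, b = sqrt(x), c = log(x))
matplot(x, y, type = "l")  # the one x column is recycled for all three y columns
```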
The first element of `col, cex, lty, lwd` is used to plot the axes as well as the first line.
Because plotting symbols are drawn with lines and because these functions may be changing the line style, you should probably specify `lty = 1` when using plotting symbols.
### Side Effects
Function `matplot` generates a new plot; `matpoints` and `matlines` add to the current one.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[plot](plot.default)`, `<points>`, `<lines>`, `[matrix](../../base/html/matrix)`, `<par>`.
### Examples
```
require(grDevices)
matplot((-4:5)^2, main = "Quadratic") # almost identical to plot(*)
sines <- outer(1:20, 1:4, function(x, y) sin(x / 20 * pi * y))
matplot(sines, pch = 1:4, type = "o", col = rainbow(ncol(sines)))
matplot(sines, type = "b", pch = 21:23, col = 2:5, bg = 2:5,
main = "matplot(...., pch = 21:23, bg = 2:5)")
x <- 0:50/50
matplot(x, outer(x, 1:8, function(x, k) sin(k*pi * x)),
ylim = c(-2,2), type = "plobcsSh",
main= "matplot(,type = \"plobcsSh\" )")
## pch & type = vector of 1-chars :
matplot(x, outer(x, 1:4, function(x, k) sin(k*pi * x)),
pch = letters[1:4], type = c("b","p","o"))
lends <- c("round","butt","square")
matplot(matrix(1:12, 4), type="c", lty=1, lwd=10, lend=lends)
text(cbind(2.5, 2*c(1,3,5)-.4), lends, col= 1:3, cex = 1.5)
table(iris$Species) # is data.frame with 'Species' factor
iS <- iris$Species == "setosa"
iV <- iris$Species == "versicolor"
op <- par(bg = "bisque")
matplot(c(1, 8), c(0, 4.5), type = "n", xlab = "Length", ylab = "Width",
main = "Petal and Sepal Dimensions in Iris Blossoms")
matpoints(iris[iS,c(1,3)], iris[iS,c(2,4)], pch = "sS", col = c(2,4))
matpoints(iris[iV,c(1,3)], iris[iV,c(2,4)], pch = "vV", col = c(2,4))
legend(1, 4, c(" Setosa Petals", " Setosa Sepals",
"Versicolor Petals", "Versicolor Sepals"),
pch = "sSvV", col = rep(c(2,4), 2))
nam.var <- colnames(iris)[-5]
nam.spec <- as.character(iris[1+50*0:2, "Species"])
iris.S <- array(NA, dim = c(50,4,3),
dimnames = list(NULL, nam.var, nam.spec))
for(i in 1:3) iris.S[,,i] <- data.matrix(iris[1:50+50*(i-1), -5])
matplot(iris.S[, "Petal.Length",], iris.S[, "Petal.Width",], pch = "SCV",
col = rainbow(3, start = 0.8, end = 0.1),
sub = paste(c("S", "C", "V"), dimnames(iris.S)[[3]],
sep = "=", collapse= ", "),
main = "Fisher's Iris Data")
par(op)
## 'x' a "Date" vector :
nd <- length(dv <- seq(as.Date("1959-02-21"), by = "weeks", length.out = 100))
mSC <- cbind(I=1, sin=sin(pi*(1:nd)/8), cos=cos(pi*(1:nd)/8))
matplot(dv, mSC, type = "b", main = "matplot(<Date>, y)")
## 'x' a "POSIXct" date-time vector :
ct <- seq(c(ISOdate(2000,3,20)), by = "15 mins", length.out = 100)
matplot(ct, mSC, type = "b", main = "matplot(<POSIXct>, y)")
## or the same with even more axis flexibility:
matplot(ct, mSC, type = "b", main = "matplot(<POSIXct>, y)", xaxt="n")
Axis(ct, side=1, at = ct[1+4*(0:24)])
## Also works for data frame columns:
matplot(iris[1:50,1:4])
```
`axis.POSIXct` Date and Date-time Plotting Functions
-----------------------------------------------------
### Description
Functions to plot objects of classes `"POSIXlt"`, `"POSIXct"` and `"Date"` representing calendar dates and times.
### Usage
```
axis.POSIXct(side, x, at, format, labels = TRUE, ...)
axis.Date(side, x, at, format, labels = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x, at` | A date-time or date object. |
| `side` | See `<axis>`. |
| `format` | See `[strptime](../../base/html/strptime)`. |
| `labels` | Either a logical value specifying whether annotations are to be made at the tickmarks, or a vector of character strings to be placed at the tickpoints. |
| `...` | Further arguments to be passed from or to other methods, typically [graphical parameters](par). |
### Details
`axis.POSIXct` and `axis.Date` work quite hard to choose suitable time units (years, months, days, hours, minutes or seconds) and a sensible output format, but this can be overridden by supplying a `format` specification.
If `at` is supplied it specifies the locations of the ticks and labels whereas if `x` is specified a suitable grid of labels is chosen. Printing of tick labels can be suppressed by using `labels = FALSE`.
The date-times for a `"POSIXct"` input are interpreted in the time zone given by the `"tzone"` attribute if there is one, otherwise the current time zone.
The way the date-times are rendered (especially month names) is controlled by the locale setting of category `"LC_TIME"` (see `[Sys.setlocale](../../base/html/locales)`).
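For example, to force month labels regardless of what the heuristics would choose (a sketch; the rendered names depend on the `"LC_TIME"` locale):

```
x <- seq(as.POSIXct("2024-01-01", tz = "UTC"), by = "month", length.out = 12)
plot(x, seq_along(x), xaxt = "n")
axis.POSIXct(1, x, format = "%b")  # abbreviated month names, locale-dependent
```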
### Value
The locations on the axis scale at which tick marks were drawn.
### See Also
[DateTimeClasses](../../base/html/datetimeclasses), [Dates](../../base/html/dates) for details of the classes.
`[Axis](zaxis)`.
### Examples
```
with(beaver1, {
time <- strptime(paste(1990, day, time %/% 100, time %% 100),
"%Y %j %H %M")
plot(time, temp, type = "l") # axis at 4-hour intervals.
# now label every hour on the time axis
plot(time, temp, type = "l", xaxt = "n")
r <- as.POSIXct(round(range(time), "hours"))
axis.POSIXct(1, at = seq(r[1], r[2], by = "hour"), format = "%H")
})
plot(.leap.seconds, seq_along(.leap.seconds), type = "n", yaxt = "n",
xlab = "leap seconds", ylab = "", bty = "n")
rug(.leap.seconds)
## or as dates
lps <- as.Date(.leap.seconds)
plot(lps, seq_along(.leap.seconds),
type = "n", yaxt = "n", xlab = "leap seconds",
ylab = "", bty = "n")
rug(lps)
## 100 random dates in a 10-week period
random.dates <- as.Date("2001/1/1") + 70*sort(stats::runif(100))
plot(random.dates, 1:100)
# or for a better axis labelling
plot(random.dates, 1:100, xaxt = "n")
axis.Date(1, at = seq(as.Date("2001/1/1"), max(random.dates)+6, "weeks"))
axis.Date(1, at = seq(as.Date("2001/1/1"), max(random.dates)+6, "days"),
labels = FALSE, tcl = -0.2)
```
`grid` Add Grid to a Plot
--------------------------
### Description
`grid` adds an `nx` by `ny` rectangular grid to an existing plot.
### Usage
```
grid(nx = NULL, ny = nx, col = "lightgray", lty = "dotted",
lwd = par("lwd"), equilogs = TRUE)
```
### Arguments
| | |
| --- | --- |
| `nx, ny` | number of cells of the grid in x and y direction. When `NULL`, as per default, the grid aligns with the tick marks on the corresponding *default* axis (i.e., tickmarks as computed by `[axTicks](axticks)`). When `[NA](../../base/html/na)`, no grid lines are drawn in the corresponding direction. |
| `col` | character or (integer) numeric; color of the grid lines. |
| `lty` | character or (integer) numeric; line type of the grid lines. |
| `lwd` | non-negative numeric giving line width of the grid lines. |
| `equilogs` | logical, only used when *log* coordinates and alignment with the axis tick marks are active. Setting `equilogs = FALSE` in that case gives *non-equidistant* tick-aligned grid lines. |
### Note
If more fine tuning is required, use `<abline>(h = ., v = .)` directly.
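A sketch of that approach, placing grid lines at arbitrary positions rather than at the default tick marks:

```
plot(1:10)
abline(h = seq(2, 8, by = 3), v = c(2.5, 7.5),
       col = "lightgray", lty = "dotted", lwd = par("lwd"))
```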
### References
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`[plot](plot.default)`, `<abline>`, `<lines>`, `<points>`.
### Examples
```
plot(1:3)
grid(NA, 5, lwd = 2) # grid only in y-direction
## maybe change the desired number of tick marks: par(lab = c(mx, my, 7))
op <- par(mfcol = 1:2)
with(iris,
{
plot(Sepal.Length, Sepal.Width, col = as.integer(Species),
xlim = c(4, 8), ylim = c(2, 4.5), panel.first = grid(),
main = "with(iris, plot(...., panel.first = grid(), ..) )")
plot(Sepal.Length, Sepal.Width, col = as.integer(Species),
panel.first = grid(3, lty = 1, lwd = 2),
main = "... panel.first = grid(3, lty = 1, lwd = 2), ..")
}
)
par(op)
```
`barplot` Bar Plots
--------------------
### Description
Creates a bar plot with vertical or horizontal bars.
### Usage
```
barplot(height, ...)
## Default S3 method:
barplot(height, width = 1, space = NULL,
names.arg = NULL, legend.text = NULL, beside = FALSE,
horiz = FALSE, density = NULL, angle = 45,
col = NULL, border = par("fg"),
main = NULL, sub = NULL, xlab = NULL, ylab = NULL,
xlim = NULL, ylim = NULL, xpd = TRUE, log = "",
axes = TRUE, axisnames = TRUE,
cex.axis = par("cex.axis"), cex.names = par("cex.axis"),
inside = TRUE, plot = TRUE, axis.lty = 0, offset = 0,
add = FALSE, ann = !add && par("ann"), args.legend = NULL, ...)
## S3 method for class 'formula'
barplot(formula, data, subset, na.action,
horiz = FALSE, xlab = NULL, ylab = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `height` | either a vector or matrix of values describing the bars which make up the plot. If `height` is a vector, the plot consists of a sequence of rectangular bars with heights given by the values in the vector. If `height` is a matrix and `beside` is `FALSE` then each bar of the plot corresponds to a column of `height`, with the values in the column giving the heights of stacked sub-bars making up the bar. If `height` is a matrix and `beside` is `TRUE`, then the values in each column are juxtaposed rather than stacked. |
| `width` | optional vector of bar widths. Recycled to the number of bars drawn. Specifying a single value will have no visible effect unless `xlim` is specified. |
| `space` | the amount of space (as a fraction of the average bar width) left before each bar. May be given as a single number or one number per bar. If `height` is a matrix and `beside` is `TRUE`, `space` may be specified by two numbers, where the first is the space between bars in the same group, and the second the space between the groups. If not given explicitly, it defaults to `c(0,1)` if `height` is a matrix and `beside` is `TRUE`, and to 0.2 otherwise. |
| `names.arg` | a vector of names to be plotted below each bar or group of bars. If this argument is omitted, then the names are taken from the `names` attribute of `height` if this is a vector, or the column names if it is a matrix. |
| `legend.text` | a vector of text used to construct a legend for the plot, or a logical indicating whether a legend should be included. This is only useful when `height` is a matrix. In that case given legend labels should correspond to the rows of `height`; if `legend.text` is true, the row names of `height` will be used as labels if they are non-null. |
| `beside` | a logical value. If `FALSE`, the columns of `height` are portrayed as stacked bars, and if `TRUE` the columns are portrayed as juxtaposed bars. |
| `horiz` | a logical value. If `FALSE`, the bars are drawn vertically with the first bar to the left. If `TRUE`, the bars are drawn horizontally with the first at the bottom. |
| `density` | a vector giving the density of shading lines, in lines per inch, for the bars or bar components. The default value of `NULL` means that no shading lines are drawn. Non-positive values of `density` also inhibit the drawing of shading lines. |
| `angle` | the slope of shading lines, given as an angle in degrees (counter-clockwise), for the bars or bar components. |
| `col` | a vector of colors for the bars or bar components. By default, grey is used if `height` is a vector, and a gamma-corrected grey palette if `height` is a matrix. |
| `border` | the color to be used for the border of the bars. Use `border = NA` to omit borders. If there are shading lines, `border = TRUE` means use the same colour for the border as for the shading lines. |
| `main,sub` | overall and sub title for the plot. |
| `xlab` | a label for the x axis. |
| `ylab` | a label for the y axis. |
| `xlim` | limits for the x axis. |
| `ylim` | limits for the y axis. |
| `xpd` | logical. Should bars be allowed to go outside region? |
| `log` | string specifying if axis scales should be logarithmic; see `<plot.default>`. |
| `axes` | logical. If `TRUE`, a vertical (or horizontal, if `horiz` is true) axis is drawn. |
| `axisnames` | logical. If `TRUE`, and if there are `names.arg` (see above), the other axis is drawn (with `lty = 0`) and labeled. |
| `cex.axis` | expansion factor for numeric axis labels (see `<par>('cex')`). |
| `cex.names` | expansion factor for axis names (bar labels). |
| `inside` | logical. If `TRUE`, the lines which divide adjacent (non-stacked!) bars will be drawn. Only applies when `space = 0` (which it partly is when `beside = TRUE`). |
| `plot` | logical. If `FALSE`, nothing is plotted. |
| `axis.lty` | the graphics parameter `lty` (see `<par>('lty')`) applied to the axis and tick marks of the categorical (default horizontal) axis. Note that by default the axis is suppressed. |
| `offset` | a vector indicating how much the bars should be shifted relative to the x axis. |
| `add` | logical specifying if bars should be added to an already existing plot; defaults to `FALSE`. |
| `ann` | logical specifying if the default annotation (`main`, `sub`, `xlab`, `ylab`) should appear on the plot, see `<title>`. |
| `args.legend` | list of additional arguments to pass to `<legend>()`; names of the list are used as argument names. Only used if `legend.text` is supplied. |
| `formula` | a formula where the `y` variables are numeric data to plot against the categorical `x` variables. The formula can have one of three forms:
```
y ~ x
y ~ x1 + x2
cbind(y1, y2) ~ x
```
(see the examples). |
| `data` | a data frame (or list) from which the variables in formula should be taken. |
| `subset` | an optional vector specifying a subset of observations to be used. |
| `na.action` | a function which indicates what should happen when the data contain `[NA](../../base/html/na)` values. The default is to ignore missing values in the given variables. |
| `...` | arguments to be passed to/from other methods. For the default method these can include further arguments (such as `axes`, `asp` and `main`) and [graphical parameters](par) (see `<par>`) which are passed to `<plot.window>()`, `<title>()` and `<axis>`. |
### Value
A numeric vector (or matrix, when `beside = TRUE`), say `mp`, giving the coordinates of *all* the bar midpoints drawn, useful for adding to the graph.
If `beside` is true, use `colMeans(mp)` for the midpoints of each *group* of bars, see example.
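As a sketch of using the return value (an illustration, separate from this page's Examples):

```
mp <- barplot(VADeaths, beside = TRUE)  # mp is a matrix of bar midpoints
## one label per group of bars, centred under each group:
mtext(colnames(VADeaths), side = 1, line = 2, at = colMeans(mp))
```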
### Author(s)
R Core, with a contribution by Arni Magnusson.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`[plot](plot.default)(..., type = "h")`, `<dotchart>`; `<hist>` for bars of a *continuous* variable. `<mosaicplot>()`, more sophisticated to visualize *several* categorical variables.
### Examples
```
# Formula method
barplot(GNP ~ Year, data = longley)
barplot(cbind(Employed, Unemployed) ~ Year, data = longley)
## 3rd form of formula - 2 categories :
op <- par(mfrow = 2:1, mgp = c(3,1,0)/2, mar = .1+c(3,3:1))
summary(d.Titanic <- as.data.frame(Titanic))
barplot(Freq ~ Class + Survived, data = d.Titanic,
subset = Age == "Adult" & Sex == "Male",
main = "barplot(Freq ~ Class + Survived, *)", ylab = "# {passengers}", legend.text = TRUE)
# Corresponding table :
(xt <- xtabs(Freq ~ Survived + Class + Sex, d.Titanic, subset = Age=="Adult"))
# Alternatively, a mosaic plot :
mosaicplot(xt[,,"Male"], main = "mosaicplot(Freq ~ Class + Survived, *)", color=TRUE)
par(op)
# Default method
require(grDevices) # for colours
tN <- table(Ni <- stats::rpois(100, lambda = 5))
r <- barplot(tN, col = rainbow(20))
#- type = "h" plotting *is* 'bar'plot
lines(r, tN, type = "h", col = "red", lwd = 2)
barplot(tN, space = 1.5, axisnames = FALSE,
sub = "barplot(..., space= 1.5, axisnames = FALSE)")
barplot(VADeaths, plot = FALSE)
barplot(VADeaths, plot = FALSE, beside = TRUE)
mp <- barplot(VADeaths) # default
tot <- colMeans(VADeaths)
text(mp, tot + 3, format(tot), xpd = TRUE, col = "blue")
barplot(VADeaths, beside = TRUE,
col = c("lightblue", "mistyrose", "lightcyan",
"lavender", "cornsilk"),
legend.text = rownames(VADeaths), ylim = c(0, 100))
title(main = "Death Rates in Virginia", font.main = 4)
hh <- t(VADeaths)[, 5:1]
mybarcol <- "gray20"
mp <- barplot(hh, beside = TRUE,
col = c("lightblue", "mistyrose",
"lightcyan", "lavender"),
legend.text = colnames(VADeaths), ylim = c(0,100),
main = "Death Rates in Virginia", font.main = 4,
sub = "Faked upper 2*sigma error bars", col.sub = mybarcol,
cex.names = 1.5)
segments(mp, hh, mp, hh + 2*sqrt(1000*hh/100), col = mybarcol, lwd = 1.5)
stopifnot(dim(mp) == dim(hh)) # corresponding matrices
mtext(side = 1, at = colMeans(mp), line = -2,
text = paste("Mean", formatC(colMeans(hh))), col = "red")
# Bar shading example
barplot(VADeaths, angle = 15+10*1:5, density = 20, col = "black",
legend.text = rownames(VADeaths))
title(main = list("Death Rates in Virginia", font = 4))
# Border color
barplot(VADeaths, border = "dark blue")
# Log scales (not much sense here)
barplot(tN, col = heat.colors(12), log = "y")
barplot(tN, col = gray.colors(20), log = "xy")
# Legend location
barplot(height = cbind(x = c(465, 91) / 465 * 100,
y = c(840, 200) / 840 * 100,
z = c(37, 17) / 37 * 100),
beside = FALSE,
width = c(465, 840, 37),
col = c(1, 2),
legend.text = c("A", "B"),
args.legend = list(x = "topleft"))
```
`sunflowerplot` Produce a Sunflower Scatter Plot
-------------------------------------------------
### Description
Multiple points are plotted as ‘sunflowers’ with multiple leaves (‘petals’) such that overplotting is visualized instead of accidental and invisible.
### Usage
```
sunflowerplot(x, ...)
## Default S3 method:
sunflowerplot(x, y = NULL, number, log = "", digits = 6,
xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL,
add = FALSE, rotate = FALSE,
pch = 16, cex = 0.8, cex.fact = 1.5,
col = par("col"), bg = NA, size = 1/8, seg.col = 2,
seg.lwd = 1.5, ...)
## S3 method for class 'formula'
sunflowerplot(formula, data = NULL, xlab = NULL, ylab = NULL, ...,
subset, na.action = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector of `x`-coordinates of length `n`, say, or another valid plotting structure, as for `<plot.default>`, see also `[xy.coords](../../grdevices/html/xy.coords)`. |
| `y` | numeric vector of `y`-coordinates of length `n`. |
| `number` | integer vector of length `n`. `number[i]` = number of replicates for `(x[i], y[i])`, may be 0. Default (`missing(number)`): compute the exact multiplicity of the points `x[], y[]`, via `[xyTable](../../grdevices/html/xytable)()`. |
| `log` | character indicating log coordinate scale, see `<plot.default>`. |
| `digits` | when `number` is computed (i.e., not specified), `x` and `y` are rounded to `digits` significant digits before multiplicities are computed. |
| `xlab, ylab` | character label for x-, or y-axis, respectively. |
| `xlim, ylim` | `numeric(2)` limiting the extents of the x-, or y-axis. |
| `add` | logical; should the plot be added on a previous one ? Default is `FALSE`. |
| `rotate` | logical; if `TRUE`, randomly rotate the sunflowers (preventing artefacts). |
| `pch` | plotting character to be used for points (`number[i]==1`) and center of sunflowers. |
| `cex` | numeric; character size expansion of center points (s. `pch`). |
| `cex.fact` | numeric *shrinking* factor to be used for the center points *when there are flower leaves*, i.e., `cex / cex.fact` is used for these. |
| `col, bg` | colors for the plot symbols, passed to `<plot.default>`. |
| `size` | of sunflower leaves in inches, 1[in] := 2.54[cm]. Default: 1/8\", approximately 3.2mm. |
| `seg.col` | color to be used for the **seg**ments which make the sunflowers leaves, see `<par>(col=)`; `col = "gold"` reminds of real sunflowers. |
| `seg.lwd` | numeric; the line width for the leaves' segments. |
| `...` | further arguments to `[plot](plot.default)` [if `add = FALSE`], or to be passed to or from another method. |
| `formula` | a `[formula](../../stats/html/formula)`, such as `y ~ x`. |
| `data` | a data.frame (or list) from which the variables in `formula` should be taken. |
| `subset` | an optional vector specifying a subset of observations to be used in the fitting process. |
| `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is to ignore cases with missing values. |
### Details
This is a generic function with default and formula methods.
For `number[i] == 1`, a (slightly enlarged) usual plotting symbol (`pch`) is drawn. For `number[i] > 1`, a small plotting symbol is drawn and `number[i]` equi-angular ‘rays’ emanate from it.
If `rotate = TRUE` and `number[i] >= 2`, a random direction is chosen (instead of the y-axis) for the first ray. The goal is to `[jitter](../../base/html/jitter)` the orientations of the sunflowers in order to prevent artefactual visual impressions.
### Value
A list with three components of same length,
| | |
| --- | --- |
| `x` | x coordinates |
| `y` | y coordinates |
| `number` | number |
Use `[xyTable](../../grdevices/html/xytable)()` (from package grDevices) if you are only interested in this return value.
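For instance, a small sketch (using the `iris` data shipped with **R**) showing that, with default arguments, the returned multiplicities agree with `xyTable()`:

```
## sketch: the (invisible) return value of sunflowerplot()
## carries the same multiplicities as xyTable()
sf <- sunflowerplot(iris[, 3:4])
xy <- grDevices::xyTable(iris[, 3:4])
stopifnot(identical(sf$number, xy$number))
```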
### Side Effects
A scatter plot is drawn with ‘sunflowers’ as symbols.
### Author(s)
Andreas Ruckstuhl, Werner Stahel, Martin Maechler, Tim Hesterberg, 1989–1993. Port to **R** by Martin Maechler [[email protected]](mailto:[email protected]).
### References
Chambers, J. M., Cleveland, W. S., Kleiner, B. and Tukey, P. A. (1983). *Graphical Methods for Data Analysis*. Wadsworth.
Schilling, M. F. and Watkins, A. E. (1994). A suggestion for sunflower plots. *The American Statistician*, **48**, 303–305. doi: [10.2307/2684839](https://doi.org/10.2307/2684839).
Murrell, P. (2005). *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`[density](../../stats/html/density)`, `[xyTable](../../grdevices/html/xytable)`
### Examples
```
require(stats) # for rnorm
require(grDevices)
## 'number' is computed automatically:
sunflowerplot(iris[, 3:4])
## Imitating Chambers et al, p.109, closely:
sunflowerplot(iris[, 3:4], cex = .2, cex.fact = 1, size = .035, seg.lwd = .8)
## or
sunflowerplot(Petal.Width ~ Petal.Length, data = iris,
cex = .2, cex.fact = 1, size = .035, seg.lwd = .8)
sunflowerplot(x = sort(2*round(rnorm(100))), y = round(rnorm(100), 0),
main = "Sunflower Plot of Rounded N(0,1)")
## Similarly using a "xyTable" argument:
xyT <- xyTable(x = sort(2*round(rnorm(100))), y = round(rnorm(100), 0),
digits = 3)
utils::str(xyT, vec.len = 20)
sunflowerplot(xyT, main = "2nd Sunflower Plot of Rounded N(0,1)")
## A 'marked point process' {explicit 'number' argument}:
sunflowerplot(rnorm(100), rnorm(100), number = rpois(n = 100, lambda = 2),
main = "Sunflower plot (marked point process)",
rotate = TRUE, col = "blue4")
```
r None
`rasterImage` Draw One or More Raster Images
---------------------------------------------
### Description
`rasterImage` draws a raster image at the given locations and sizes.
### Usage
```
rasterImage(image,
xleft, ybottom, xright, ytop,
angle = 0, interpolate = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `image` | a `raster` object, or an object that can be coerced to one by `[as.raster](../../grdevices/html/as.raster)`. |
| `xleft` | a vector (or scalar) of left x positions. |
| `ybottom` | a vector (or scalar) of bottom y positions. |
| `xright` | a vector (or scalar) of right x positions. |
| `ytop` | a vector (or scalar) of top y positions. |
| `angle` | angle of rotation (in degrees, anti-clockwise from positive x-axis, about the bottom-left corner). |
| `interpolate` | a logical vector (or scalar) indicating whether to apply linear interpolation to the image when drawing. |
| `...` | [graphical parameters](par). |
### Details
The positions supplied, i.e., `xleft, ...`, are relative to the current plotting region. If the x-axis goes from 100 to 200 then `xleft` should be larger than 100 and `xright` should be less than 200. The position vectors will be recycled to the length of the longest.
Plotting raster images is not supported on all devices and may have limitations where supported; for example, `postscript` and `X11(type = "Xlib")` are restricted to opaque colors. Problems with the rendering of raster images have been reported by users of `windows()` devices under Remote Desktop, at least under its default settings.
You should not expect a raster image to be re-sized when an on-screen device is re-sized: whether it is resized is device-dependent.
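The recycling of the position vectors described above means one call can place several copies of an image; a minimal sketch:

```
## sketch: position vectors are recycled to the length of the longest,
## so one call draws the same 2x2 raster at three locations
img <- as.raster(matrix(c(0, 1, 1, 0), nrow = 2))
plot(c(0, 10), c(0, 10), type = "n", xlab = "", ylab = "")
rasterImage(img,
            xleft   = c(1, 4, 7),  # three left edges ...
            ybottom = 1,           # ... ybottom/ytop recycled
            xright  = c(3, 6, 9),
            ytop    = 3,
            interpolate = FALSE)
```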
### See Also
`<rect>`, `<polygon>`, and `<segments>` and others for flexible ways to draw shapes.
`[dev.capabilities](../../grdevices/html/dev.capabilities)` to see if it is supported.
### Examples
```
require(grDevices)
## set up the plot region:
op <- par(bg = "thistle")
plot(c(100, 250), c(300, 450), type = "n", xlab = "", ylab = "")
image <- as.raster(matrix(0:1, ncol = 5, nrow = 3))
rasterImage(image, 100, 300, 150, 350, interpolate = FALSE)
rasterImage(image, 100, 400, 150, 450)
rasterImage(image, 200, 300, 200 + xinch(.5), 300 + yinch(.3),
interpolate = FALSE)
rasterImage(image, 200, 400, 250, 450, angle = 15, interpolate = FALSE)
par(op)
```
r None
`filled.contour` Level (Contour) Plots
---------------------------------------
### Description
This function produces a contour plot with the areas between the contours filled in solid color (Cleveland calls this a level plot). A key showing how the colors map to z values is shown to the right of the plot.
### Usage
```
filled.contour(x = seq(0, 1, length.out = nrow(z)),
y = seq(0, 1, length.out = ncol(z)),
z,
xlim = range(x, finite = TRUE),
ylim = range(y, finite = TRUE),
zlim = range(z, finite = TRUE),
levels = pretty(zlim, nlevels), nlevels = 20,
color.palette = function(n) hcl.colors(n, "YlOrRd", rev = TRUE),
col = color.palette(length(levels) - 1),
plot.title, plot.axes, key.title, key.axes,
asp = NA, xaxs = "i", yaxs = "i", las = 1,
axes = TRUE, frame.plot = axes, ...)
.filled.contour(x, y, z, levels, col)
```
### Arguments
| | |
| --- | --- |
| `x, y` | locations of grid lines at which the values in `z` are measured. These must be in ascending order. (The rest of this description does not apply to `.filled.contour`.) By default, equally spaced values from 0 to 1 are used. If `x` is a `list`, its components `x$x` and `x$y` are used for `x` and `y`, respectively. If the list has component `z` this is used for `z`. |
| `z` | a numeric matrix containing the values to be plotted. Note that `x` can be used instead of `z` for convenience. |
| `xlim` | x limits for the plot. |
| `ylim` | y limits for the plot. |
| `zlim` | z limits for the plot. |
| `levels` | a set of levels which are used to partition the range of `z`. Must be **strictly** increasing (and finite). Areas with `z` values between consecutive levels are painted with the same color. |
| `nlevels` | if `levels` is not specified, the range of `z` values is divided into approximately this many levels. |
| `color.palette` | a color palette function to be used to assign colors in the plot. |
| `col` | an explicit set of colors to be used in the plot. This argument overrides any palette function specification. There should be one fewer color than levels. |
| `plot.title` | statements which add titles to the main plot. |
| `plot.axes` | statements which draw axes (and a `<box>`) on the main plot. This overrides the default axes. |
| `key.title` | statements which add titles for the plot key. |
| `key.axes` | statements which draw axes on the plot key. This overrides the default axis. |
| `asp` | the *y/x* aspect ratio, see `<plot.window>`. |
| `xaxs` | the x axis style. The default is to use internal labeling. |
| `yaxs` | the y axis style. The default is to use internal labeling. |
| `las` | the style of labeling to be used. The default is to use horizontal labeling. |
| `axes, frame.plot` | logicals indicating if axes and a box should be drawn, as in `<plot.default>`. |
| `...` | additional [graphical parameters](par), currently only passed to `<title>()`. |
### Details
The values to be plotted can contain `NA`s. Rectangles for which two or more corner values are `NA` are omitted entirely; where there is a single `NA` value, the triangle opposite the `NA` is omitted.
Values to be plotted can be infinite: the effect is similar to that described for `NA` values.
`.filled.contour` is a ‘bare bones’ interface to add just the contour plot to an already-set-up plot region. It is intended for programmatic use, and the programmer is responsible for checking the conditions on the arguments.
### Note
`filled.contour` uses the `<layout>` function and so is restricted to a full page display.
The output produced by `filled.contour` is actually a combination of two plots; one is the filled contour and one is the legend. Two separate coordinate systems are set up for these two plots, but they are only used internally – once the function has returned these coordinate systems are lost. If you want to annotate the main contour plot, for example to add points, you can specify graphics commands in the `plot.axes` argument. See the examples.
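A minimal sketch of the point made above: annotations must go through the `plot.axes` argument, because the two internal coordinate systems are lost once `filled.contour()` returns.

```
## sketch: annotate the main panel via plot.axes
z <- outer(1:10, 1:10, "+")
filled.contour(z,
               plot.axes = { axis(1); axis(2)
                             points(0.5, 0.5, pch = 19) })
## calling points(0.5, 0.5) *after* filled.contour() returns would
## use the leftover (key) coordinate system, not the main plot's.
```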
### Author(s)
Ross Ihaka and R Core Team
### References
Cleveland, W. S. (1993) *Visualizing Data*. Summit, New Jersey: Hobart.
### See Also
`<contour>`, `<image>`, `[hcl.colors](../../grdevices/html/palettes)`, `[gray.colors](../../grdevices/html/gray.colors)`, `[palette](../../grdevices/html/palette)`; `[contourplot](../../lattice/html/levelplot)` and `[levelplot](../../lattice/html/levelplot)` from package [lattice](https://CRAN.R-project.org/package=lattice).
### Examples
```
require("grDevices") # for colours
filled.contour(volcano, asp = 1) # simple
x <- 10*1:nrow(volcano)
y <- 10*1:ncol(volcano)
filled.contour(x, y, volcano,
color.palette = function(n) hcl.colors(n, "terrain"),
plot.title = title(main = "The Topography of Maunga Whau",
xlab = "Meters North", ylab = "Meters West"),
plot.axes = { axis(1, seq(100, 800, by = 100))
axis(2, seq(100, 600, by = 100)) },
key.title = title(main = "Height\n(meters)"),
key.axes = axis(4, seq(90, 190, by = 10))) # maybe also asp = 1
mtext(paste("filled.contour(.) from", R.version.string),
side = 1, line = 4, adj = 1, cex = .66)
# Annotating a filled contour plot
a <- expand.grid(1:20, 1:20)
b <- matrix(a[,1] + a[,2], 20)
filled.contour(x = 1:20, y = 1:20, z = b,
plot.axes = { axis(1); axis(2); points(10, 10) })
## Persian Rug Art:
x <- y <- seq(-4*pi, 4*pi, length.out = 27)
r <- sqrt(outer(x^2, y^2, "+"))
filled.contour(cos(r^2)*exp(-r/(2*pi)), axes = FALSE)
## rather, the key *should* be labeled:
filled.contour(cos(r^2)*exp(-r/(2*pi)), frame.plot = FALSE,
plot.axes = {})
```
r None
`points` Add Points to a Plot
------------------------------
### Description
`points` is a generic function to draw a sequence of points at the specified coordinates. The specified character(s) are plotted, centered at the coordinates.
### Usage
```
points(x, ...)
## Default S3 method:
points(x, y = NULL, type = "p", ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | coordinate vectors of points to plot. |
| `type` | character indicating the type of plotting; actually any of the `type`s as in `<plot.default>`. |
| `...` | Further [graphical parameters](par) may also be supplied as arguments. See ‘Details’. |
### Details
The coordinates can be passed in a plotting structure (a list with `x` and `y` components), a two-column matrix, a time series, .... See `[xy.coords](../../grdevices/html/xy.coords)`. If supplied separately, they must be of the same length.
Graphical parameters commonly used are
`pch`
plotting ‘character’, i.e., symbol to use. This can either be a single character or an integer code for one of a set of graphics symbols. The full set of S symbols is available with `pch = 0:18`, see the examples below. (NB: **R** uses circles instead of the octagons used in S.)
Value `pch = "."` (equivalently `pch = 46`) is handled specially. It is a rectangle of side 0.01 inch (scaled by `cex`). In addition, if `cex = 1` (the default), each side is at least one pixel (1/72 inch on the `[pdf](../../grdevices/html/pdf)`, `[postscript](../../grdevices/html/postscript)` and `[xfig](../../grdevices/html/xfig)` devices).
For other text symbols, `cex = 1` corresponds to the default fontsize of the device, often specified by an argument `pointsize`. For `pch` in `0:25` the default size is about 75% of the character height (see `par("cin")`).
`col`
color code or name, see `<par>`.
`bg`
background (fill) color for the open plot symbols given by `pch = 21:25`.
`cex`
character (or symbol) expansion: a numerical vector. This works as a multiple of `<par>("cex")`.
`lwd`
line width for drawing symbols see `<par>`.
Others less commonly used are `lty` and `lwd` for types such as `"b"` and `"l"`.
The [graphical parameters](par) `pch`, `col`, `bg`, `cex` and `lwd` can be vectors (which will be recycled as needed) giving a value for each point plotted. If lines are to be plotted (e.g., for `type = "b"`) the first element of `lwd` is used.
Points whose `x`, `y`, `pch`, `col` or `cex` value is `NA` are omitted from the plot.
### 'pch' values
Values of `pch` are stored internally as integers. The interpretation is
* `NA_integer_`: no symbol.
* `0:18`: S-compatible vector symbols.
* `19:25`: further **R** vector symbols.
* `26:31`: unused (and ignored).
* `32:127`: ASCII characters.
* `128:255` native characters *only in a single-byte locale and for the symbol font*. (`128:159` are only used on Windows.)
* `-32 ...` Unicode code point (where supported).
Note that unlike S (which uses octagons), symbols `1`, `10`, `13` and `16` use circles. The filled shapes `15:18` do not include a border.
The following **R** plotting symbols can be obtained with `pch = 19:25`: those with `21:25` can be colored and filled with different colors: `col` gives the border color and `bg` the background color (which is "grey" in the figure):
* `pch = 19`: solid circle,
* `pch = 20`: bullet (smaller solid circle, 2/3 the size of `19`),
* `pch = 21`: filled circle,
* `pch = 22`: filled square,
* `pch = 23`: filled diamond,
* `pch = 24`: filled triangle point-up,
* `pch = 25`: filled triangle point-down.
Note that all of these both fill the shape and draw a border. Some care in interpretation is needed when semi-transparent colours are used for both fill and border (and the result might be device-specific and even viewer-specific for `[pdf](../../grdevices/html/pdf)`).
The difference between `pch = 16` and `pch = 19` is that the latter uses a border and so is perceptibly larger when `lwd` is large relative to `cex`.
Values `pch = 26:31` are currently unused and `pch = 32:127` give the ASCII characters. In a single-byte locale `pch = 128:255` give the corresponding character (if any) in the locale's character set. Where supported by the OS, negative values specify a Unicode code point, so e.g. `-0x2642L` is a ‘male sign’ and `-0x20ACL` is the Euro.
A character string consisting of a single character is converted to an integer: `32:127` for ASCII characters, and usually to the Unicode code point otherwise. (In non-Latin-1 single-byte locales, `128:255` will be used for 8-bit characters.)
If `pch` supplied is a logical, integer or character `NA` or an empty character string the point is omitted from the plot.
If `pch` is `NULL` or otherwise of length 0, `par("pch")` is used.
If the symbol font (`<par>(font = 5)`) is used, numerical values should be used for `pch`: the range is `c(32:126, 160:254)` in all locales (but `240` is not defined (used for ‘apple’ on macOS) and `160`, Euro, may not be present).
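A small sketch of the conversions described above (the Unicode value assumes a device and OS with glyph support):

```
## sketch: equivalent ways of specifying plot symbols
plot(1:4, rep(1, 4), type = "n", xlab = "", ylab = "")
points(1, 1, pch = "A")       # single character, converted to code 65
points(2, 1, pch = 65)        # same ASCII symbol, given as an integer
points(3, 1, pch = 19)        # R vector symbol: solid circle
points(4, 1, pch = -0x2642L)  # negative value: Unicode 'male sign',
                              # where supported
```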
### Note
A single-byte encoding may include the characters in `pch = 128:255`, and if it does, a font may not include all (or even any) of them.
Not all negative numbers are valid as Unicode code points, and no check is done. A display device is likely to use a rectangle for (or omit) Unicode code points which are invalid or for which it does not have a glyph in the font used.
What happens for very small or zero values of `cex` is device-dependent: symbols or characters may become invisible or they may be plotted at a fixed minimum size. Circles of zero radius will not be plotted.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[points.formula](plot.formula)` for the formula method; `[plot](plot.default)`, `<lines>`, and the underlying workhorse function `<plot.xy>`.
### Examples
```
require(stats) # for rnorm
plot(-4:4, -4:4, type = "n") # setting up coord. system
points(rnorm(200), rnorm(200), col = "red")
points(rnorm(100)/2, rnorm(100)/2, col = "blue", cex = 1.5)
op <- par(bg = "light blue")
x <- seq(0, 2*pi, length.out = 51)
## something "between type='b' and type='o'":
plot(x, sin(x), type = "o", pch = 21, bg = par("bg"), col = "blue", cex = .6,
main = 'plot(..., type="o", pch=21, bg=par("bg"))')
par(op)
## Not run:
## The figure was produced by calls like
png("pch.png", height = 0.7, width = 7, res = 100, units = "in")
par(mar = rep(0,4))
plot(c(-1, 26), 0:1, type = "n", axes = FALSE)
text(0:25, 0.6, 0:25, cex = 0.5)
points(0:25, rep(0.3, 26), pch = 0:25, bg = "grey")
## End(Not run)
##-------- Showing all the extra & some char graphics symbols ---------
pchShow <-
function(extras = c("*",".", "o","O","0","+","-","|","%","#"),
cex = 3, ## good for both .Device=="postscript" and "x11"
col = "red3", bg = "gold", coltext = "brown", cextext = 1.2,
main = paste("plot symbols : points (... pch = *, cex =",
cex,")"))
{
nex <- length(extras)
np <- 26 + nex
ipch <- 0:(np-1)
k <- floor(sqrt(np))
dd <- c(-1,1)/2
rx <- dd + range(ix <- ipch %/% k)
ry <- dd + range(iy <- 3 + (k-1)- ipch %% k)
pch <- as.list(ipch) # list with integers & strings
if(nex > 0) pch[26+ 1:nex] <- as.list(extras)
plot(rx, ry, type = "n", axes = FALSE, xlab = "", ylab = "", main = main)
abline(v = ix, h = iy, col = "lightgray", lty = "dotted")
for(i in 1:np) {
pc <- pch[[i]]
## 'col' symbols with a 'bg'-colored interior (where available) :
points(ix[i], iy[i], pch = pc, col = col, bg = bg, cex = cex)
if(cextext > 0)
text(ix[i] - 0.3, iy[i], pc, col = coltext, cex = cextext)
}
}
pchShow()
pchShow(c("o","O","0"), cex = 2.5)
pchShow(NULL, cex = 4, cextext = 0, main = NULL)
## ------------ test code for various pch specifications -------------
# Try this in various font families (including Hershey)
# and locales. Use sign = -1 asserts we want Latin-1.
# Standard cases in a MBCS locale will not plot the top half.
TestChars <- function(sign = 1, font = 1, ...)
{
MB <- l10n_info()$MBCS
r <- if(font == 5) { sign <- 1; c(32:126, 160:254)
} else if(MB) 32:126 else 32:255
if (sign == -1) r <- c(32:126, 160:255)
par(pty = "s")
plot(c(-1,16), c(-1,16), type = "n", xlab = "", ylab = "",
xaxs = "i", yaxs = "i",
main = sprintf("sign = %d, font = %d", sign, font))
grid(17, 17, lty = 1) ; mtext(paste("MBCS:", MB))
for(i in r) try(points(i%%16, i%/%16, pch = sign*i, font = font,...))
}
TestChars()
try(TestChars(sign = -1))
TestChars(font = 5) # Euro might be at 160 (0+10*16).
# macOS has apple at 240 (0+15*16).
try(TestChars(-1, font = 2)) # bold
```
r None
`plot.table` Plot Methods for table Objects
--------------------------------------------
### Description
This is a method of the generic `plot` function for (contingency) `[table](../../base/html/table)` objects. Whereas for two- and more dimensional tables, a `<mosaicplot>` is drawn, one-dimensional ones are plotted as bars.
### Usage
```
## S3 method for class 'table'
plot(x, type = "h", ylim = c(0, max(x)), lwd = 2,
xlab = NULL, ylab = NULL, frame.plot = is.num, ...)
## S3 method for class 'table'
points(x, y = NULL, type = "h", lwd = 2, ...)
## S3 method for class 'table'
lines(x, y = NULL, type = "h", lwd = 2, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a `[table](../../base/html/table)` (like) object. |
| `y` | Must be `NULL`: there to protect against incorrect calls. |
| `type` | plotting type. |
| `ylim` | range of y-axis. |
| `lwd` | line width for bars when `type = "h"` is used in the 1D case. |
| `xlab, ylab` | x- and y-axis labels. |
| `frame.plot` | logical indicating if a frame (`<box>`) should be drawn in the 1D case. Defaults to true when `x` has `[dimnames](../../base/html/dimnames)` coerce-able to numbers. |
| `...` | further graphical arguments, see `<plot.default>`. `axes = FALSE` is accepted. |
### See Also
`<plot.factor>`, the `[plot](plot.default)` method for factors.
### Examples
```
## 1-d tables
(Poiss.tab <- table(N = stats::rpois(200, lambda = 5)))
plot(Poiss.tab, main = "plot(table(rpois(200, lambda = 5)))")
plot(table(state.division))
## 4-D :
plot(Titanic, main ="plot(Titanic, main= *)")
```
r None
`hist` Histograms
------------------
### Description
The generic function `hist` computes a histogram of the given data values. If `plot = TRUE`, the resulting object of [class](../../base/html/class) `"histogram"` is plotted by `[plot.histogram](plothistogram)`, before it is returned.
### Usage
```
hist(x, ...)
## Default S3 method:
hist(x, breaks = "Sturges",
freq = NULL, probability = !freq,
include.lowest = TRUE, right = TRUE,
density = NULL, angle = 45, col = "lightgray", border = NULL,
main = paste("Histogram of" , xname),
xlim = range(breaks), ylim = NULL,
xlab = xname, ylab,
axes = TRUE, plot = TRUE, labels = FALSE,
nclass = NULL, warn.unused = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a vector of values for which the histogram is desired. |
| `breaks` | one of: * a vector giving the breakpoints between histogram cells,
* a function to compute the vector of breakpoints,
* a single number giving the number of cells for the histogram,
* a character string naming an algorithm to compute the number of cells (see ‘Details’),
* a function to compute the number of cells.
In the last three cases the number is a suggestion only; as the breakpoints will be set to `[pretty](../../base/html/pretty)` values, the number is limited to `1e6` (with a warning if it was larger). If `breaks` is a function, the `x` vector is supplied to it as the only argument (and the number of breaks is only limited by the amount of available memory). |
| `freq` | logical; if `TRUE`, the histogram graphic is a representation of frequencies, the `counts` component of the result; if `FALSE`, probability densities, component `density`, are plotted (so that the histogram has a total area of one). Defaults to `TRUE` *if and only if* `breaks` are equidistant (and `probability` is not specified). |
| `probability` | an *alias* for `!freq`, for S compatibility. |
| `include.lowest` | logical; if `TRUE`, an `x[i]` equal to the `breaks` value will be included in the first (or last, for `right = FALSE`) bar. This will be ignored (with a warning) unless `breaks` is a vector. |
| `right` | logical; if `TRUE`, the histogram cells are right-closed (left open) intervals. |
| `density` | the density of shading lines, in lines per inch. The default value of `NULL` means that no shading lines are drawn. Non-positive values of `density` also inhibit the drawing of shading lines. |
| `angle` | the slope of shading lines, given as an angle in degrees (counter-clockwise). |
| `col` | a colour to be used to fill the bars. The default of `NULL` yields unfilled bars. |
| `border` | the color of the border around the bars. The default is to use the standard foreground color. |
| `main, xlab, ylab` | main title and axis labels: these arguments to `<title>()` get “smart” defaults here, e.g., the default `ylab` is `"Frequency"` iff `freq` is true. |
| `xlim, ylim` | the range of x and y values with sensible defaults. Note that `xlim` is *not* used to define the histogram (breaks), but only for plotting (when `plot = TRUE`). |
| `axes` | logical. If `TRUE` (default), axes are drawn if the plot is drawn. |
| `plot` | logical. If `TRUE` (default), a histogram is plotted, otherwise a list of breaks and counts is returned. In the latter case, a warning is used if (typically graphical) arguments are specified that only apply to the `plot = TRUE` case. |
| `labels` | logical or character string. Additionally draw labels on top of bars, if not `FALSE`; see `[plot.histogram](plothistogram)`. |
| `nclass` | numeric (integer). For S(-PLUS) compatibility only, `nclass` is equivalent to `breaks` for a scalar or character argument. |
| `warn.unused` | logical. If `plot = FALSE` and `warn.unused = TRUE`, a warning will be issued when graphical parameters are passed to `hist.default()`. |
| `...` | further arguments and [graphical parameters](par) passed to `[plot.histogram](plothistogram)` and thence to `<title>` and `<axis>` (if `plot = TRUE`). |
### Details
The definition of *histogram* differs by source (with country-specific biases). **R**'s default with equi-spaced breaks (also the default) is to plot the counts in the cells defined by `breaks`. Thus the height of a rectangle is proportional to the number of points falling into the cell, as is the area *provided* the breaks are equally-spaced.
The default with non-equi-spaced breaks is to give a plot of area one, in which the *area* of the rectangles is the fraction of the data points falling in the cells.
If `right = TRUE` (default), the histogram cells are intervals of the form `(a, b]`, i.e., they include their right-hand endpoint, but not their left one, with the exception of the first cell when `include.lowest` is `TRUE`.
For `right = FALSE`, the intervals are of the form `[a, b)`, and `include.lowest` means ‘*include highest*’.
A numerical tolerance of *1e-7* times the median bin size (for more than four bins, otherwise the median is substituted) is applied when counting entries on the edges of bins. This is not included in the reported `breaks` nor in the calculation of `density`.
The default for `breaks` is `"Sturges"`: see `[nclass.Sturges](../../grdevices/html/nclass)`. Other names for which algorithms are supplied are `"Scott"` and `"FD"` / `"Freedman-Diaconis"` (with corresponding functions `[nclass.scott](../../grdevices/html/nclass)` and `[nclass.FD](../../grdevices/html/nclass)`). Case is ignored and partial matching is used. Alternatively, a function can be supplied which will compute the intended number of breaks or the actual breakpoints as a function of `x`.
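For example, naming an algorithm and passing the corresponding function should agree, and a function may instead return the breakpoints themselves; a sketch:

```
## sketch: three equivalent ways to control the breaks
x <- stats::rnorm(1000)
h1 <- hist(x, breaks = "FD", plot = FALSE)              # algorithm by name
h2 <- hist(x, breaks = grDevices::nclass.FD, plot = FALSE)  # same, as function
stopifnot(identical(h1$breaks, h2$breaks))
## a function can also return the actual breakpoints:
h3 <- hist(x, breaks = function(x) seq(min(x), max(x), length.out = 11),
           plot = FALSE)
```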
### Value
an object of class `"histogram"` which is a list with components:
| | |
| --- | --- |
| `breaks` | the *n+1* cell boundaries (= `breaks` if that was a vector). These are the nominal breaks, not with the boundary fuzz. |
| `counts` | *n* integers; for each cell, the number of `x[]` inside. |
| `density` | values *f^(x[i])*, as estimated density values. If `all(diff(breaks) == 1)`, they are the relative frequencies `counts/n` and in general satisfy *sum[i; f^(x[i]) (b[i+1]-b[i])] = 1*, where *b[i]* = `breaks[i]`. |
| `mids` | the *n* cell midpoints. |
| `xname` | a character string with the actual `x` argument name. |
| `equidist` | logical, indicating if the distances between `breaks` are all the same. |
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Venables, W. N. and Ripley. B. D. (2002) *Modern Applied Statistics with S*. Springer.
### See Also
`[nclass.Sturges](../../grdevices/html/nclass)`, `<stem>`, `[density](../../stats/html/density)`, `[truehist](../../mass/html/truehist)` in package [MASS](https://CRAN.R-project.org/package=MASS).
Typical plots with vertical bars are *not* histograms. Consider `<barplot>` or `[plot](plot.default)(*, type = "h")` for such bar plots.
### Examples
```
op <- par(mfrow = c(2, 2))
hist(islands)
utils::str(hist(islands, col = "gray", labels = TRUE))
hist(sqrt(islands), breaks = 12, col = "lightblue", border = "pink")
##-- For non-equidistant breaks, counts should NOT be graphed unscaled:
r <- hist(sqrt(islands), breaks = c(4*0:5, 10*3:5, 70, 100, 140),
col = "blue1")
text(r$mids, r$density, r$counts, adj = c(.5, -.5), col = "blue3")
sapply(r[2:3], sum)
sum(r$density * diff(r$breaks)) # == 1
lines(r, lty = 3, border = "purple") # -> lines.histogram(*)
par(op)
require(utils) # for str
str(hist(islands, breaks = 12, plot = FALSE)) #-> 10 (~= 12) breaks
str(hist(islands, breaks = c(12,20,36,80,200,1000,17000), plot = FALSE))
hist(islands, breaks = c(12,20,36,80,200,1000,17000), freq = TRUE,
main = "WRONG histogram") # and warning
## Extreme outliers; the "FD" rule would take very large number of 'breaks':
XXL <- c(1:9, c(-1,1)*1e300)
hh <- hist(XXL, "FD") # did not work in R <= 3.4.1; now gives warning
## pretty() determines how many counts are used (platform dependently!):
length(hh$breaks) ## typically 1 million -- though 1e6 was "a suggestion only"
require(stats)
set.seed(14)
x <- rchisq(100, df = 4)
## Comparing data with a model distribution should be done with qqplot()!
qqplot(x, qchisq(ppoints(x), df = 4)); abline(0, 1, col = 2, lty = 2)
## if you really insist on using hist() ... :
hist(x, freq = FALSE, ylim = c(0, 0.2))
curve(dchisq(x, df = 4), col = 2, lty = 2, lwd = 2, add = TRUE)
```
r None
`hist.POSIXt` Histogram of a Date or Date-Time Object
------------------------------------------------------
### Description
Method for `<hist>` applied to date or date-time objects.
### Usage
```
## S3 method for class 'POSIXt'
hist(x, breaks, ...,
xlab = deparse1(substitute(x)),
plot = TRUE, freq = FALSE,
start.on.monday = TRUE, format, right = TRUE)
## S3 method for class 'Date'
hist(x, breaks, ...,
xlab = deparse1(substitute(x)),
plot = TRUE, freq = FALSE,
start.on.monday = TRUE, format, right = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | an object inheriting from class `"POSIXt"` or `"Date"`. |
| `breaks` | a vector of cut points *or* number giving the number of intervals which `x` is to be cut into *or* an interval specification, one of `"days"`, `"weeks"`, `"months"`, `"quarters"` or `"years"`, plus `"secs"`, `"mins"`, `"hours"` for date-time objects. |
| `...` | [graphical parameters](par), or arguments to `[hist.default](hist)` such as `include.lowest`, `right` and `labels`. |
| `xlab` | a character string giving the label for the x axis, if plotted. |
| `plot` | logical. If `TRUE` (default), a histogram is plotted, otherwise a list of breaks and counts is returned. |
| `freq` | logical; if `TRUE`, the histogram graphic is a representation of frequencies, i.e., the `counts` component of the result; if `FALSE`, *relative* frequencies (probabilities) are plotted. |
| `start.on.monday` | logical. If `breaks = "weeks"`, should the week start on Mondays or Sundays? |
| `format` | for the x-axis labels. See `[strptime](../../base/html/strptime)`. |
| `right` | logical; if `TRUE`, the histogram cells are right-closed (left open) intervals. |
### Details
Note that unlike the default method, `breaks` is a required argument.
Using `breaks = "quarters"` will create intervals of 3 calendar months, with the intervals beginning on January 1, April 1, July 1 or October 1, based upon `min(x)` as appropriate.
With the default `right = TRUE`, breaks will be set on the last day of the previous period when `breaks` is `"months"`, `"quarters"` or `"years"`. Use `right = FALSE` to set them to the first day of the interval shown in each bar.
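A sketch of the interval specifications described above, using a hypothetical vector of dates:

```
## sketch: monthly bins for a vector of Date values
dd <- as.Date("2023-01-15") + seq(0, 330, by = 30)
hist(dd, breaks = "months", freq = TRUE, format = "%b")
## with right = FALSE the breaks fall on the first day of each
## month shown rather than the last day of the previous one:
h <- hist(dd, breaks = "months", right = FALSE, plot = FALSE)
h$breaks
```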
### Value
An object of class `"histogram"`: see `<hist>`.
### See Also
`[seq.POSIXt](../../base/html/seq.posixt)`, `[axis.POSIXct](axis.posixct)`, `<hist>`
### Examples
```
hist(.leap.seconds, "years", freq = TRUE)
hist(.leap.seconds,
seq(ISOdate(1970, 1, 1), ISOdate(2020, 1, 1), "5 years"))
rug(.leap.seconds, lwd=2)
## 100 random dates in a 10-week period
random.dates <- as.Date("2001/1/1") + 70*stats::runif(100)
hist(random.dates, "weeks", format = "%d %b")
```
r None
`segments` Add Line Segments to a Plot
---------------------------------------
### Description
Draw line segments between pairs of points.
### Usage
```
segments(x0, y0, x1 = x0, y1 = y0,
col = par("fg"), lty = par("lty"), lwd = par("lwd"),
...)
```
### Arguments
| | |
| --- | --- |
| `x0, y0` | coordinates of points **from** which to draw. |
| `x1, y1` | coordinates of points **to** which to draw. At least one must be supplied. |
| `col, lty, lwd` | [graphical parameters](par) as in `<par>`, possibly vectors. `NA` values in `col` cause the segment to be omitted. |
| `...` | further [graphical parameters](par) (from `<par>`), such as `xpd` and the line characteristics `lend`, `ljoin` and `lmitre`. |
### Details
For each `i`, a line segment is drawn between the point `(x0[i], y0[i])` and the point `(x1[i], y1[i])`. The coordinate vectors will be recycled to the length of the longest.
The [graphical parameters](par) `col`, `lty` and `lwd` can be vectors of length greater than one and will be recycled if necessary.
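A short sketch of this recycling behaviour:

```r
## Eight vertical segments; col, lty and lwd are each recycled to length 8
plot(c(0, 9), c(0, 9), type = "n", xlab = "", ylab = "")
segments(x0 = 1:8, y0 = 1, x1 = 1:8, y1 = 8,
         col = 1:4, lty = 1:2, lwd = c(1, 3))
```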
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<arrows>`, `<polygon>` for slightly easier and less flexible line drawing, and `<lines>` for the usual polygons.
### Examples
```
x <- stats::runif(12); y <- stats::rnorm(12)
i <- order(x, y); x <- x[i]; y <- y[i]
plot(x, y, main = "arrows(.) and segments(.)")
## draw arrows from point to point :
s <- seq(length(x)-1) # one shorter than data
arrows(x[s], y[s], x[s+1], y[s+1], col= 1:3)
s <- s[-length(s)]
segments(x[s], y[s], x[s+2], y[s+2], col= 'pink')
```
`spineplot` Spine Plots and Spinograms
---------------------------------------
### Description
Spine plots are a special case of mosaic plots, and can be seen as a generalization of stacked (or highlighted) bar plots. Analogously, spinograms are an extension of histograms.
### Usage
```
spineplot(x, ...)
## Default S3 method:
spineplot(x, y = NULL,
breaks = NULL, tol.ylab = 0.05, off = NULL,
ylevels = NULL, col = NULL,
main = "", xlab = NULL, ylab = NULL,
xaxlabels = NULL, yaxlabels = NULL,
xlim = NULL, ylim = c(0, 1), axes = TRUE, ...)
## S3 method for class 'formula'
spineplot(formula, data = NULL,
breaks = NULL, tol.ylab = 0.05, off = NULL,
ylevels = NULL, col = NULL,
main = "", xlab = NULL, ylab = NULL,
xaxlabels = NULL, yaxlabels = NULL,
xlim = NULL, ylim = c(0, 1), axes = TRUE, ...,
subset = NULL, drop.unused.levels = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | an object, the default method expects either a single variable (interpreted to be the explanatory variable) or a 2-way table. See details. |
| `y` | a `"factor"` interpreted to be the dependent variable |
| `formula` | a `"formula"` of type `y ~ x` with a single dependent `"factor"` and a single explanatory variable. |
| `data` | an optional data frame. |
| `breaks` | if the explanatory variable is numeric, this controls how it is discretized. `breaks` is passed to `<hist>` and can be a list of arguments. |
| `tol.ylab` | convenience tolerance parameter for y-axis annotation. If the distance between two labels drops under this threshold, they are plotted equidistantly. |
| `off` | vertical offset between the bars (in per cent). It is fixed to `0` for spinograms and defaults to `2` for spine plots. |
| `ylevels` | a character or numeric vector specifying in which order the levels of the dependent variable should be plotted. |
| `col` | a vector of fill colors of the same length as `levels(y)`. The default is to call `[gray.colors](../../grdevices/html/gray.colors)`. |
| `main, xlab, ylab` | character strings for annotation |
| `xaxlabels, yaxlabels` | character vectors for annotation of x and y axis. Default to `levels(y)` and `levels(x)`, respectively for the spine plot. For `xaxlabels` in the spinogram, the breaks are used. |
| `xlim, ylim` | the range of x and y values with sensible defaults. |
| `axes` | logical. If `FALSE` all axes (including those giving level names) are suppressed. |
| `...` | additional arguments passed to `<rect>`. |
| `subset` | an optional vector specifying a subset of observations to be used for plotting. |
| `drop.unused.levels` | should factors have unused levels dropped? Defaults to `FALSE`. |
### Details
`spineplot` creates either a spinogram or a spine plot. It can be called via `spineplot(x, y)` or `spineplot(y ~ x)` where `y` is interpreted to be the dependent variable (and has to be categorical) and `x` the explanatory variable. `x` can be either categorical (then a spine plot is created) or numerical (then a spinogram is plotted). Additionally, `spineplot` can also be called with only a single argument which then has to be a 2-way table, interpreted to correspond to `table(x, y)`.
Both spine plots and spinograms are essentially mosaic plots with special formatting of spacing and shading. Conceptually, they plot *P(y | x)* against *P(x)*. For the spine plot (where both *x* and *y* are categorical), both quantities are approximated by the corresponding empirical relative frequencies. For the spinogram (where *x* is numerical), *x* is first discretized (by calling `<hist>` with the `breaks` argument) and then empirical relative frequencies are taken.
Thus, spine plots can also be seen as a generalization of stacked bar plots where not the heights but the widths of the bars corresponds to the relative frequencies of `x`. The heights of the bars then correspond to the conditional relative frequencies of `y` in every `x` group. Analogously, spinograms extend stacked histograms.
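The two calling conventions described above can be sketched with invented data:

```r
x <- factor(rep(c("a", "b"), c(30, 20)))
y <- factor(rep(c("lo", "hi", "lo", "hi"), c(20, 10, 5, 15)))
spineplot(y ~ x)        # formula interface
spineplot(table(x, y))  # equivalent call with a 2-way table
```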
### Value
The table visualized is returned invisibly.
### Author(s)
Achim Zeileis [[email protected]](mailto:[email protected])
### References
Friendly, M. (1994). Mosaic displays for multi-way contingency tables. *Journal of the American Statistical Association*, **89**, 190–200. doi: [10.2307/2291215](https://doi.org/10.2307/2291215).
Hartigan, J.A., and Kleiner, B. (1984). A mosaic of television ratings. *The American Statistician*, **38**, 32–35. doi: [10.2307/2683556](https://doi.org/10.2307/2683556).
Hofmann, H., Theus, M. (2005), *Interactive graphics for visualizing conditional distributions*. Unpublished Manuscript.
Hummel, J. (1996). Linked bar charts: Analysing categorical data graphically. *Computational Statistics*, **11**, 23–33.
### See Also
`<mosaicplot>`, `<hist>`, `<cdplot>`
### Examples
```
## treatment and improvement of patients with rheumatoid arthritis
treatment <- factor(rep(c(1, 2), c(43, 41)), levels = c(1, 2),
labels = c("placebo", "treated"))
improved <- factor(rep(c(1, 2, 3, 1, 2, 3), c(29, 7, 7, 13, 7, 21)),
levels = c(1, 2, 3),
labels = c("none", "some", "marked"))
## (dependence on a categorical variable)
(spineplot(improved ~ treatment))
## applications and admissions by department at UC Berkeley
## (two-way tables)
(spineplot(marginSums(UCBAdmissions, c(3, 2)),
main = "Applications at UCB"))
(spineplot(marginSums(UCBAdmissions, c(3, 1)),
main = "Admissions at UCB"))
## NASA space shuttle o-ring failures
fail <- factor(c(2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1,
1, 1, 1, 2, 1, 1, 1, 1, 1),
levels = c(1, 2), labels = c("no", "yes"))
temperature <- c(53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70,
70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81)
## (dependence on a numerical variable)
(spineplot(fail ~ temperature))
(spineplot(fail ~ temperature, breaks = 3))
(spineplot(fail ~ temperature, breaks = quantile(temperature)))
## highlighting for failures
spineplot(fail ~ temperature, ylevels = 2:1)
```
`dotchart` Cleveland's Dot Plots
---------------------------------
### Description
Draw a Cleveland dot plot.
### Usage
```
dotchart(x, labels = NULL, groups = NULL, gdata = NULL, offset = 1/8,
ann = par("ann"), xaxt = par("xaxt"), frame.plot = TRUE, log = "",
cex = par("cex"), pt.cex = cex,
pch = 21, gpch = 21, bg = par("bg"),
color = par("fg"), gcolor = par("fg"), lcolor = "gray",
xlim = range(x[is.finite(x)]),
main = NULL, xlab = NULL, ylab = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | either a vector or matrix of numeric values (`NA`s are allowed). If `x` is a matrix the overall plot consists of juxtaposed dotplots for each row. Inputs which satisfy `[is.numeric](../../base/html/numeric)(x)` but not `is.vector(x) || is.matrix(x)` are coerced by `[as.numeric](../../base/html/numeric)`, with a warning. |
| `labels` | a vector of labels for each point. For vectors the default is to use `names(x)` and for matrices the row labels `dimnames(x)[[1]]`. |
| `groups` | an optional factor indicating how the elements of `x` are grouped. If `x` is a matrix, `groups` will default to the columns of `x`. |
| `gdata` | data values for the groups. This is typically a summary such as the median or mean of each group. |
| `offset` | offset in inches of `ylab` and `labels`; was hardwired to 0.4 before **R** 4.0.0. |
| `ann` | a `[logical](../../base/html/logical)` value indicating whether the default annotation (title and x and y axis labels) should appear on the plot. |
| `xaxt` | a string indicating the x-axis style; use `"n"` to suppress and see also `<par>("xaxt")`. |
| `frame.plot` | a logical indicating whether a box should be drawn around the plot. |
| `log` | a character string indicating if one or the other axis should be logarithmic, see `<plot.default>`. |
| `cex` | the character size to be used. Setting `cex` to a value smaller than one can be a useful way of avoiding label overlap. Unlike many other graphics functions, this sets the actual size, not a multiple of `par("cex")`. |
| `pt.cex` | the `cex` to be applied to plotting symbols. This behaves like `cex` in `plot()`. |
| `pch` | the plotting character or symbol to be used. |
| `gpch` | the plotting character or symbol to be used for group values. |
| `bg` | the background color of plotting characters or symbols to be used; use `<par>(bg= *)` to set the background color of the whole plot. |
| `color` | the color(s) to be used for points and labels. |
| `gcolor` | the single color to be used for group labels and values. |
| `lcolor` | the color(s) to be used for the horizontal lines. |
| `xlim` | horizontal range for the plot, see `<plot.window>`, for example. |
| `main` | overall title for the plot, see `<title>`. |
| `xlab, ylab` | axis annotations as in `title`. |
| `...` | [graphical parameters](par) can also be specified as arguments. |
### Value
This function is invoked for its side effect, which is to produce two variants of dotplots as described in Cleveland (1985).
Dot plots are a reasonable substitute for bar plots.
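A sketch of the `gdata` argument, adding a per-group summary point (group means here) drawn with `gpch`/`gcolor`:

```r
## VADeaths is a matrix, so groups default to its columns
dotchart(VADeaths, gdata = colMeans(VADeaths),
         gpch = 17, gcolor = "red",
         main = "Death Rates in Virginia - 1940 (group means in red)")
```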
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Cleveland, W. S. (1985) *The Elements of Graphing Data.* Monterey, CA: Wadsworth.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### Examples
```
dotchart(VADeaths, main = "Death Rates in Virginia - 1940")
op <- par(xaxs = "i") # 0 -- 100%
dotchart(t(VADeaths), xlim = c(0,100), bg = "skyblue",
main = "Death Rates in Virginia - 1940", xlab = "rate [ % ]",
ylab = "Grouping: Age x Urbanity . Gender")
par(op)
```
`plot.design` Plot Univariate Effects of a Design or Model
-----------------------------------------------------------
### Description
Plot univariate effects of one or more `[factor](../../base/html/factor)`s, typically for a designed experiment as analyzed by `[aov](../../stats/html/aov)()`.
### Usage
```
plot.design(x, y = NULL, fun = mean, data = NULL, ...,
ylim = NULL, xlab = "Factors", ylab = NULL,
main = NULL, ask = NULL, xaxt = par("xaxt"),
axes = TRUE, xtick = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | either a data frame containing the design factors and optionally the response, or a `[formula](../../stats/html/formula)` or `[terms](../../stats/html/terms)` object. |
| `y` | the response, if not given in x. |
| `fun` | a function (or name of one) to be applied to each subset. It must return one number for a numeric (vector) input. |
| `data` | data frame containing the variables referenced by `x` when that is formula-like. |
| `...` | [graphical parameters](par) such as `col`, see `<par>`. |
| `ylim` | range of y values, as in `<plot.default>`. |
| `xlab` | x axis label, see `<title>`. |
| `ylab` | y axis label with a ‘smart’ default. |
| `main` | main title, see `<title>`. |
| `ask` | logical indicating if the user should be asked before a new page is started – in the case of multiple y's. |
| `xaxt` | character giving the type of x axis. |
| `axes` | logical indicating if axes should be drawn. |
| `xtick` | logical indicating if ticks (one per factor) should be drawn on the x axis. |
### Details
The supplied function will be called once for each level of each factor in the design and the plot will show these summary values. The levels of a particular factor are shown along a vertical line, and the overall value of `fun()` for the response is drawn as a horizontal line.
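Any summary that returns a single number per subset can be supplied as `fun`; a short sketch:

```r
plot.design(warpbreaks, fun = median)                     # medians per factor level
plot.design(warpbreaks, fun = function(y) diff(range(y))) # spread per factor level
```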
### Note
Considerable effort was made to keep this closely compatible with the S version. However, `col` (and `fg`) specifications have different effects.
In S this was a method of the `[plot](plot.default)` generic function for `design` objects.
### Author(s)
Roberto Frisullo and Martin Maechler
### References
Chambers, J. M. and Hastie, T. J. eds (1992) *Statistical Models in S*. Chapman & Hall, London, **the** *white book*, pp. 546–7 (and 163–4).
Freeny, A. E. and Landwehr, J. M. (1990) Displays for data from large designed experiments; Computer Science and Statistics: Proc. 22nd Symp. Interface, 117–126, Springer Verlag.
### See Also
`[interaction.plot](../../stats/html/interaction.plot)` for a ‘standard graphic’ of designed experiments.
### Examples
```
require(stats)
plot.design(warpbreaks) # automatic for data frame with one numeric var.
Form <- breaks ~ wool + tension
summary(fm1 <- aov(Form, data = warpbreaks))
plot.design( Form, data = warpbreaks, col = 2) # same as above
## More than one y :
utils::str(esoph)
plot.design(esoph) ## two plots; if interactive you are "ask"ed
## or rather, compare mean and median:
op <- par(mfcol = 1:2)
plot.design(ncases/ncontrols ~ ., data = esoph, ylim = c(0, 0.8))
plot.design(ncases/ncontrols ~ ., data = esoph, ylim = c(0, 0.8),
fun = median)
par(op)
```
`units` Graphical Units
------------------------
### Description
`xinch` and `yinch` convert the specified number of inches given as their arguments into the correct units for plotting with graphics functions. Usually, this only makes sense when normal coordinates are used, i.e., *no* `log` scale (see the `log` argument to `<par>`).
`xyinch` does the same for a pair of numbers `xy`, simultaneously.
### Usage
```
xinch(x = 1, warn.log = TRUE)
yinch(y = 1, warn.log = TRUE)
xyinch(xy = 1, warn.log = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric vector |
| `xy` | numeric of length 1 or 2. |
| `warn.log` | logical; if `TRUE`, a warning is printed in case of active log scale. |
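The conversion is essentially the user-coordinate range divided by the plot size in inches; a sketch (run after a plot has been set up):

```r
plot(1:10)
## xinch(x) should equal x * (usr range in x) / (plot width in inches)
stopifnot(all.equal(xinch(2),
                    2 * diff(par("usr")[1:2]) / par("pin")[1]))
```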
### Examples
```
all(c(xinch(), yinch()) == xyinch()) # TRUE
xyinch()
xyinch #- to see that is really delta{"usr"} / "pin"
## plot labels offset 0.12 inches to the right
## of plotted symbols in a plot
with(mtcars, {
plot(mpg, disp, pch = 19, main = "Motor Trend Cars")
text(mpg + xinch(0.12), disp, row.names(mtcars),
adj = 0, cex = .7, col = "blue")
})
```
`smoothScatter` Scatterplots with Smoothed Densities Color Representation
--------------------------------------------------------------------------
### Description
`smoothScatter` produces a smoothed color density representation of a scatterplot, obtained through a (2D) kernel density estimate.
### Usage
```
smoothScatter(x, y = NULL, nbin = 128, bandwidth,
colramp = colorRampPalette(c("white", blues9)),
nrpoints = 100, ret.selection = FALSE,
pch = ".", cex = 1, col = "black",
transformation = function(x) x^.25,
postPlotHook = box,
xlab = NULL, ylab = NULL, xlim, ylim,
xaxs = par("xaxs"), yaxs = par("yaxs"), ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | the `x` and `y` arguments provide the x and y coordinates for the plot. Any reasonable way of defining the coordinates is acceptable. See the function `[xy.coords](../../grdevices/html/xy.coords)` for details. If supplied separately, they must be of the same length. |
| `nbin` | numeric vector of length one (for both directions) or two (for x and y separately) specifying the number of equally spaced grid points for the density estimation; directly used as `gridsize` in `[bkde2D](../../kernsmooth/html/bkde2d)()`. |
| `bandwidth` | numeric vector (length 1 or 2) of smoothing bandwidth(s). If missing, a more or less useful default is used. `bandwidth` is subsequently passed to function `[bkde2D](../../kernsmooth/html/bkde2d)`. |
| `colramp` | function accepting an integer `n` as an argument and returning `n` colors. |
| `nrpoints` | number of points to be superimposed on the density image. The first `nrpoints` points from those areas of lowest regional densities will be plotted. Adding points to the plot allows for the identification of outliers. If all points are to be plotted, choose `nrpoints = Inf`. |
| `ret.selection` | `[logical](../../base/html/logical)` indicating to return the ordered indices of “low density” points if `nrpoints > 0`. |
| `pch, cex, col` | arguments passed to `<points>`, when `nrpoints > 0`: point symbol, character expansion factor and color, see also `<par>`. |
| `transformation` | function mapping the density scale to the color scale. |
| `postPlotHook` | either `NULL` or a function which will be called (with no arguments) after `<image>`. |
| `xlab, ylab` | character strings to be used as axis labels, passed to `<image>`. |
| `xlim, ylim` | numeric vectors of length 2 specifying axis limits. |
| `xaxs, yaxs, ...` | further arguments passed to `<image>`, e.g., `add=TRUE` or `useRaster=TRUE`. |
### Details
`smoothScatter` produces a smoothed version of a scatter plot. Two dimensional (kernel density) smoothing is performed by `[bkde2D](../../kernsmooth/html/bkde2d)` from package [KernSmooth](https://CRAN.R-project.org/package=KernSmooth). See the examples for how to use this function together with `<pairs>`.
### Value
If `ret.selection` is true, a vector of integers of length `nrpoints` (or smaller, if there are fewer finite points inside `xlim` and `ylim`) with the indices of the low-density points drawn, ordered with lowest density first.
### Author(s)
Florian Hahne at FHCRC, originally
### See Also
`[bkde2D](../../kernsmooth/html/bkde2d)` from package [KernSmooth](https://CRAN.R-project.org/package=KernSmooth); `[densCols](../../grdevices/html/denscols)` which uses the same smoothing computations and `[blues9](../../grdevices/html/denscols)` in package grDevices.
`[scatter.smooth](../../stats/html/scatter.smooth)` adds a `[loess](../../stats/html/loess)` regression smoother to a scatter plot.
### Examples
```
## A largish data set
n <- 10000
x1 <- matrix(rnorm(n), ncol = 2)
x2 <- matrix(rnorm(n, mean = 3, sd = 1.5), ncol = 2)
x <- rbind(x1, x2)
oldpar <- par(mfrow = c(2, 2), mar=.1+c(3,3,1,1), mgp = c(1.5, 0.5, 0))
smoothScatter(x, nrpoints = 0)
smoothScatter(x)
## a different color scheme:
Lab.palette <- colorRampPalette(c("blue", "orange", "red"), space = "Lab")
i.s <- smoothScatter(x, colramp = Lab.palette,
## pch=NA: do not draw them
nrpoints = 250, ret.selection=TRUE)
## label the 20 very lowest-density points, the "outliers" (with obs. number):
i.20 <- i.s[1:20]
text(x[i.20,], labels = i.20, cex= 0.75)
## somewhat similar, using identical smoothing computations,
## but considerably *less* efficient for really large data:
plot(x, col = densCols(x), pch = 20)
## use with pairs:
par(mfrow = c(1, 1))
y <- matrix(rnorm(40000), ncol = 4) + 3*rnorm(10000)
y[, c(2,4)] <- -y[, c(2,4)]
pairs(y, panel = function(...) smoothScatter(..., nrpoints = 0, add = TRUE),
gap = 0.2)
par(oldpar)
```
`plot.window` Set up World Coordinates for Graphics Window
-----------------------------------------------------------
### Description
This function sets up the world coordinate system for a graphics window. It is called by higher level functions such as `<plot.default>` (*after* `[plot.new](frame)`).
### Usage
```
plot.window(xlim, ylim, log = "", asp = NA, ...)
```
### Arguments
| | |
| --- | --- |
| `xlim, ylim` | numeric vectors of length 2, giving the x and y coordinates ranges. |
| `log` | character; indicating which axes should be in log scale. |
| `asp` | numeric, giving the **asp**ect ratio y/x, see ‘Details’. |
| `...` | further [graphical parameters](par) as in `<par>`. The relevant ones are `xaxs`, `yaxs` and `lab`. |
### Details
asp:
If `asp` is a finite positive value then the window is set up so that one data unit in the *y* direction is equal in length to `asp` *\** one data unit in the *x* direction.
Note that in this case, `<par>("usr")` is no longer determined by, e.g., `par("xaxs")`, but rather by `asp` and the device's aspect ratio. (See what happens if you interactively resize the plot device after running the example below!)
The special case `asp == 1` produces plots where distances between points are represented accurately on screen. Values with `asp > 1` can be used to produce more accurate maps when using latitude and longitude.
Note that the coordinate ranges will be extended by 4% if the appropriate [graphical parameter](par) `xaxs` or `yaxs` has value `"r"` (which is the default).
To reverse an axis, use `xlim` or `ylim` of the form `c(hi, lo)`.
The function attempts to produce a plausible set of scales if one or both of `xlim` and `ylim` is of length one or the two values given are identical, but it is better to avoid that case.
Usually, one should rather use the higher-level functions such as `[plot](plot.default)`, `<hist>`, `<image>`, ..., instead and refer to their help pages for explanation of the arguments.
A side-effect of the call is to set up the `usr`, `xaxp` and `yaxp` [graphical parameters](par). (It is for the latter two that `lab` is used.)
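A sketch of setting up a reversed y axis together with a fixed aspect ratio:

```r
plot.new()
plot.window(xlim = c(0, 10), ylim = c(10, 0), asp = 1)  # c(hi, lo) reverses the y axis
axis(1); axis(2); box()
points(1:9, 1:9)  # runs from top-left to bottom-right on screen
```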
### See Also
`[xy.coords](../../grdevices/html/xy.coords)`, `<plot.xy>`, `<plot.default>`.
`<par>` for the graphical parameters mentioned.
### Examples
```
##--- An example for the use of 'asp' :
require(stats) # normally loaded
loc <- cmdscale(eurodist)
rx <- range(x <- loc[,1])
ry <- range(y <- -loc[,2])
plot(x, y, type = "n", asp = 1, xlab = "", ylab = "")
abline(h = pretty(rx, 10), v = pretty(ry, 10), col = "lightgray")
text(x, y, labels(eurodist), cex = 0.8)
```
`axTicks` Compute Axis Tickmark Locations
------------------------------------------
### Description
Compute pretty tickmark locations, the same way as **R** does internally. This is only non-trivial when **log** coordinates are active. By default, gives the `at` values which `<axis>(side)` would use.
### Usage
```
axTicks(side, axp = NULL, usr = NULL, log = NULL, nintLog = NULL)
```
### Arguments
| | |
| --- | --- |
| `side` | integer in 1:4, as for `<axis>`. |
| `axp` | numeric vector of length three, defaulting to `<par>("xaxp")` or `<par>("yaxp")` depending on the `side` argument (`par("xaxp")` if `side` is 1 or 3, `par("yaxp")` if side is 2 or 4). |
| `usr` | numeric vector of length two giving user coordinate limits, defaulting to the relevant portion of `<par>("usr")` (`par("usr")[1:2]` or `par("usr")[3:4]` for `side` in (1,3) or (2,4) respectively). |
| `log` | logical indicating if log coordinates are active; defaults to `<par>("xlog")` or `<par>("ylog")` depending on `side`. |
| `nintLog` | (only used when `log` is true): approximate (lower bound for the) number of tick intervals; defaults to `<par>("lab")[j]` where `j` is 1 or 2 depending on `side`. Set this to `Inf` if you want the same behavior as in earlier **R** versions (than 2.14.x). |
### Details
The `axp`, `usr`, and `log` arguments must be consistent as their default values (the `par(..)` results) are. If you specify all three (as non-NULL), the graphics environment is not used at all. Note that the meaning of `axp` differs significantly when `log` is `TRUE`; see the documentation on `<par>(xaxp = .)`.
`axTicks()` may be seen as an **R** implementation of the C function `CreateAtVector()` in ‘..../src/main/plot.c’ which is called by `<axis>(side, *)` when no argument `at` is specified or directly by `[axisTicks](../../grdevices/html/axisticks)()` (in package grDevices).
The delicate case, `log = TRUE`, now makes use of `[axisTicks](../../grdevices/html/axisticks)` unless `nintLog = Inf` which exists for back compatibility.
### Value
numeric vector of coordinate values at which axis tickmarks can be drawn. By default, when only the first argument is specified, these values should be identical to those that `<axis>(side)` would use or has used. Note that the values are decreasing when `usr` is reversed (the “reverse axis” case).
### See Also
`<axis>`, `<par>`. `[pretty](../../base/html/pretty)` uses the same algorithm (but independently of the graphics environment) and has more options. However it is not available for `log = TRUE`.
`[axisTicks](../../grdevices/html/axisticks)()` (package grDevices).
### Examples
```
plot(1:7, 10*21:27)
axTicks(1)
axTicks(2)
stopifnot(identical(axTicks(1), axTicks(3)),
identical(axTicks(2), axTicks(4)))
## Show how axTicks() and axis() correspond :
op <- par(mfrow = c(3, 1))
for(x in 9999 * c(1, 2, 8)) {
plot(x, 9, log = "x")
cat(formatC(par("xaxp"), width = 5),";", T <- axTicks(1),"\n")
rug(T, col = adjustcolor("red", 0.5), lwd = 4)
}
par(op)
x <- 9.9*10^(-3:10)
plot(x, 1:14, log = "x")
axTicks(1) # now length 5, in R <= 2.13.x gave the following
axTicks(1, nintLog = Inf) # rather too many
## An example using axTicks() without reference to an existing plot
## (copying R's internal procedures for setting axis ranges etc.),
## You do need to supply _all_ of axp, usr, log, nintLog
## standard logarithmic y axis labels
ylims <- c(0.2, 88)
get_axp <- function(x) 10^c(ceiling(x[1]), floor(x[2]))
## mimic par("yaxs") == "i"
usr.i <- log10(ylims)
(aT.i <- axTicks(side = 2, usr = usr.i,
axp = c(get_axp(usr.i), n = 3), log = TRUE, nintLog = 5))
## mimic (default) par("yaxs") == "r"
usr.r <- extendrange(r = log10(ylims), f = 0.04)
(aT.r <- axTicks(side = 2, usr = usr.r,
axp = c(get_axp(usr.r), 3), log = TRUE, nintLog = 5))
## Prove that we got it right :
plot(0:1, ylims, log = "y", yaxs = "i")
stopifnot(all.equal(aT.i, axTicks(side = 2)))
plot(0:1, ylims, log = "y", yaxs = "r")
stopifnot(all.equal(aT.r, axTicks(side = 2)))
```
`bxp` Draw Box Plots from Summaries
------------------------------------
### Description
`bxp` draws box plots based on the given summaries in `z`. It is usually called from within `<boxplot>`, but can be invoked directly.
### Usage
```
bxp(z, notch = FALSE, width = NULL, varwidth = FALSE,
outline = TRUE, notch.frac = 0.5, log = "",
border = par("fg"), pars = NULL, frame.plot = axes,
horizontal = FALSE, ann = TRUE,
add = FALSE, at = NULL, show.names = NULL,
...)
```
### Arguments
| | |
| --- | --- |
| `z` | a list containing data summaries to be used in constructing the plots. These are usually the result of a call to `<boxplot>`, but can be generated in any fashion. |
| `notch` | if `notch` is `TRUE`, a notch is drawn in each side of the boxes. If the notches of two plots do not overlap then the medians are significantly different at the 5 percent level. |
| `width` | a vector giving the relative widths of the boxes making up the plot. |
| `varwidth` | if `varwidth` is `TRUE`, the boxes are drawn with widths proportional to the square-roots of the number of observations in the groups. |
| `outline` | if `outline` is not true, the outliers are not drawn. |
| `notch.frac` | numeric in (0,1). When `notch = TRUE`, the fraction of the box width that the notches should use. |
| `border` | character or numeric (vector), the color of the box borders. Is recycled for multiple boxes. Is used as default for the `boxcol`, `medcol`, `whiskcol`, `staplecol`, and `outcol` options (see below). |
| `log` | character, indicating if any axis should be drawn in logarithmic scale, as in `<plot.default>`. |
| `frame.plot` | logical, indicating if a ‘frame’ (`<box>`) should be drawn; defaults to `TRUE`, unless `axes = FALSE` is specified. |
| `horizontal` | logical indicating if the boxplots should be horizontal; default `FALSE` means vertical boxes. |
| `ann` | a logical value indicating whether the default annotation (title and x and y axis labels) should appear on the plot. |
| `add` | logical, if true *add* boxplot to current plot. |
| `at` | numeric vector giving the locations where the boxplots should be drawn, particularly when `add = TRUE`; defaults to `1:n` where `n` is the number of boxes. |
| `show.names` | Set to `TRUE` or `FALSE` to override the defaults on whether an x-axis label is printed for each group. |
| `pars, ...` | [graphical parameters](par) (etc) can be passed as arguments to this function, either as a list (`pars`) or normally (`...`); see the following. (Those in `...` take precedence over those in `pars`.) Currently, `yaxs` and `ylim` are used ‘along the boxplot’, i.e., vertically, when `horizontal` is false, and `xlim` horizontally. `xaxt`, `yaxt`, `las`, `cex.axis`, and `col.axis` are passed to `<axis>`, and `main`, `cex.main`, `col.main`, `sub`, `cex.sub`, `col.sub`, `xlab`, `ylab`, `cex.lab`, and `col.lab` are passed to `<title>`. In addition, `axes` is accepted (see `<plot.window>`), with default `TRUE`. The following arguments (or `pars` components) allow further customization of the boxplot graphics. Their defaults are typically determined from the non-prefixed version (e.g., `boxlty` from `lty`), either from the specified argument or `pars` component or the corresponding `<par>` one.
`boxwex`: a scale factor to be applied to all boxes. When there are only a few groups, the appearance of the plot can be improved by making the boxes narrower. The default depends on `at` and typically is *0.8*.
`staplewex`, `outwex`: staple and outlier line width expansion, proportional to box width; both default to 0.5.
`boxlty`, `boxlwd`, `boxcol`, `boxfill`: box outline type, width, color, and fill color (which currently defaults to `col` and will in future default to `par("bg")`).
`medlty`, `medlwd`, `medpch`, `medcex`, `medcol`, `medbg`: median line type, line width, point character, point size expansion, color, and background color. The default `medpch = NA` suppresses the point, and `medlty = "blank"` does so for the line. Note that `medlwd` defaults to *3x* the default `lwd`.
`whisklty`, `whisklwd`, `whiskcol`: whisker line type (default: `"dashed"`), width, and color.
`staplelty`, `staplelwd`, `staplecol`: staple (= end of whisker) line type, width, and color.
`outlty`, `outlwd`, `outpch`, `outcex`, `outcol`, `outbg`: outlier line type, line width, point character, point size expansion, color, and background color. The default `outlty = "blank"` suppresses the lines and `outpch = NA` suppresses points. |
### Value
An invisible vector, actually identical to the `at` argument, with the coordinates ("x" if horizontal is false, "y" otherwise) of box centers, useful for adding to the plot.
### Note
When `add = FALSE`, `xlim` now defaults to `xlim = range(at, *) + c(-0.5, 0.5)`. It will usually be a good idea to specify `xlim` if the "x" axis has a log scale or `width` is far from uniform.
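A sketch of `add` and `at` for placing two sets of boxes side by side (the data are invented):

```r
z1 <- split(rnorm(300), gl(3, 100))
z2 <- split(rnorm(300, mean = 1), gl(3, 100))
b1 <- boxplot(z1, plot = FALSE)
b2 <- boxplot(z2, plot = FALSE)
bxp(b1, at = 1:3 - 0.2, boxwex = 0.3, xlim = c(0.5, 3.5))
bxp(b2, at = 1:3 + 0.2, boxwex = 0.3, boxfill = "lightblue", add = TRUE)
```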
### Author(s)
The R Core development team and Arni Magnusson (then at U Washington) who has provided most changes for the box\*, med\*, whisk\*, staple\*, and out\* arguments.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### Examples
```
require(stats)
set.seed(753)
(bx.p <- boxplot(split(rt(100, 4), gl(5, 20))))
op <- par(mfrow = c(2, 2))
bxp(bx.p, xaxt = "n")
bxp(bx.p, notch = TRUE, axes = FALSE, pch = 4, boxfill = 1:5)
bxp(bx.p, notch = TRUE, boxfill = "lightblue", frame.plot = FALSE,
outline = FALSE, main = "bxp(*, frame.plot= FALSE, outline= FALSE)")
bxp(bx.p, notch = TRUE, boxfill = "lightblue", border = 2:6,
ylim = c(-4,4), pch = 22, bg = "green", log = "x",
main = "... log = 'x', ylim = *")
par(op)
op <- par(mfrow = c(1, 2))
## single group -- no label
boxplot (weight ~ group, data = PlantGrowth, subset = group == "ctrl")
## with label
bx <- boxplot(weight ~ group, data = PlantGrowth,
subset = group == "ctrl", plot = FALSE)
bxp(bx, show.names=TRUE)
par(op)
z <- split(rnorm(1000), rpois(1000, 2.2))
boxplot(z, whisklty = 3, main = "boxplot(z, whisklty = 3)")
## Colour support similar to plot.default:
op <- par(mfrow = 1:2, bg = "light gray", fg = "midnight blue")
boxplot(z, col.axis = "skyblue3", main = "boxplot(*, col.axis=..,main=..)")
plot(z[[1]], col.axis = "skyblue3", main = "plot(*, col.axis=..,main=..)")
mtext("par(bg=\"light gray\", fg=\"midnight blue\")",
outer = TRUE, line = -1.2)
par(op)
## Mimic S-Plus:
splus <- list(boxwex = 0.4, staplewex = 1, outwex = 1, boxfill = "grey40",
medlwd = 3, medcol = "white", whisklty = 3, outlty = 1, outpch = NA)
boxplot(z, pars = splus)
## Recycled and "sweeping" parameters
op <- par(mfrow = c(1,2))
boxplot(z, border = 1:5, lty = 3, medlty = 1, medlwd = 2.5)
boxplot(z, boxfill = 1:3, pch = 1:5, lwd = 1.5, medcol = "white")
par(op)
## too many possibilities
boxplot(z, boxfill = "light gray", outpch = 21:25, outlty = 2,
bg = "pink", lwd = 2,
medcol = "dark blue", medcex = 2, medpch = 20)
```
`rug` Add a Rug to a Plot
--------------------------
### Description
Adds a *rug* representation (1-d plot) of the data to the plot.
### Usage
```
rug(x, ticksize = 0.03, side = 1, lwd = 0.5, col = par("fg"),
quiet = getOption("warn") < 0, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | A numeric vector |
| `ticksize` | The length of the ticks making up the ‘rug’. Positive lengths give inwards ticks. |
| `side` | On which side of the plot box the rug will be plotted. Normally 1 (bottom) or 3 (top). |
| `lwd` | The line width of the ticks. Some devices will round the default width up to `1`. |
| `col` | The colour the ticks are plotted in. |
| `quiet` | logical indicating if there should be a warning about clipped values. |
| `...` | further arguments, passed to `<axis>`, such as `line` or `pos` for specifying the location of the rug. |
### Details
Because of the way `rug` is implemented, only values of `x` that fall within the plot region are included. There will be a warning if any finite values are omitted, but non-finite values are omitted silently.
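A minimal sketch of the clipping behaviour (the values are chosen arbitrarily):

```
plot(1:10, xlim = c(2, 8))
rug(c(1, 5, 9), quiet = TRUE) # 1 and 9 fall outside the region: dropped silently
rug(c(3, 5, 7), side = 3) # all inside: drawn along the top, no warning
```

With the default `quiet = getOption("warn") < 0`, the first call would instead emit a warning about the two clipped values.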
### References
Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole.
### See Also
`[jitter](../../base/html/jitter)` which you may want for ties in `x`.
### Examples
```
require(stats) # both 'density' and its default method
with(faithful, {
plot(density(eruptions, bw = 0.15))
rug(eruptions)
rug(jitter(eruptions, amount = 0.01), side = 3, col = "light blue")
})
```
`boxplot` Box Plots
--------------------
### Description
Produce box-and-whisker plot(s) of the given (grouped) values.
### Usage
```
boxplot(x, ...)
## S3 method for class 'formula'
boxplot(formula, data = NULL, ..., subset, na.action = NULL,
xlab = mklab(y_var = horizontal),
ylab = mklab(y_var = !horizontal),
add = FALSE, ann = !add, horizontal = FALSE,
drop = FALSE, sep = ".", lex.order = FALSE)
## Default S3 method:
boxplot(x, ..., range = 1.5, width = NULL, varwidth = FALSE,
notch = FALSE, outline = TRUE, names, plot = TRUE,
border = par("fg"), col = "lightgray", log = "",
pars = list(boxwex = 0.8, staplewex = 0.5, outwex = 0.5),
ann = !add, horizontal = FALSE, add = FALSE, at = NULL)
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula, such as `y ~ grp`, where `y` is a numeric vector of data values to be split into groups according to the grouping variable `grp` (usually a factor). Note that `~ g1 + g2` is equivalent to `g1:g2`. |
| `data` | a data.frame (or list) from which the variables in `formula` should be taken. |
| `subset` | an optional vector specifying a subset of observations to be used for plotting. |
| `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is to ignore missing values in either the response or the group. |
| `xlab, ylab` | x- and y-axis annotation, since **R** 3.6.0 with a non-empty default. Can be suppressed by `ann=FALSE`. |
| `ann` | `[logical](../../base/html/logical)` indicating if axes should be annotated (by `xlab` and `ylab`). |
| `drop, sep, lex.order` | passed to `[split.default](../../base/html/split)`, see there. |
| `x` | for specifying data from which the boxplots are to be produced. Either a numeric vector, or a single list containing such vectors. Additional unnamed arguments specify further data as separate vectors (each corresponding to a component boxplot). `[NA](../../base/html/na)`s are allowed in the data. |
| `...` | For the `formula` method, named arguments to be passed to the default method. For the default method, unnamed arguments are additional data vectors (unless `x` is a list when they are ignored), and named arguments are arguments and [graphical parameters](par) to be passed to `<bxp>` in addition to the ones given by argument `pars` (and override those in `pars`). Note that `bxp` may or may not make use of graphical parameters it is passed: see its documentation. |
| `range` | this determines how far the plot whiskers extend out from the box. If `range` is positive, the whiskers extend to the most extreme data point which is no more than `range` times the interquartile range from the box. A value of zero causes the whiskers to extend to the data extremes. |
| `width` | a vector giving the relative widths of the boxes making up the plot. |
| `varwidth` | if `varwidth` is `TRUE`, the boxes are drawn with widths proportional to the square-roots of the number of observations in the groups. |
| `notch` | if `notch` is `TRUE`, a notch is drawn in each side of the boxes. If the notches of two plots do not overlap this is ‘strong evidence’ that the two medians differ (Chambers *et al*, 1983, p. 62). See `[boxplot.stats](../../grdevices/html/boxplot.stats)` for the calculations used. |
| `outline` | if `outline` is not true, the outliers are not drawn (as points whereas S+ uses lines). |
| `names` | group labels which will be printed under each boxplot. Can be a character vector or an [expression](../../base/html/expression) (see [plotmath](../../grdevices/html/plotmath)). |
| `boxwex` | a scale factor to be applied to all boxes. When there are only a few groups, the appearance of the plot can be improved by making the boxes narrower. |
| `staplewex` | staple line width expansion, proportional to box width. |
| `outwex` | outlier line width expansion, proportional to box width. |
| `plot` | if `TRUE` (the default) then a boxplot is produced. If not, the summaries which the boxplots are based on are returned. |
| `border` | an optional vector of colors for the outlines of the boxplots. The values in `border` are recycled if the length of `border` is less than the number of plots. |
| `col` | if `col` is non-null it is assumed to contain colors to be used to colour the bodies of the box plots. By default they are in the background colour. |
| `log` | character indicating if x or y or both coordinates should be plotted in log scale. |
| `pars` | a list of (potentially many) more graphical parameters, e.g., `boxwex` or `outpch`; these are passed to `<bxp>` (if `plot` is true); for details, see there. |
| `horizontal` | logical indicating if the boxplots should be horizontal; default `FALSE` means vertical boxes. |
| `add` | logical, if true *add* boxplot to current plot. |
| `at` | numeric vector giving the locations where the boxplots should be drawn, particularly when `add = TRUE`; defaults to `1:n` where `n` is the number of boxes. |
### Details
The generic function `boxplot` currently has a default method (`boxplot.default`) and a formula interface (`boxplot.formula`).
If multiple groups are supplied either as multiple arguments or via a formula, parallel boxplots will be plotted, in the order of the arguments or the order of the levels of the factor (see `[factor](../../base/html/factor)`).
Missing values are ignored when forming boxplots.
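The level-order rule can be checked without plotting at all, since `plot = FALSE` returns the summaries (toy data, invented for illustration):

```
f <- factor(rep(c("low", "high"), each = 5), levels = c("low", "high"))
y <- c(1:5, 11:15)
b <- boxplot(y ~ f, plot = FALSE)
b$names # "low" "high": boxes follow the factor levels, not alphabetical order
b$stats[3, ] # the two group medians, 3 and 13
```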
### Value
List with the following components:
| | |
| --- | --- |
| `stats` | a matrix, each column contains the extreme of the lower whisker, the lower hinge, the median, the upper hinge and the extreme of the upper whisker for one group/plot. If all the inputs have the same class attribute, so will this component. |
| `n` | a vector with the number of (non-`[NA](../../base/html/na)`) observations in each group. |
| `conf` | a matrix where each column contains the lower and upper extremes of the notch. |
| `out` | the values of any data points which lie beyond the extremes of the whiskers. |
| `group` | a vector of the same length as `out` whose elements indicate to which group the outlier belongs. |
| `names` | a vector of names for the groups. |
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole.
Chambers, J. M., Cleveland, W. S., Kleiner, B. and Tukey, P. A. (1983). *Graphical Methods for Data Analysis*. Wadsworth & Brooks/Cole.
Murrell, P. (2005). *R Graphics*. Chapman & Hall/CRC Press.
See also `[boxplot.stats](../../grdevices/html/boxplot.stats)`.
### See Also
`[boxplot.stats](../../grdevices/html/boxplot.stats)` which does the computation, `<bxp>` for the plotting and more examples; and `<stripchart>` for an alternative (with small data sets).
### Examples
```
## boxplot on a formula:
boxplot(count ~ spray, data = InsectSprays, col = "lightgray")
# *add* notches (somewhat funny here <--> warning "notches .. outside hinges"):
boxplot(count ~ spray, data = InsectSprays,
notch = TRUE, add = TRUE, col = "blue")
boxplot(decrease ~ treatment, data = OrchardSprays, col = "bisque",
log = "y")
## horizontal=TRUE, switching y <--> x :
boxplot(decrease ~ treatment, data = OrchardSprays, col = "bisque",
log = "x", horizontal=TRUE)
rb <- boxplot(decrease ~ treatment, data = OrchardSprays, col = "bisque")
title("Comparing boxplot()s and non-robust mean +/- SD")
mn.t <- tapply(OrchardSprays$decrease, OrchardSprays$treatment, mean)
sd.t <- tapply(OrchardSprays$decrease, OrchardSprays$treatment, sd)
xi <- 0.3 + seq(rb$n)
points(xi, mn.t, col = "orange", pch = 18)
arrows(xi, mn.t - sd.t, xi, mn.t + sd.t,
code = 3, col = "pink", angle = 75, length = .1)
## boxplot on a matrix:
mat <- cbind(Uni05 = (1:100)/21, Norm = rnorm(100),
`5T` = rt(100, df = 5), Gam2 = rgamma(100, shape = 2))
boxplot(mat) # directly, calling boxplot.matrix()
## boxplot on a data frame:
df. <- as.data.frame(mat)
par(las = 1) # all axis labels horizontal
boxplot(df., main = "boxplot(*, horizontal = TRUE)", horizontal = TRUE)
## Using 'at = ' and adding boxplots -- example idea by Roger Bivand :
boxplot(len ~ dose, data = ToothGrowth,
boxwex = 0.25, at = 1:3 - 0.2,
subset = supp == "VC", col = "yellow",
main = "Guinea Pigs' Tooth Growth",
xlab = "Vitamin C dose mg",
ylab = "tooth length",
xlim = c(0.5, 3.5), ylim = c(0, 35), yaxs = "i")
boxplot(len ~ dose, data = ToothGrowth, add = TRUE,
boxwex = 0.25, at = 1:3 + 0.2,
subset = supp == "OJ", col = "orange")
legend(2, 9, c("Ascorbic acid", "Orange juice"),
fill = c("yellow", "orange"))
## With less effort (slightly different) using factor *interaction*:
boxplot(len ~ dose:supp, data = ToothGrowth,
boxwex = 0.5, col = c("orange", "yellow"),
main = "Guinea Pigs' Tooth Growth",
xlab = "Vitamin C dose mg", ylab = "tooth length",
sep = ":", lex.order = TRUE, ylim = c(0, 35), yaxs = "i")
## more examples in help(bxp)
```
`plot.factor` Plotting Factor Variables
----------------------------------------
### Description
This function implements a scatterplot method for `[factor](../../base/html/factor)` arguments of the *generic* `[plot](plot.default)` function.
If `y` is missing, a `<barplot>` is produced. For numeric `y` a `<boxplot>` is used, and for a factor `y` a `<spineplot>` is shown. For any other type of `y` the next `plot` method is called, normally `<plot.default>`.
### Usage
```
## S3 method for class 'factor'
plot(x, y, legend.text = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric or factor. `y` may be missing. |
| `legend.text` | character vector for annotation of y axis in the case of a factor `y`: defaults to `levels(y)`. This sets the `yaxlabels` argument of `<spineplot>`. |
| `...` | Further arguments to `<barplot>`, `<boxplot>`, `<spineplot>` or `[plot](plot.default)` as appropriate. All of these accept [graphical parameters](par) (see `<par>`) and annotation arguments passed to `<title>` and `axes = FALSE`. None accept `type`. |
### See Also
`<plot.default>`, `<plot.formula>`, `<barplot>`, `<boxplot>`, `<spineplot>`.
### Examples
```
require(grDevices)
plot(weight ~ group, data = PlantGrowth) # numeric vector ~ factor
plot(cut(weight, 2) ~ group, data = PlantGrowth) # factor ~ factor
## passing "..." to spineplot() eventually:
plot(cut(weight, 3) ~ group, data = PlantGrowth,
col = hcl(c(0, 120, 240), 50, 70))
plot(PlantGrowth$group, axes = FALSE, main = "no axes") # extremely silly
```
`rect` Draw One or More Rectangles
-----------------------------------
### Description
`rect` draws a rectangle (or sequence of rectangles) with the given coordinates, fill and border colors.
### Usage
```
rect(xleft, ybottom, xright, ytop, density = NULL, angle = 45,
col = NA, border = NULL, lty = par("lty"), lwd = par("lwd"),
...)
```
### Arguments
| | |
| --- | --- |
| `xleft` | a vector (or scalar) of left x positions. |
| `ybottom` | a vector (or scalar) of bottom y positions. |
| `xright` | a vector (or scalar) of right x positions. |
| `ytop` | a vector (or scalar) of top y positions. |
| `density` | the density of shading lines, in lines per inch. The default value of `NULL` means that no shading lines are drawn. A zero value of `density` means no shading lines whereas negative values (and `NA`) suppress shading (and so allow color filling). |
| `angle` | angle (in degrees) of the shading lines. |
| `col` | color(s) to fill or shade the rectangle(s) with. The default `NA` (or also `NULL`) means do not fill, i.e., draw transparent rectangles, unless `density` is specified. |
| `border` | color for rectangle border(s). The default means `par("fg")`. Use `border = NA` to omit borders. If there are shading lines, `border = TRUE` means use the same colour for the border as for the shading lines. |
| `lty` | line type for borders and shading; defaults to `"solid"`. |
| `lwd` | line width for borders and shading. Note that the use of `lwd = 0` (as in the examples) is device-dependent. |
| `...` | [graphical parameters](par) such as `xpd`, `lend`, `ljoin` and `lmitre` can be given as arguments. |
### Details
The positions supplied, i.e., `xleft, ...`, are relative to the current plotting region. If the x-axis goes from 100 to 200 then `xleft` must be larger than 100 and `xright` must be less than 200. The position vectors will be recycled to the length of the longest.
It is a graphics primitive used in `<hist>`, `<barplot>`, `<legend>`, etc.
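Recycling means one call can draw a whole row of rectangles; a minimal sketch (coordinates invented for illustration):

```
plot(c(0, 9), c(0, 6), type = "n", xlab = "", ylab = "")
## 'ybottom' (a scalar) is reused for all eight boxes;
## 'ytop' and 'col' alternate by recycling:
rect(1:8, 0, 1:8 + 0.8, c(2, 5), col = c("grey80", "grey40"))
```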
### See Also
`<box>` for the standard box around the plot; `<polygon>` and `<segments>` for flexible line drawing.
`<par>` for how to specify colors.
### Examples
```
require(grDevices)
## set up the plot region:
op <- par(bg = "thistle")
plot(c(100, 250), c(300, 450), type = "n", xlab = "", ylab = "",
main = "2 x 11 rectangles; 'rect(100+i,300+i, 150+i,380+i)'")
i <- 4*(0:10)
## draw rectangles with bottom left (100, 300)+i
## and top right (150, 380)+i
rect(100+i, 300+i, 150+i, 380+i, col = rainbow(11, start = 0.7, end = 0.1))
rect(240-i, 320+i, 250-i, 410+i, col = heat.colors(11), lwd = i/5)
## Background alternating ( transparent / "bg" ) :
j <- 10*(0:5)
rect(125+j, 360+j, 141+j, 405+j/2, col = c(NA,0),
border = "gold", lwd = 2)
rect(125+j, 296+j/2, 141+j, 331+j/5, col = c(NA,"midnightblue"))
mtext("+ 2 x 6 rect(*, col = c(NA,0)) and col = c(NA,\"m..blue\")")
## an example showing colouring and shading
plot(c(100, 200), c(300, 450), type= "n", xlab = "", ylab = "")
rect(100, 300, 125, 350) # transparent
rect(100, 400, 125, 450, col = "green", border = "blue") # coloured
rect(115, 375, 150, 425, col = par("bg"), border = "transparent")
rect(150, 300, 175, 350, density = 10, border = "red")
rect(150, 400, 175, 450, density = 30, col = "blue",
angle = -30, border = "transparent")
legend(180, 450, legend = 1:4, fill = c(NA, "green", par("fg"), "blue"),
density = c(NA, NA, 10, 30), angle = c(NA, NA, 30, -30))
par(op)
```
`curve` Draw Function Plots
----------------------------
### Description
Draws a curve corresponding to a function over the interval `[from, to]`. `curve` can plot also an expression in the variable `xname`, default x.
### Usage
```
curve(expr, from = NULL, to = NULL, n = 101, add = FALSE,
type = "l", xname = "x", xlab = xname, ylab = NULL,
log = NULL, xlim = NULL, ...)
## S3 method for class 'function'
plot(x, y = 0, to = 1, from = y, xlim = NULL, ylab = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `expr` | The name of a function, or a [call](../../base/html/call) or an [expression](../../base/html/expression) written as a function of `x` which will evaluate to an object of the same length as `x`. |
| `x` | a ‘vectorizing’ numeric **R** function. |
| `y` | alias for `from` for compatibility with `plot` |
| `from, to` | the range over which the function will be plotted. |
| `n` | integer; the number of x values at which to evaluate. |
| `add` | logical; if `TRUE` add to an already existing plot; if `NA` start a new plot taking the defaults for the limits and log-scaling of the x-axis from the previous plot. Taken as `FALSE` (with a warning if a different value is supplied) if no graphics device is open. |
| `xlim` | `NULL` or a numeric vector of length 2; if non-`NULL` it provides the defaults for `c(from, to)` and, unless `add = TRUE`, selects the x-limits of the plot – see `<plot.window>`. |
| `type` | plot type: see `<plot.default>`. |
| `xname` | character string giving the name to be used for the x axis. |
| `xlab, ylab, log, ...` | labels and [graphical parameters](par) can also be specified as arguments. See ‘Details’ for the interpretation of the default for `log`. For the `"function"` method of `plot`, `...` can include any of the other arguments of `curve`, except `expr`. |
### Details
The function or expression `expr` (for `curve`) or function `x` (for `plot`) is evaluated at `n` points equally spaced over the range `[from, to]`. The points determined in this way are then plotted.
If either `from` or `to` is `NULL`, it defaults to the corresponding element of `xlim` if that is not `NULL`.
What happens when neither `from`/`to` nor `xlim` specifies both x-limits is a complex story. For `plot(<function>)` and for `curve(add = FALSE)` the defaults are *(0, 1)*. For `curve(add = NA)` and `curve(add = TRUE)` the defaults are taken from the x-limits used for the previous plot. (This differs from versions of **R** prior to 2.14.0.)
The value of `log` is used both to specify the plot axes (unless `add = TRUE`) and how ‘equally spaced’ is interpreted: if the x component indicates log-scaling, the points at which the expression or function is plotted are equally spaced on log scale.
The default value of `log` is taken from the current plot when `add = TRUE`, whereas if `add = NA` the x component is taken from the existing plot (if any) and the y component defaults to linear. For `add = FALSE` the default is `""`.
This used to be a quick hack which now seems to serve a useful purpose, but can give bad results for functions which are not smooth.
For expensive-to-compute `expr`essions, you should use smarter tools.
The way `curve` handles `expr` has caused confusion. It first looks to see if `expr` is a [name](../../base/html/name) (also known as a symbol), in which case it is taken to be the name of a function, and `expr` is replaced by a call to `expr` with a single argument with name given by `xname`. Otherwise it checks that `expr` is either a [call](../../base/html/call) or an [expression](../../base/html/expression), and that it contains a reference to the variable given by `xname` (using `[all.vars](../../base/html/allnames)`): anything else is an error. Then `expr` is evaluated in an environment which supplies a vector of name given by `xname` of length `n`, and should evaluate to an object of length `n`. Note that this means that `curve(x, ...)` is taken as a request to plot a function named `x` (and it is used as such in the `function` method for `plot`).
The `plot` method can be called directly as `plot.function`.
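A minimal sketch of the name-versus-call distinction, also capturing the invisible result:

```
curve(sin, 0, 2*pi) # a bare name: plotted as sin(x)
curve(sin(x^2), 0, pi, add = TRUE, col = "red") # a call mentioning 'x'
xy <- curve(x/2, 0, 1, n = 11) # the n evaluated points are returned invisibly
```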
### Value
A list with components `x` and `y` of the points that were drawn is returned invisibly.
### Warning
For historical reasons, `add` is allowed as an argument to the `"function"` method of `plot`, but its behaviour may surprise you. It is recommended to use `add` only with `curve`.
### See Also
`[splinefun](../../stats/html/splinefun)` for spline interpolation, `<lines>`.
### Examples
```
plot(qnorm) # default range c(0, 1) is appropriate here,
# but end values are -/+Inf and so are omitted.
plot(qlogis, main = "The Inverse Logit : qlogis()")
abline(h = 0, v = 0:2/2, lty = 3, col = "gray")
curve(sin, -2*pi, 2*pi, xname = "t")
curve(tan, xname = "t", add = NA,
main = "curve(tan) --> same x-scale as previous plot")
op <- par(mfrow = c(2, 2))
curve(x^3 - 3*x, -2, 2)
curve(x^2 - 2, add = TRUE, col = "violet")
## simple and advanced versions, quite similar:
plot(cos, -pi, 3*pi)
curve(cos, xlim = c(-pi, 3*pi), n = 1001, col = "blue", add = TRUE)
chippy <- function(x) sin(cos(x)*exp(-x/2))
curve(chippy, -8, 7, n = 2001)
plot (chippy, -8, -5)
for(ll in c("", "x", "y", "xy"))
curve(log(1+x), 1, 100, log = ll, sub = paste0("log = '", ll, "'"))
par(op)
```
`graphics-package` The R Graphics Package
------------------------------------------
### Description
R functions for base graphics
### Details
This package contains functions for ‘base’ graphics. Base graphics are traditional S-like graphics, as opposed to the more recent [grid](../../grid/html/grid-package) graphics.
For a complete list of functions with individual help pages, use `library(help = "graphics")`.
### Author(s)
R Core Team and contributors worldwide
Maintainer: R Core Team [[email protected]](mailto:[email protected])
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
`convertXY` Convert between Graphics Coordinate Systems
--------------------------------------------------------
### Description
Convert between graphics coordinate systems.
### Usage
```
grconvertX(x, from = "user", to = "user")
grconvertY(y, from = "user", to = "user")
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric vector of coordinates. |
| `from, to` | character strings giving the coordinate systems to convert between. |
### Details
The coordinate systems are
`"user"`
user coordinates.
`"inches"`
inches.
`"device"`
the device coordinate system.
`"ndc"`
normalized device coordinates.
`"nfc"`
normalized figure coordinates.
`"npc"`
normalized plot coordinates.
`"nic"`
normalized inner region coordinates. (The ‘inner region’ is that inside the outer margins.)
`"lines"`
lines of margin (based on `mex`).
`"chars"`
lines of text (based on `cex`).
(These names can be partially matched.) For the ‘normalized’ coordinate systems the lower left has value 0 and the top right value 1.
Device coordinates are those in which the device works: they are usually in pixels where that makes sense and in big points (1/72 inch) otherwise (e.g., `[pdf](../../grdevices/html/pdf)` and `[postscript](../../grdevices/html/postscript)`).
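A minimal sketch: conversions round-trip, and the normalized systems give positions anchored to the device rather than to the user coordinates:

```
plot(1:10)
x90 <- grconvertX(0.9, from = "ndc", to = "user")
abline(v = x90, lty = 3) # vertical line at 90% of the device width
grconvertX(grconvertX(4, "user", "ndc"), "ndc", "user") # round-trips to 4
```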
### Value
A numeric vector of the same length as the input.
### Examples
```
op <- par(omd=c(0.1, 0.9, 0.1, 0.9), mfrow = c(1, 2))
plot(1:4)
for(tp in c("in", "dev", "ndc", "nfc", "npc", "nic", "lines", "chars"))
print(grconvertX(c(1.0, 4.0), "user", tp))
par(op)
```
`clip` Set Clipping Region
---------------------------
### Description
Set clipping region in user coordinates
### Usage
```
clip(x1, x2, y1, y2)
```
### Arguments
| | |
| --- | --- |
| `x1, x2, y1, y2` | user coordinates of clipping rectangle |
### Details
How the clipping rectangle is set depends on the setting of `<par>("xpd")`: this function changes the current setting until the next high-level plotting command resets it.
Clipping of lines, rectangles and polygons is done in the graphics engine, but clipping of text is if possible done in the device, so the effect of clipping text is device-dependent (and may result in text not wholly within the clipping region being omitted entirely).
Exactly when the clipping region will be reset can be hard to predict. `[plot.new](frame)` always resets it. Functions such as `<lines>` and `<text>` only reset it if `<par>("xpd")` has been changed. However, functions such as `<box>`, `<mtext>`, `<title>` and `[plot.dendrogram](../../stats/html/dendrogram)` can manipulate the `xpd` setting.
### See Also
`<par>`
### Examples
```
x <- rnorm(1000)
hist(x, xlim = c(-4,4))
usr <- par("usr")
clip(usr[1], -2, usr[3], usr[4])
hist(x, col = 'red', add = TRUE)
clip(2, usr[2], usr[3], usr[4])
hist(x, col = 'blue', add = TRUE)
do.call("clip", as.list(usr)) # reset to plot region
```
`mosaicplot` Mosaic Plots
--------------------------
### Description
Plots a mosaic on the current graphics device.
### Usage
```
mosaicplot(x, ...)
## Default S3 method:
mosaicplot(x, main = deparse1(substitute(x)),
sub = NULL, xlab = NULL, ylab = NULL,
sort = NULL, off = NULL, dir = NULL,
color = NULL, shade = FALSE, margin = NULL,
cex.axis = 0.66, las = par("las"), border = NULL,
type = c("pearson", "deviance", "FT"), ...)
## S3 method for class 'formula'
mosaicplot(formula, data = NULL, ...,
main = deparse1(substitute(data)), subset,
na.action = stats::na.omit)
```
### Arguments
| | |
| --- | --- |
| `x` | a contingency table in array form, with optional category labels specified in the `dimnames(x)` attribute. The table is best created by the `table()` command. |
| `main` | character string for the mosaic title. |
| `sub` | character string for the mosaic sub-title (at bottom). |
| `xlab, ylab` | x- and y-axis labels used for the plot; by default, the first and second element of `names(dimnames(X))` (i.e., the name of the first and second variable in `X`). |
| `sort` | vector ordering of the variables, containing a permutation of the integers `1:length(dim(x))` (the default). |
| `off` | vector of offsets to determine percentage spacing at each level of the mosaic (appropriate values are between 0 and 20, and the default is 20 times the number of splits for 2-dimensional tables, and 10 otherwise). Rescaled to maximally 50, and recycled if necessary. |
| `dir` | vector of split directions (`"v"` for vertical and `"h"` for horizontal) for each level of the mosaic, one direction for each dimension of the contingency table. The default consists of alternating directions, beginning with a vertical split. |
| `color` | logical or (recycling) vector of colors for color shading, used only when `shade` is `FALSE`, or `NULL` (default). By default, grey boxes are drawn. `color = TRUE` uses a gamma-corrected grey palette. `color = FALSE` gives empty boxes with no shading. |
| `shade` | a logical indicating whether to produce extended mosaic plots, or a numeric vector of at most 5 distinct positive numbers giving the absolute values of the cut points for the residuals. By default, `shade` is `FALSE`, and simple mosaics are created. Using `shade = TRUE` cuts absolute values at 2 and 4. |
| `margin` | a list of vectors with the marginal totals to be fit in the log-linear model. By default, an independence model is fitted. See `[loglin](../../stats/html/loglin)` for further information. |
| `cex.axis` | The magnification to be used for axis annotation, as a multiple of `par("cex")`. |
| `las` | numeric; the style of axis labels, see `<par>`. |
| `border` | colour of borders of cells: see `<polygon>`. |
| `type` | a character string indicating the type of residual to be represented. Must be one of `"pearson"` (giving components of Pearson's *chi-squared*), `"deviance"` (giving components of the likelihood ratio *chi-squared*), or `"FT"` for the Freeman-Tukey residuals. The value of this argument can be abbreviated. |
| `formula` | a formula, such as `y ~ x`. |
| `data` | a data frame (or list), or a contingency table from which the variables in `formula` should be taken. |
| `...` | further arguments to be passed to or from methods. |
| `subset` | an optional vector specifying a subset of observations in the data frame to be used for plotting. |
| `na.action` | a function which indicates what should happen when the data contains variables to be cross-tabulated, and these variables contain `NA`s. The default is to omit cases which have an `NA` in any variable. Since the tabulation will omit all cases containing missing values, this will only be useful if the `na.action` function replaces missing values. |
### Details
This is a generic function. It currently has a default method (`mosaicplot.default`) and a formula interface (`mosaicplot.formula`).
Extended mosaic displays visualize standardized residuals of a loglinear model for the table by color and outline of the mosaic's tiles. (Standardized residuals are often referred to a standard normal distribution.) Cells representing negative residuals are drawn in shades of red and with broken borders; positive ones are drawn in blue with solid borders.
For the formula method, if `data` is an object inheriting from class `"table"` or class `"ftable"` or an array with more than 2 dimensions, it is taken as a contingency table, and hence all entries should be non-negative. In this case the left-hand side of `formula` should be empty and the variables on the right-hand side should be taken from the names of the dimnames attribute of the contingency table. A marginal table of these variables is computed, and a mosaic plot of that table is produced.
Otherwise, `data` should be a data frame or matrix, list or environment containing the variables to be cross-tabulated. In this case, after possibly selecting a subset of the data as specified by the `subset` argument, a contingency table is computed from the variables given in `formula`, and a mosaic is produced from this.
See Emerson (1998) for more information and a case study with television viewer data from Nielsen Media Research.
Missing values are not supported except via an `na.action` function when `data` contains variables to be cross-tabulated.
A more flexible and extensible implementation of mosaic plots written in the grid graphics system is provided in the function `[mosaic](../../vcd/html/mosaic)` in the contributed package [vcd](https://CRAN.R-project.org/package=vcd) (Meyer, Zeileis and Hornik, 2006).
### Author(s)
S-PLUS original by John Emerson [[email protected]](mailto:[email protected]). Originally modified and enhanced for **R** by Kurt Hornik.
### References
Hartigan, J.A., and Kleiner, B. (1984). A mosaic of television ratings. *The American Statistician*, **38**, 32–35. doi: [10.2307/2683556](https://doi.org/10.2307/2683556).
Emerson, J. W. (1998). Mosaic displays in S-PLUS: A general implementation and a case study. *Statistical Computing and Graphics Newsletter (ASA)*, **9**, 1, 17–23.
Friendly, M. (1994). Mosaic displays for multi-way contingency tables. *Journal of the American Statistical Association*, **89**, 190–200. doi: [10.2307/2291215](https://doi.org/10.2307/2291215).
Meyer, D., Zeileis, A., and Hornik, K. (2006) The strucplot Framework: Visualizing Multi-Way Contingency Tables with vcd. *Journal of Statistical Software*, **17(3)**, 1–48. doi: [10.18637/jss.v017.i03](https://doi.org/10.18637/jss.v017.i03).
### See Also
`<assocplot>`, `[loglin](../../stats/html/loglin)`.
### Examples
```
require(stats)
mosaicplot(Titanic, main = "Survival on the Titanic", color = TRUE)
## Formula interface for tabulated data:
mosaicplot(~ Sex + Age + Survived, data = Titanic, color = TRUE)
mosaicplot(HairEyeColor, shade = TRUE)
## Independence model of hair and eye color and sex. Indicates that
## there are more blue eyed blonde females than expected in the case
## of independence and too few brown eyed blonde females.
## The corresponding model is:
fm <- loglin(HairEyeColor, list(1, 2, 3))
pchisq(fm$pearson, fm$df, lower.tail = FALSE)
mosaicplot(HairEyeColor, shade = TRUE, margin = list(1:2, 3))
## Model of joint independence of sex from hair and eye color. Males
## are underrepresented among people with brown hair and eyes, and are
## overrepresented among people with brown hair and blue eyes.
## The corresponding model is:
fm <- loglin(HairEyeColor, list(1:2, 3))
pchisq(fm$pearson, fm$df, lower.tail = FALSE)
## Formula interface for raw data: visualize cross-tabulation of numbers
## of gears and carburettors in Motor Trend car data.
mosaicplot(~ gear + carb, data = mtcars, color = TRUE, las = 1)
# color recycling
mosaicplot(~ gear + carb, data = mtcars, color = 2:3, las = 1)
```
r None
`box` Draw a Box around a Plot
-------------------------------
### Description
This function draws a box around the current plot in the given color and line type. The `bty` parameter determines the type of box drawn. See `<par>` for details.
### Usage
```
box(which = "plot", lty = "solid", ...)
```
### Arguments
| | |
| --- | --- |
| `which` | character, one of `"plot"`, `"figure"`, `"inner"` and `"outer"`. |
| `lty` | line type of the box. |
| `...` | further [graphical parameters](par), such as `bty`, `col`, or `lwd`, see `<par>`. Note that `xpd` is not accepted as clipping is always to the device region. |
### Details
The choice of colour is complicated. If `col` was supplied and is not `NA`, it is used. Otherwise, if `fg` was supplied and is not `NA`, it is used. The final default is `par("col")`.
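The precedence described above can be seen directly; a small sketch (any base plot will do):

```
plot(1:10)
box(col = "red")                      # 'col' supplied: red box
box(which = "figure", fg = "blue")    # no 'col': 'fg' is used
box(which = "inner", lty = "dashed")  # neither given: falls back to par("col")
```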
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<rect>` for drawing of arbitrary rectangles.
### Examples
```
plot(1:7, abs(stats::rnorm(7)), type = "h", axes = FALSE)
axis(1, at = 1:7, labels = letters[1:7])
box(lty = '1373', col = 'red')
```
r None
`pairs` Scatterplot Matrices
-----------------------------
### Description
A matrix of scatterplots is produced.
### Usage
```
pairs(x, ...)
## S3 method for class 'formula'
pairs(formula, data = NULL, ..., subset,
na.action = stats::na.pass)
## Default S3 method:
pairs(x, labels, panel = points, ...,
horInd = 1:nc, verInd = 1:nc,
lower.panel = panel, upper.panel = panel,
diag.panel = NULL, text.panel = textPanel,
label.pos = 0.5 + has.diag/3, line.main = 3,
cex.labels = NULL, font.labels = 1,
row1attop = TRUE, gap = 1, log = "",
horOdd = !row1attop, verOdd = !row1attop)
```
### Arguments
| | |
| --- | --- |
| `x` | the coordinates of points given as numeric columns of a matrix or data frame. Logical and factor columns are converted to numeric in the same way that `[data.matrix](../../base/html/data.matrix)` does. |
| `formula` | a formula, such as `~ x + y + z`. Each term will give a separate variable in the pairs plot, so terms should be numeric vectors. (A response will be interpreted as another variable, but not treated specially, so it is confusing to use one.) |
| `data` | a data.frame (or list) from which the variables in `formula` should be taken. |
| `subset` | an optional vector specifying a subset of observations to be used for plotting. |
| `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is to pass missing values on to the panel functions, but `na.action = na.omit` will cause cases with missing values in any of the variables to be omitted entirely. |
| `labels` | the names of the variables. |
| `panel` | `function(x, y, ...)` which is used to plot the contents of each panel of the display. |
| `...` | arguments to be passed to or from methods. Also, [graphical parameters](par) can be given as can arguments to `plot` such as `main`. `par("oma")` will be set appropriately unless specified. |
| `horInd, verInd` | The (numerical) indices of the variables to be plotted on the horizontal and vertical axes respectively. |
| `lower.panel, upper.panel` | separate panel functions (or `NULL`) to be used below and above the diagonal respectively. |
| `diag.panel` | optional `function(x, ...)` to be applied on the diagonals. |
| `text.panel` | optional `function(x, y, labels, cex, font, ...)` to be applied on the diagonals. |
| `label.pos` | `y` position of labels in the text panel. |
| `line.main` | if `main` is specified, `line.main` gives the `line` argument to `<mtext>()` which draws the title. You may want to specify `oma` when changing `line.main`. |
| `cex.labels, font.labels` | graphics parameters for the text panel. |
| `row1attop` | logical. Should the layout be matrix-like with row 1 at the top, or graph-like with row 1 at the bottom? The latter (non default) leads to a basically symmetric scatterplot matrix. |
| `gap` | distance between subplots, in margin lines. |
| `log` | a character string indicating if logarithmic axes are to be used, see `<plot.default>` *or* a numeric vector of indices specifying the indices of those variables where logarithmic axes should be used for both x and y. `log = "xy"` specifies logarithmic axes for all variables. |
| `horOdd, verOdd` | `[logical](../../base/html/logical)` (or integer) determining how the horizontal and vertical axis labeling happens. If true, the axis labelling starts at the first (from top left) row or column, respectively. |
### Details
The *ij*th scatterplot contains `x[,i]` plotted against `x[,j]`. The scatterplot can be customised by setting panel functions to appear as something completely different. The off-diagonal panel functions are passed the appropriate columns of `x` as `x` and `y`: the diagonal panel function (if any) is passed a single column, and the `text.panel` function is passed a single `(x, y)` location and the column name. Setting some of these panel functions to `[NULL](../../base/html/null)` is equivalent to *not* drawing anything there.
The [graphical parameters](par) `pch` and `col` can be used to specify a vector of plotting symbols and colors to be used in the plots.
The [graphical parameter](par) `oma` will be set by `pairs.default` unless supplied as an argument.
A panel function should not attempt to start a new plot, but just plot within a given coordinate system: thus `plot` and `boxplot` are not panel functions.
By default, missing values are passed to the panel functions and will often be ignored within a panel. However, for the formula method and `na.action = na.omit`, all cases which contain a missing value for any of the variables are omitted completely (including when the scales are selected).
Arguments `horInd` and `verInd` were introduced in **R** 3.2.0. If given the same value they can be used to select or re-order variables: with different ranges of consecutive values they can be used to plot rectangular windows of a full pairs plot; in the latter case ‘diagonal’ refers to the diagonal of the full plot.
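A sketch of the two uses of `horInd` and `verInd` described above (using the built-in `iris` data):

```
## same values: select and re-order variables
pairs(iris[1:4], horInd = c(3, 4, 1, 2), verInd = c(3, 4, 1, 2))
## different consecutive ranges: a rectangular window of the full plot
pairs(iris[1:4], horInd = 1:2, verInd = 3:4)
```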
### Author(s)
Enhancements for **R** 1.0.0 contributed by Dr. Jens Oehlschlägel-Akiyoshi and R-core members.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### Examples
```
pairs(iris[1:4], main = "Anderson's Iris Data -- 3 species",
pch = 21, bg = c("red", "green3", "blue")[unclass(iris$Species)])
## formula method, "graph" layout (row 1 at bottom):
pairs(~ Fertility + Education + Catholic, data = swiss, row1attop=FALSE,
subset = Education < 20, main = "Swiss data, Education < 20")
pairs(USJudgeRatings, gap=1/10) # (gap: not wasting plotting area)
## show only lower triangle (and suppress labeling for whatever reason):
pairs(USJudgeRatings, text.panel = NULL, upper.panel = NULL)
## put histograms on the diagonal
panel.hist <- function(x, ...)
{
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col = "cyan", ...)
}
pairs(USJudgeRatings[1:5], panel = panel.smooth,
cex = 1.5, pch = 24, bg = "light blue", horOdd=TRUE,
diag.panel = panel.hist, cex.labels = 2, font.labels = 2)
## put (absolute) correlations on the upper panels,
## with size proportional to the correlations.
panel.cor <- function(x, y, digits = 2, prefix = "", cex.cor, ...)
{
usr <- par("usr"); on.exit(par(usr))
par(usr = c(0, 1, 0, 1))
r <- abs(cor(x, y))
txt <- format(c(r, 0.123456789), digits = digits)[1]
txt <- paste0(prefix, txt)
if(missing(cex.cor)) cex.cor <- 0.8/strwidth(txt)
text(0.5, 0.5, txt, cex = cex.cor * r)
}
pairs(USJudgeRatings, lower.panel = panel.smooth, upper.panel = panel.cor,
gap=0, row1attop=FALSE)
pairs(iris[-5], log = "xy") # plot all variables on log scale
pairs(iris, log = 1:4, # log the first four
main = "Lengths and Widths in [log]", line.main=1.5, oma=c(2,2,3,2))
```
r None
`plot.raster` Plotting Raster Images
-------------------------------------
### Description
This function implements a `[plot](plot.default)` method for raster images.
### Usage
```
## S3 method for class 'raster'
plot(x, y,
xlim = c(0, ncol(x)), ylim = c(0, nrow(x)),
xaxs = "i", yaxs = "i",
asp = 1, add = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | raster. `y` will be ignored. |
| `xlim, ylim` | Limits on the plot region (default from dimensions of the raster). |
| `xaxs, yaxs` | Axis interval calculation style (default means that raster fills plot region). |
| `asp` | Aspect ratio (default retains aspect ratio of the raster). |
| `add` | Logical indicating whether to simply add raster to an existing plot. |
| `...` | Further arguments to the `[rasterImage](rasterimage)` function. |
### See Also
`<plot.default>`, `[rasterImage](rasterimage)`.
### Examples
```
require(grDevices)
r <- as.raster(c(0.5, 1, 0.5))
plot(r)
# additional arguments to rasterImage()
plot(r, interpolate=FALSE)
# distort
plot(r, asp=NA)
# fill page
op <- par(mar=rep(0, 4))
plot(r, asp=NA)
par(op)
# normal annotations work
plot(r, asp=NA)
box()
title(main="This is my raster")
# add to existing plot
plot(1)
plot(r, add=TRUE)
```
r None
`screen` Creating and Controlling Multiple Screens on a Single Device
----------------------------------------------------------------------
### Description
`split.screen` defines a number of regions within the current device which can, to some extent, be treated as separate graphics devices. It is useful for generating multiple plots on a single device. Screens can themselves be split, allowing for quite complex arrangements of plots.
`screen` is used to select which screen to draw in.
`erase.screen` is used to clear a single screen, which it does by filling with the background colour.
`close.screen` removes the specified screen definition(s).
### Usage
```
split.screen(figs, screen, erase = TRUE)
screen(n = , new = TRUE)
erase.screen(n = )
close.screen(n, all.screens = FALSE)
```
### Arguments
| | |
| --- | --- |
| `figs` | a two-element vector describing the number of rows and the number of columns in a screen matrix *or* a matrix with 4 columns. If a matrix, then each row describes a screen with values for the left, right, bottom, and top of the screen (in that order) in NDC units, that is 0 at the lower left corner of the device surface, and 1 at the upper right corner. |
| `screen` | a number giving the screen to be split. It defaults to the current screen if there is one, otherwise the whole device region. |
| `erase` | logical: should the selected screen be cleared? |
| `n` | a number indicating which screen to prepare for drawing (`screen`), erase (`erase.screen`), or close (`close.screen`). (`close.screen` will accept a vector of screen numbers.) |
| `new` | logical value indicating whether the screen should be erased as part of the preparation for drawing in the screen. |
| `all.screens` | logical value indicating whether all of the screens should be closed. |
### Details
The first call to `split.screen` places **R** into split-screen mode. The other split-screen functions only work within this mode. While in this mode, certain other commands should be avoided (see the Warnings section below). Split-screen mode is exited by the command `close.screen(all = TRUE)`.
If the current screen is closed, `close.screen` sets the current screen to be the next larger screen number if there is one, otherwise to the first available screen.
### Value
`split.screen(*)` returns a vector of screen numbers for the newly-created screens. With no arguments, `split.screen()` returns a vector of valid screen numbers.
`screen(n)` invisibly returns `n`, the number of the selected screen. With no arguments, `screen()` returns the number of the current screen.
`close.screen()` returns a vector of valid screen numbers.
`screen`, `erase.screen`, and `close.screen` all return `FALSE` if **R** is not in split-screen mode.
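The return values can be inspected interactively; a sketch (requires an open graphics device):

```
split.screen(c(2, 2))            # returns 1:4, the numbers of the new screens
screen()                         # number of the current screen
split.screen()                   # all valid screen numbers
close.screen(all.screens = TRUE) # exit split-screen mode
```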
### Warnings
The recommended way to use these functions is to completely draw a plot and all additions (i.e., points and lines) to the base plot, prior to selecting and plotting on another screen. The behavior associated with returning to a screen to add to an existing plot is unpredictable and may result in problems that are not readily visible.
These functions are totally incompatible with the other mechanisms for arranging plots on a device: `<par>(mfrow)`, `par(mfcol)` and `<layout>()`.
The functions are also incompatible with some plotting functions, such as `<coplot>`, which make use of these other mechanisms.
`erase.screen` will appear not to work if the background colour is transparent (as it is by default on most devices).
### References
Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<par>`, `<layout>`, `[Devices](../../grdevices/html/devices)`, `dev.*`
### Examples
```
if (interactive()) {
par(bg = "white") # default is likely to be transparent
split.screen(c(2, 1)) # split display into two screens
split.screen(c(1, 3), screen = 2) # now split the bottom half into 3
screen(1) # prepare screen 1 for output
plot(10:1)
screen(4) # prepare screen 4 for output
plot(10:1)
close.screen(all = TRUE) # exit split-screen mode
split.screen(c(2, 1)) # split display into two screens
split.screen(c(1, 2), 2) # split bottom half in two
plot(1:10) # screen 3 is active, draw plot
erase.screen() # forgot label, erase and redraw
plot(1:10, ylab = "ylab 3")
screen(1) # prepare screen 1 for output
plot(1:10)
screen(4) # prepare screen 4 for output
plot(1:10, ylab = "ylab 4")
screen(1, FALSE) # return to screen 1, but do not clear
plot(10:1, axes = FALSE, lty = 2, ylab = "") # overlay second plot
axis(4) # add tic marks to right-hand axis
title("Plot 1")
close.screen(all = TRUE) # exit split-screen mode
}
```
r None
`plot.default` The Default Scatterplot Function
------------------------------------------------
### Description
Draw a scatter plot with decorations such as axes and titles in the active graphics window.
### Usage
```
## Default S3 method:
plot(x, y = NULL, type = "p", xlim = NULL, ylim = NULL,
log = "", main = NULL, sub = NULL, xlab = NULL, ylab = NULL,
ann = par("ann"), axes = TRUE, frame.plot = axes,
panel.first = NULL, panel.last = NULL, asp = NA,
xgap.axis = NA, ygap.axis = NA,
...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | the `x` and `y` arguments provide the x and y coordinates for the plot. Any reasonable way of defining the coordinates is acceptable. See the function `[xy.coords](../../grdevices/html/xy.coords)` for details. If supplied separately, they must be of the same length. |
| `type` | 1-character string giving the type of plot desired. The following values are possible, for details, see `[plot](plot.default)`: `"p"` for points, `"l"` for lines, `"b"` for both points and lines, `"c"` for empty points joined by lines, `"o"` for overplotted points and lines, `"s"` and `"S"` for stair steps and `"h"` for histogram-like vertical lines. Finally, `"n"` does not produce any points or lines. |
| `xlim` | the x limits (x1, x2) of the plot. Note that `x1 > x2` is allowed and leads to a ‘reversed axis’. The default value, `NULL`, indicates that the range of the [finite](../../base/html/is.finite) values to be plotted should be used. |
| `ylim` | the y limits of the plot. |
| `log` | a character string which contains `"x"` if the x axis is to be logarithmic, `"y"` if the y axis is to be logarithmic and `"xy"` or `"yx"` if both axes are to be logarithmic. |
| `main` | a main title for the plot, see also `<title>`. |
| `sub` | a sub title for the plot. |
| `xlab` | a label for the x axis, defaults to a description of `x`. |
| `ylab` | a label for the y axis, defaults to a description of `y`. |
| `ann` | a logical value indicating whether the default annotation (title and x and y axis labels) should appear on the plot. |
| `axes` | a logical value indicating whether both axes should be drawn on the plot. Use [graphical parameter](par) `"xaxt"` or `"yaxt"` to suppress just one of the axes. |
| `frame.plot` | a logical indicating whether a box should be drawn around the plot. |
| `panel.first` | an ‘expression’ to be evaluated after the plot axes are set up but before any plotting takes place. This can be useful for drawing background grids or scatterplot smooths. Note that this works by lazy evaluation: passing this argument from other `plot` methods may well not work since it may be evaluated too early. |
| `panel.last` | an expression to be evaluated after plotting has taken place but before the axes, title and box are added. See the comments about `panel.first`. |
| `asp` | the *y/x* aspect ratio, see `<plot.window>`. |
| `xgap.axis, ygap.axis` | the *x/y* axis gap factors, passed as `gap.axis` to the two `<axis>()` calls (when `axes` is true, as per default). |
| `...` | other [graphical parameters](par) (see `<par>` and section ‘Details’ below). |
### Details
Commonly used [graphical parameters](par) are:
`col`
The colors for lines and points. Multiple colors can be specified so that each point can be given its own color. If there are fewer colors than points they are recycled in the standard fashion. Lines will all be plotted in the first colour specified.
`bg`
a vector of background colors for open plot symbols, see `<points>`. Note: this is **not** the same setting as `<par>("bg")`.
`pch`
a vector of plotting characters or symbols: see `<points>`.
`cex`
a numerical vector giving the amount by which plotting characters and symbols should be scaled relative to the default. This works as a multiple of `<par>("cex")`. `NULL` and `NA` are equivalent to `1.0`. Note that this does not affect annotation: see below.
`lty`
a vector of line types, see `<par>`.
`cex.main`, `col.lab`, `font.sub`, etc.
settings for main- and sub-title and axis annotation, see `<title>` and `<par>`.
`lwd`
a vector of line widths, see `<par>`.
### Note
The presence of `panel.first` and `panel.last` is a historical anomaly: default plots do not have ‘panels’, unlike e.g. `<pairs>` plots. For more control, use lower-level plotting functions: `plot.default` calls in turn some of `[plot.new](frame)`, `<plot.window>`, `<plot.xy>`, `<axis>`, `<box>` and `<title>`, and plots can be built up by calling these individually, or by calling `plot(type = "n")` and adding further elements.
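For example, a scatterplot equivalent to a default `plot(x, y)` can be assembled from the lower-level functions named above:

```
x <- 1:10; y <- x^2
plot.new()
plot.window(xlim = range(x), ylim = range(y))
points(x, y, pch = 16)
axis(1); axis(2)
box()
title(main = "Built from lower-level calls", xlab = "x", ylab = "y")
```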
The `plot` generic was moved from the graphics package to the base package in **R** 4.0.0. It is currently re-exported from the graphics namespace to allow packages importing it from there to continue working, but this may change in future versions of **R**.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Cleveland, W. S. (1985) *The Elements of Graphing Data.* Monterey, CA: Wadsworth.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`[plot](plot.default)`, `<plot.window>`, `[xy.coords](../../grdevices/html/xy.coords)`. For thousands of points, consider using `[smoothScatter](smoothscatter)` instead.
### Examples
```
Speed <- cars$speed
Distance <- cars$dist
plot(Speed, Distance, panel.first = grid(8, 8),
pch = 0, cex = 1.2, col = "blue")
plot(Speed, Distance,
panel.first = lines(stats::lowess(Speed, Distance), lty = "dashed"),
pch = 0, cex = 1.2, col = "blue")
## Show the different plot types
x <- 0:12
y <- sin(pi/5 * x)
op <- par(mfrow = c(3,3), mar = .1+ c(2,2,3,1))
for (tp in c("p","l","b", "c","o","h", "s","S","n")) {
plot(y ~ x, type = tp, main = paste0("plot(*, type = \"", tp, "\")"))
if(tp == "S") {
lines(x, y, type = "s", col = "red", lty = 2)
mtext("lines(*, type = \"s\", ...)", col = "red", cex = 0.8)
}
}
par(op)
##--- Log-Log Plot with custom axes
lx <- seq(1, 5, length.out = 41)
yl <- expression(e^{-frac(1,2) * {log[10](x)}^2})
y <- exp(-.5*lx^2)
op <- par(mfrow = c(2,1), mar = par("mar")-c(1,0,2,0), mgp = c(2, .7, 0))
plot(10^lx, y, log = "xy", type = "l", col = "purple",
main = "Log-Log plot", ylab = yl, xlab = "x")
plot(10^lx, y, log = "xy", type = "o", pch = ".", col = "forestgreen",
main = "Log-Log plot with custom axes", ylab = yl, xlab = "x",
axes = FALSE, frame.plot = TRUE)
my.at <- 10^(1:5)
axis(1, at = my.at, labels = formatC(my.at, format = "fg"))
e.y <- -5:-1 ; at.y <- 10^e.y
axis(2, at = at.y, col.axis = "red", las = 1,
labels = as.expression(lapply(e.y, function(E) bquote(10^.(E)))))
par(op)
```
r None
`coplot` Conditioning Plots
----------------------------
### Description
This function produces two variants of the **co**nditioning plots discussed in the reference below.
### Usage
```
coplot(formula, data, given.values, panel = points, rows, columns,
show.given = TRUE, col = par("fg"), pch = par("pch"),
bar.bg = c(num = gray(0.8), fac = gray(0.95)),
xlab = c(x.name, paste("Given :", a.name)),
ylab = c(y.name, paste("Given :", b.name)),
subscripts = FALSE,
axlabels = function(f) abbreviate(levels(f)),
number = 6, overlap = 0.5, xlim, ylim, ...)
co.intervals(x, number = 6, overlap = 0.5)
```
### Arguments
| | |
| --- | --- |
| `formula` | a formula describing the form of conditioning plot. A formula of the form `y ~ x | a` indicates that plots of `y` versus `x` should be produced conditional on the variable `a`. A formula of the form `y ~ x| a * b` indicates that plots of `y` versus `x` should be produced conditional on the two variables `a` and `b`. All three or four variables may be either numeric or factors. When `x` or `y` are factors, the result is almost as if `as.numeric()` was applied, whereas for factor `a` or `b`, the conditioning (and its graphics if `show.given` is true) are adapted. |
| `data` | a data frame containing values for any variables in the formula. By default the environment where `coplot` was called from is used. |
| `given.values` | a value or list of two values which determine how the conditioning on `a` and `b` is to take place. When there is no `b` (i.e., conditioning only on `a`), usually this is a matrix with two columns, each row of which gives an interval to be conditioned on, but it can also be a single vector of numbers or a set of factor levels (if the variable being conditioned on is a factor). In this case (no `b`), the result of `co.intervals` can be used directly as the `given.values` argument. |
| `panel` | a `[function](../../base/html/function)(x, y, col, pch, ...)` which gives the action to be carried out in each panel of the display. The default is `points`. |
| `rows` | the panels of the plot are laid out in a `rows` by `columns` array. `rows` gives the number of rows in the array. |
| `columns` | the number of columns in the panel layout array. |
| `show.given` | logical (possibly of length 2 for 2 conditioning variables): should conditioning plots be shown for the corresponding conditioning variables (default `TRUE`). |
| `col` | a vector of colors to be used to plot the points. If too short, the values are recycled. |
| `pch` | a vector of plotting symbols or characters. If too short, the values are recycled. |
| `bar.bg` | a named vector with components `"num"` and `"fac"` giving the background colors for the (shingle) bars, for **num**eric and **fac**tor conditioning variables respectively. |
| `xlab` | character; labels to use for the x axis and the first conditioning variable. If only one label is given, it is used for the x axis and the default label is used for the conditioning variable. |
| `ylab` | character; labels to use for the y axis and any second conditioning variable. |
| `subscripts` | logical: if true the panel function is given an additional (third) argument `subscripts` giving the subscripts of the data passed to that panel. |
| `axlabels` | function for creating axis (tick) labels when x or y are factors. |
| `number` | integer; the number of conditioning intervals, for `a` and `b`, possibly of length 2. It is only used if the corresponding conditioning variable is not a `[factor](../../base/html/factor)`. |
| `overlap` | numeric < 1; the fraction of overlap of the conditioning variables, possibly of length 2 for x and y direction. When overlap < 0, there will be *gaps* between the data slices. |
| `xlim` | the range for the x axis. |
| `ylim` | the range for the y axis. |
| `...` | additional arguments to the panel function. |
| `x` | a numeric vector. |
### Details
In the case of a single conditioning variable `a`, when both `rows` and `columns` are unspecified, a ‘close to square’ layout is chosen with `columns >= rows`.
In the case of multiple `rows`, the *order* of the panel plots is from the bottom and from the left (corresponding to increasing `a`, typically).
A panel function should not attempt to start a new plot, but just plot within a given coordinate system: thus `plot` and `boxplot` are not panel functions.
The rendering of arguments `xlab` and `ylab` is not controlled by `<par>` arguments `cex.lab` and `font.lab` even though they are plotted by `<mtext>` rather than `<title>`.
### Value
`co.intervals(., number, .)` returns a (`number` *x* 2) `[matrix](../../base/html/matrix)`, say `ci`, where `ci[k,]` is the `[range](../../base/html/range)` of `x` values for the `k`-th interval.
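For example, using the `quakes` data also used in the Examples below:

```
ci <- co.intervals(quakes$depth, number = 4, overlap = 0.1)
dim(ci)   # 4 x 2: each row is one conditioning interval
ci[1, ]   # range of 'depth' values for the first interval
```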
### References
Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
Cleveland, W. S. (1993) *Visualizing Data.* New Jersey: Summit Press.
### See Also
`<pairs>`, `<panel.smooth>`, `<points>`.
### Examples
```
## Tonga Trench Earthquakes
coplot(lat ~ long | depth, data = quakes)
given.depth <- co.intervals(quakes$depth, number = 4, overlap = .1)
coplot(lat ~ long | depth, data = quakes, given.values = given.depth, rows = 1)
## Conditioning on 2 variables:
ll.dm <- lat ~ long | depth * mag
coplot(ll.dm, data = quakes)
coplot(ll.dm, data = quakes, number = c(4, 7), show.given = c(TRUE, FALSE))
coplot(ll.dm, data = quakes, number = c(3, 7),
overlap = c(-.5, .1)) # negative overlap DROPS values
## given two factors
Index <- seq_len(nrow(warpbreaks)) # to get nicer default labels
coplot(breaks ~ Index | wool * tension, data = warpbreaks,
show.given = 0:1)
coplot(breaks ~ Index | wool * tension, data = warpbreaks,
col = "red", bg = "pink", pch = 21,
bar.bg = c(fac = "light blue"))
## Example with empty panels:
with(data.frame(state.x77), {
coplot(Life.Exp ~ Income | Illiteracy * state.region, number = 3,
panel = function(x, y, ...) panel.smooth(x, y, span = .8, ...))
## y ~ factor -- not really sensible, but 'show off':
coplot(Life.Exp ~ state.region | Income * state.division,
panel = panel.smooth)
})
```
r None
`mtext` Write Text into the Margins of a Plot
----------------------------------------------
### Description
Text is written in one of the four margins of the current figure region or one of the outer margins of the device region.
### Usage
```
mtext(text, side = 3, line = 0, outer = FALSE, at = NA,
adj = NA, padj = NA, cex = NA, col = NA, font = NA, ...)
```
### Arguments
| | |
| --- | --- |
| `text` | a character or [expression](../../base/html/expression) vector specifying the *text* to be written. Other objects are coerced by `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`. |
| `side` | on which side of the plot (1=bottom, 2=left, 3=top, 4=right). |
| `line` | on which MARgin line, starting at 0 counting outwards. |
| `outer` | use outer margins if available. |
| `at` | give location of each string in user coordinates. If the component of `at` corresponding to a particular text item is not a finite value (the default), the location will be determined by `adj`. |
| `adj` | adjustment for each string in reading direction. For strings parallel to the axes, `adj = 0` means left or bottom alignment, and `adj = 1` means right or top alignment. If `adj` is not a finite value (the default), the value of `par("las")` determines the adjustment. For strings plotted parallel to the axis the default is to centre the string. |
| `padj` | adjustment for each string perpendicular to the reading direction (which is controlled by `adj`). For strings parallel to the axes, `padj = 0` means right or top alignment, and `padj = 1` means left or bottom alignment. If `padj` is not a finite value (the default), the value of `par("las")` determines the adjustment. For strings plotted perpendicular to the axis the default is to centre the string. |
| `cex` | character expansion factor. `NULL` and `NA` are equivalent to `1.0`. This is an absolute measure, not scaled by `par("cex")` or by setting `par("mfrow")` or `par("mfcol")`. Can be a vector. |
| `col` | color to use. Can be a vector. `NA` values (the default) mean use `par("col")`. |
| `font` | font for text. Can be a vector. `NA` values (the default) mean use `par("font")`. |
| `...` | Further graphical parameters (see `<par>`), including `family`, `las` and `xpd`. (The latter defaults to the figure region unless `outer = TRUE`, otherwise the device region. It can only be increased.) |
### Details
The user coordinates in the outer margins always range from zero to one, and are not affected by the user coordinates in the figure region(s) — **R** differs here from other implementations of S.
All of the named arguments can be vectors, and recycling will take place to plot as many strings as the longest of the vector arguments.
Note that a vector `adj` has a different meaning from `<text>`. `adj = 0.5` will centre the string, but for `outer = TRUE` on the device region rather than the plot region.
Parameter `las` will determine the orientation of the string(s). For strings plotted perpendicular to the axis the default justification is to place the end of the string nearest the axis on the specified line. (Note that this differs from S, which uses `srt` if `at` is supplied and `las` if it is not. Parameter `srt` is ignored in **R**.)
Note that if the text is to be plotted perpendicular to the axis, `adj` determines the justification of the string *and* the position along the axis unless `at` is specified.
Graphics parameter `"ylbias"` (see `<par>`) determines how the text baseline is placed relative to the nominal line.
### Side Effects
The given text is written onto the current plot.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<title>`, `<text>`, `[plot](plot.default)`, `<par>`; `[plotmath](../../grdevices/html/plotmath)` for details on mathematical annotation.
### Examples
```
plot(1:10, (-4:5)^2, main = "Parabola Points", xlab = "xlab")
mtext("10 of them")
for(s in 1:4)
mtext(paste("mtext(..., line= -1, {side, col, font} = ", s,
", cex = ", (1+s)/2, ")"), line = -1,
side = s, col = s, font = s, cex = (1+s)/2)
mtext("mtext(..., line= -2)", line = -2)
mtext("mtext(..., line= -2, adj = 0)", line = -2, adj = 0)
##--- log axis :
plot(1:10, exp(1:10), log = "y", main = "log =\"y\"", xlab = "xlab")
for(s in 1:4) mtext(paste("mtext(...,side=", s ,")"), side = s)
```
`plothistogram` Plot Histograms
--------------------------------
### Description
These are methods for objects of class `"histogram"`, typically produced by `<hist>`.
### Usage
```
## S3 method for class 'histogram'
plot(x, freq = equidist, density = NULL, angle = 45,
col = NULL, border = par("fg"), lty = NULL,
main = paste("Histogram of",
paste(x$xname, collapse = "\n")),
sub = NULL, xlab = x$xname, ylab,
xlim = range(x$breaks), ylim = NULL,
axes = TRUE, labels = FALSE, add = FALSE,
ann = TRUE, ...)
## S3 method for class 'histogram'
lines(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a `histogram` object, or a list with components `density`, `mid`, etc, see `<hist>` for information about the components of `x`. |
| `freq` | logical; if `TRUE`, the histogram graphic is to present a representation of frequencies, i.e., `x$counts`; if `FALSE`, *relative* frequencies (probabilities), i.e., `x$density`, are plotted. The default is true for equidistant `breaks` and false otherwise. |
| `col` | a colour to be used to fill the bars. The default of `NULL` yields unfilled bars. |
| `border` | the color of the border around the bars. |
| `angle, density` | select shading of bars by lines: see `<rect>`. |
| `lty` | the line type used for the bars, see also `<lines>`. |
| `main, sub, xlab, ylab` | these arguments to `title` have useful defaults here. |
| `xlim, ylim` | the range of x and y values with sensible defaults. |
| `axes` | logical, indicating if axes should be drawn. |
| `labels` | logical or character. Additionally draw labels on top of bars, if not `FALSE`; if `TRUE`, draw the counts or rounded densities; if `labels` is a `character` vector, it is drawn itself. |
| `add` | logical. If `TRUE`, only the bars are added to the current plot. This is what `lines.histogram(*)` does. |
| `ann` | logical. Should annotations (titles and axis titles) be plotted? |
| `...` | further [graphical parameters](par) to `title` and `axis`. |
### Details
`lines.histogram(*)` is the same as `plot.histogram(*, add = TRUE)`.
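The `freq` default can be checked from the object itself, since `hist()` records whether the breaks are equidistant; a small sketch using the built-in `women` data:

```r
## Non-equidistant breaks: $equidist is FALSE, so plot(h) would show
## densities (freq = FALSE) by default.
h <- hist(women$weight, breaks = c(110, 130, 140, 165), plot = FALSE)
h$equidist   # FALSE
## Equally spaced breaks give the frequency default instead.
h2 <- hist(women$weight, breaks = 4, plot = FALSE)
h2$equidist  # TRUE
```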
### See Also
`<hist>`, `<stem>`, `[density](../../stats/html/density)`.
### Examples
```
(wwt <- hist(women$weight, nclass = 7, plot = FALSE))
plot(wwt, labels = TRUE) # default main & xlab using wwt$xname
plot(wwt, border = "dark blue", col = "light blue",
main = "Histogram of 15 women's weights", xlab = "weight [pounds]")
## Fake "lines" example, using non-default labels:
w2 <- wwt; w2$counts <- w2$counts - 1
lines(w2, col = "Midnight Blue", labels = ifelse(w2$counts, "> 1", "1"))
```
`contour` Display Contours
---------------------------
### Description
Create a contour plot, or add contour lines to an existing plot.
### Usage
```
contour(x, ...)
## Default S3 method:
contour(x = seq(0, 1, length.out = nrow(z)),
y = seq(0, 1, length.out = ncol(z)),
z,
nlevels = 10, levels = pretty(zlim, nlevels),
labels = NULL,
xlim = range(x, finite = TRUE),
ylim = range(y, finite = TRUE),
zlim = range(z, finite = TRUE),
labcex = 0.6, drawlabels = TRUE, method = "flattest",
vfont, axes = TRUE, frame.plot = axes,
col = par("fg"), lty = par("lty"), lwd = par("lwd"),
add = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | locations of grid lines at which the values in `z` are measured. These must be in ascending order. By default, equally spaced values from 0 to 1 are used. If `x` is a `list`, its components `x$x` and `x$y` are used for `x` and `y`, respectively. If the list has component `z` this is used for `z`. |
| `z` | a matrix containing the values to be plotted (`NA`s are allowed). Note that `x` can be used instead of `z` for convenience. |
| `nlevels` | number of contour levels desired **iff** `levels` is not supplied. |
| `levels` | numeric vector of levels at which to draw contour lines. |
| `labels` | a vector giving the labels for the contour lines. If `NULL` then the levels are used as labels, otherwise this is coerced by `[as.character](../../base/html/character)`. |
| `labcex` | `cex` for contour labelling. This is an absolute size, not a multiple of `par("cex")`. |
| `drawlabels` | logical. Contours are labelled if `TRUE`. |
| `method` | character string specifying where the labels will be located. Possible values are `"simple"`, `"edge"` and `"flattest"` (the default). See the ‘Details’ section. |
| `vfont` | if `NULL`, the current font family and face are used for the contour labels. If a character vector of length 2 then Hershey vector fonts are used for the contour labels. The first element of the vector selects a typeface and the second element selects a fontindex (see `<text>` for more information). The default is `NULL` on graphics devices with high-quality rotation of text and `c("sans serif", "plain")` otherwise. |
| `xlim, ylim, zlim` | x-, y- and z-limits for the plot. |
| `axes, frame.plot` | logical indicating whether axes or a box should be drawn, see `<plot.default>`. |
| `col` | colour(s) for the lines drawn. |
| `lty` | line type(s) for the lines drawn. |
| `lwd` | line width(s) for the lines drawn. |
| `add` | logical. If `TRUE`, add to a current plot. |
| `...` | additional arguments to `<plot.window>`, `<title>`, `[Axis](zaxis)` and `<box>`, typically [graphical parameters](par) such as `cex.axis`. |
### Details
`contour` is a generic function with only a default method in base **R**.
The methods for positioning the labels on contours are `"simple"` (draw at the edge of the plot, overlaying the contour line), `"edge"` (draw at the edge of the plot, embedded in the contour line, with no labels overlapping) and `"flattest"` (draw on the flattest section of the contour, embedded in the contour line, with no labels overlapping). The second and third may not draw a label on every contour line.
For information about vector fonts, see the help for `<text>` and `[Hershey](../../grdevices/html/hershey)`.
Notice that `contour` interprets the `z` matrix as a table of `f(x[i], y[j])` values, so that the x axis corresponds to row number and the y axis to column number, with column 1 at the bottom, i.e. a 90 degree counter-clockwise rotation of the conventional textual layout.
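The row/column convention can be verified numerically; a small sketch (the variable names are arbitrary):

```r
## z is read as f(x[i], y[j]): rows run along the x axis, columns
## along the y axis.
x <- 1:3; y <- 1:4
z <- outer(x, y, function(x, y) x + 10 * y)
z[2, 3]   # plotted at (x = 2, y = 3): 2 + 10*3 = 32
## contourLines() follows the same convention:
cl <- contourLines(x, y, z, levels = 22)
range(cl[[1]]$x)  # the level-22 contour stays within range(x)
```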
Vector (of length *> 1*) `col`, `lty`, and `lwd` are applied along `levels` and recycled, see the Examples.
Alternatively, use `[contourplot](../../lattice/html/levelplot)` from the [lattice](https://CRAN.R-project.org/package=lattice) package where the `[formula](../../stats/html/formula)` notation allows to use vectors `x`, `y`, and `z` of the same length.
There is limited control over the axes and frame as arguments `col`, `lwd` and `lty` refer to the contour lines (rather than being general [graphical parameters](par)). For more control, add contours to a plot, or add axes and frame to a contour plot.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[options](../../base/html/options)("max.contour.segments")` for the maximal complexity of a single contour line.
`[contourLines](../../grdevices/html/contourlines)`, `<filled.contour>` for color-filled contours, `[contourplot](../../lattice/html/levelplot)` (and `[levelplot](../../lattice/html/levelplot)`) from package [lattice](https://CRAN.R-project.org/package=lattice). Further, `<image>` and the graphics demo which can be invoked as `demo(graphics)`.
### Examples
```
require(grDevices) # for colours
x <- -6:16
op <- par(mfrow = c(2, 2))
contour(outer(x, x), method = "edge", vfont = c("sans serif", "plain"))
z <- outer(x, sqrt(abs(x)), FUN = "/")
image(x, x, z)
contour(x, x, z, col = "pink", add = TRUE, method = "edge",
vfont = c("sans serif", "plain"))
contour(x, x, z, ylim = c(1, 6), method = "simple", labcex = 1,
xlab = quote(x[1]), ylab = quote(x[2]))
contour(x, x, z, ylim = c(-6, 6), nlevels = 20, lty = 2, method = "simple",
main = "20 levels; \"simple\" labelling method")
par(op)
## Passing multiple colours / lty / lwd :
op <- par(mfrow = c(1, 2))
z <- outer(-9:25, -9:25)
## Using default levels <- pretty(range(z, finite = TRUE), 10),
## the first and last of which typically are *not* drawn:
(levs <- pretty(z, n=10)) # -300 -200 ... 600 700
contour(z, col = 1:4)
## Set levels explicitly; show that 'lwd' and 'lty' are recycled as well:
contour(z, levels=levs[-c(1,length(levs))], col = 1:5, lwd = 1:3 *1.5, lty = 1:3)
par(op)
## Persian Rug Art:
x <- y <- seq(-4*pi, 4*pi, length.out = 27)
r <- sqrt(outer(x^2, y^2, "+"))
opar <- par(mfrow = c(2, 2), mar = rep(0, 4))
for(f in pi^(0:3))
contour(cos(r^2)*exp(-r/f),
drawlabels = FALSE, axes = FALSE, frame.plot = TRUE)
rx <- range(x <- 10*1:nrow(volcano))
ry <- range(y <- 10*1:ncol(volcano))
ry <- ry + c(-1, 1) * (diff(rx) - diff(ry))/2
tcol <- terrain.colors(12)
par(opar); opar <- par(pty = "s", bg = "lightcyan")
plot(x = 0, y = 0, type = "n", xlim = rx, ylim = ry, xlab = "", ylab = "")
u <- par("usr")
rect(u[1], u[3], u[2], u[4], col = tcol[8], border = "red")
contour(x, y, volcano, col = tcol[2], lty = "solid", add = TRUE,
vfont = c("sans serif", "plain"))
title("A Topographic Map of Maunga Whau", font = 4)
abline(h = 200*0:4, v = 200*0:4, col = "lightgray", lty = 2, lwd = 0.1)
## contourLines produces the same contour lines as contour
plot(x = 0, y = 0, type = "n", xlim = rx, ylim = ry, xlab = "", ylab = "")
u <- par("usr")
rect(u[1], u[3], u[2], u[4], col = tcol[8], border = "red")
contour(x, y, volcano, col = tcol[1], lty = "solid", add = TRUE,
vfont = c("sans serif", "plain"))
line.list <- contourLines(x, y, volcano)
invisible(lapply(line.list, lines, lwd=3, col=adjustcolor(2, .3)))
par(opar)
```
`cdplot` Conditional Density Plots
-----------------------------------
### Description
Computes and plots conditional densities describing how the conditional distribution of a categorical variable `y` changes over a numerical variable `x`.
### Usage
```
cdplot(x, ...)
## Default S3 method:
cdplot(x, y,
plot = TRUE, tol.ylab = 0.05, ylevels = NULL,
bw = "nrd0", n = 512, from = NULL, to = NULL,
col = NULL, border = 1, main = "", xlab = NULL, ylab = NULL,
yaxlabels = NULL, xlim = NULL, ylim = c(0, 1), ...)
## S3 method for class 'formula'
cdplot(formula, data = list(),
plot = TRUE, tol.ylab = 0.05, ylevels = NULL,
bw = "nrd0", n = 512, from = NULL, to = NULL,
col = NULL, border = 1, main = "", xlab = NULL, ylab = NULL,
yaxlabels = NULL, xlim = NULL, ylim = c(0, 1), ...,
subset = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | an object, the default method expects a single numerical variable (or an object coercible to this). |
| `y` | a `"factor"` interpreted to be the dependent variable |
| `formula` | a `"formula"` of type `y ~ x` with a single dependent `"factor"` and a single numerical explanatory variable. |
| `data` | an optional data frame. |
| `plot` | logical. Should the computed conditional densities be plotted? |
| `tol.ylab` | convenience tolerance parameter for y-axis annotation. If the distance between two labels drops under this threshold, they are plotted equidistantly. |
| `ylevels` | a character or numeric vector specifying in which order the levels of the dependent variable should be plotted. |
| `bw, n, from, to, ...` | arguments passed to `[density](../../stats/html/density)` |
| `col` | a vector of fill colors of the same length as `levels(y)`. The default is to call `[gray.colors](../../grdevices/html/gray.colors)`. |
| `border` | border color of shaded polygons. |
| `main, xlab, ylab` | character strings for annotation |
| `yaxlabels` | character vector for annotation of y axis, defaults to `levels(y)`. |
| `xlim, ylim` | the range of x and y values with sensible defaults. |
| `subset` | an optional vector specifying a subset of observations to be used for plotting. |
### Details
`cdplot` computes the conditional densities of `x` given the levels of `y` weighted by the marginal distribution of `y`. The densities are derived cumulatively over the levels of `y`.
This visualization technique is similar to spinograms (see `<spineplot>`) and plots *P(y | x)* against *x*. The conditional probabilities are not derived by discretization (as in the spinogram), but using a smoothing approach via `[density](../../stats/html/density)`.
Note that the estimates of the conditional densities are more reliable for high-density regions of *x*. Conversely, they are less reliable in regions with only few *x* observations.
### Value
The conditional density functions (cumulative over the levels of `y`) are returned invisibly.
### Author(s)
Achim Zeileis [[email protected]](mailto:[email protected])
### References
Hofmann, H., Theus, M. (2005), *Interactive graphics for visualizing conditional distributions*, Unpublished Manuscript.
### See Also
`<spineplot>`, `[density](../../stats/html/density)`
### Examples
```
## NASA space shuttle o-ring failures
fail <- factor(c(2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1,
1, 2, 1, 1, 1, 1, 1),
levels = 1:2, labels = c("no", "yes"))
temperature <- c(53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70,
70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81)
## CD plot
cdplot(fail ~ temperature)
cdplot(fail ~ temperature, bw = 2)
cdplot(fail ~ temperature, bw = "SJ")
## compare with spinogram
(spineplot(fail ~ temperature, breaks = 3))
## highlighting for failures
cdplot(fail ~ temperature, ylevels = 2:1)
## scatter plot with conditional density
cdens <- cdplot(fail ~ temperature, plot = FALSE)
plot(I(as.numeric(fail) - 1) ~ jitter(temperature, factor = 2),
xlab = "Temperature", ylab = "Conditional failure probability")
lines(53:81, 1 - cdens[[1]](53:81), col = 2)
```
`identify` Identify Points in a Scatter Plot
---------------------------------------------
### Description
`identify` reads the position of the graphics pointer when the (first) mouse button is pressed. It then searches the coordinates given in `x` and `y` for the point closest to the pointer. If this point is close enough to the pointer, its index will be returned as part of the value of the call.
### Usage
```
identify(x, ...)
## Default S3 method:
identify(x, y = NULL, labels = seq_along(x), pos = FALSE,
n = length(x), plot = TRUE, atpen = FALSE, offset = 0.5,
tolerance = 0.25, order = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | coordinates of points in a scatter plot. Alternatively, any object which defines coordinates (a plotting structure, time series etc: see `[xy.coords](../../grdevices/html/xy.coords)`) can be given as `x`, and `y` left missing. |
| `labels` | an optional character vector giving labels for the points. Will be coerced using `[as.character](../../base/html/character)`, and recycled if necessary to the length of `x`. Excess labels will be discarded, with a warning. |
| `pos` | if `pos` is `TRUE`, a component is added to the return value which indicates where text was plotted relative to each identified point: see Value. |
| `n` | the maximum number of points to be identified. |
| `plot` | logical: if `plot` is `TRUE`, the labels are printed near the points and if `FALSE` they are omitted. |
| `atpen` | logical: if `TRUE` and `plot = TRUE`, the lower-left corners of the labels are plotted at the points clicked rather than relative to the points. |
| `offset` | the distance (in character widths) which separates the label from identified points. Negative values are allowed. Not used if `atpen = TRUE`. |
| `tolerance` | the maximal distance (in inches) for the pointer to be ‘close enough’ to a point. |
| `order` | if `order` is `TRUE`, a component is added to the return value which indicates the order in which points were identified: see Value. |
| `...` | further arguments passed to `<par>` such as `cex`, `col` and `font`. |
### Details
`identify` is a generic function, and only the default method is described here.
`identify` is only supported on screen devices such as `X11`, `windows` and `quartz`. On other devices the call will do nothing.
Clicking near (as defined by `tolerance`) a point adds it to the list of identified points. Points can be identified only once, and if the point has already been identified or the click is not near any of the points a message is printed immediately on the **R** console.
If `plot` is `TRUE`, the point is labelled with the corresponding element of `labels`. If `atpen` is false (the default) the labels are placed below, to the left, above or to the right of the identified point, depending on where the pointer was relative to the point. If `atpen` is true, the labels are placed with the bottom left of the string's box at the pointer.
For the usual `[X11](../../grdevices/html/x11)` device the identification process is terminated by pressing any mouse button other than the first. For the `[quartz](../../grdevices/html/quartz)` device the process is terminated by pressing either the pop-up menu equivalent (usually second mouse button or `Ctrl`-click) or the `ESC` key.
On most devices which support `identify`, successful selection of a point is indicated by a bell sound unless `[options](../../base/html/options)(locatorBell = FALSE)` has been set.
If the window is resized or hidden and then exposed before the identification process has terminated, any labels drawn by `identify` will disappear. These will reappear once the identification process has terminated and the window is resized or hidden and exposed again. This is because the labels drawn by `identify` are not recorded in the device's display list until the identification process has terminated.
If you interrupt the `identify` call this leaves the graphics device in an undefined state, with points labelled but labels not recorded in the display list. Copying a device in that state will give unpredictable results.
### Value
If both `pos` and `order` are `FALSE`, an integer vector containing the indices of the identified points.
If either of `pos` or `order` is `TRUE`, a list containing a component `ind`, indicating which points were identified and (if `pos` is `TRUE`) a component `pos`, indicating where the labels were placed relative to the identified points (1=below, 2=left, 3=above, 4=right and 0=no offset, used if `atpen = TRUE`) and (if `order` is `TRUE`) a component `order`, indicating the order in which points were identified.
### Technicalities
The algorithm used for placing labels is the same as used by `text` if `pos` is specified there, the difference being that the position of the pointer relative the identified point determines `pos` in `identify`.
For labels placed to the left of a point, the right-hand edge of the string's box is placed `offset` units to the left of the point, and analogously for points to the right. The baseline of the text is placed below the point so as to approximately centre string vertically. For labels placed above or below a point, the string is centered horizontally on the point. For labels placed above, the baseline of the text is placed `offset` units above the point, and for those placed below, the baseline is placed so that the top of the string's box is approximately `offset` units below the point. If you want more precise placement (e.g., centering) use `plot = FALSE` and plot via `<text>` or `<points>`: see the examples.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<locator>`, `<text>`.
`[dev.capabilities](../../grdevices/html/dev.capabilities)` to see if it is supported.
### Examples
```
## A function to use identify to select points, and overplot the
## points with another symbol as they are selected
identifyPch <- function(x, y = NULL, n = length(x), plot = FALSE, pch = 19, ...)
{
xy <- xy.coords(x, y); x <- xy$x; y <- xy$y
sel <- rep(FALSE, length(x))
while(sum(sel) < n) {
ans <- identify(x[!sel], y[!sel], labels = which(!sel), n = 1, plot = plot, ...)
if(!length(ans)) break
ans <- which(!sel)[ans]
points(x[ans], y[ans], pch = pch)
sel[ans] <- TRUE
}
## return indices of selected points
which(sel)
}
if(dev.interactive()) { ## use it
x <- rnorm(50); y <- rnorm(50)
plot(x,y); identifyPch(x,y) # how fast to get all?
}
```
`polypath` Path Drawing
------------------------
### Description
`polypath` draws a path whose vertices are given in `x` and `y`.
### Usage
```
polypath(x, y = NULL,
border = NULL, col = NA, lty = par("lty"),
rule = "winding", ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | vectors containing the coordinates of the vertices of the path. |
| `col` | the color for filling the path. The default, `NA`, is to leave paths unfilled. |
| `border` | the color to draw the border. The default, `NULL`, means to use `<par>("fg")`. Use `border = NA` to omit borders. For compatibility with S, `border` can also be logical, in which case `FALSE` is equivalent to `NA` (borders omitted) and `TRUE` is equivalent to `NULL` (use the foreground colour). |
| `lty` | the line type to be used, as in `<par>`. |
| `rule` | character value specifying the path fill mode: either `"winding"` or `"evenodd"`. |
| `...` | [graphical parameters](par) such as `xpd`, `lend`, `ljoin` and `lmitre` can be given as arguments. |
### Details
The coordinates can be passed in a plotting structure (a list with `x` and `y` components), a two-column matrix, .... See `[xy.coords](../../grdevices/html/xy.coords)`.
It is assumed that the path is to be closed by joining the last point to the first point.
The coordinates can contain missing values. The behaviour is similar to that of `<polygon>`, except that instead of breaking a polygon into several polygons, `NA` values break the path into several sub-paths (including closing the last point to the first point in each sub-path). See the examples below.
The distinction between a path and a polygon is that the former can contain holes, as interpreted by the fill rule: with `rule = "winding"` a region is filled if the path border encircles it a non-zero number of times, and with `rule = "evenodd"` if it encircles it an odd number of times.
Hatched shading (as implemented for `polygon()`) is not (currently) supported.
Not all graphics devices support this function: for example `xfig` and `pictex` do not.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Murrell, P. (2005) *R Graphics*. Chapman & Hall/CRC Press.
### See Also
`<segments>` for even more flexibility, `<lines>`, `<rect>`, `<box>`, `<polygon>`.
`<par>` for how to specify colors.
### Examples
```
plotPath <- function(x, y, col = "grey", rule = "winding") {
plot.new()
plot.window(range(x, na.rm = TRUE), range(y, na.rm = TRUE))
polypath(x, y, col = col, rule = rule)
if (!is.na(col))
mtext(paste("Rule:", rule), side = 1, line = 0)
}
plotRules <- function(x, y, title) {
plotPath(x, y)
plotPath(x, y, rule = "evenodd")
mtext(title, side = 3, line = 0)
plotPath(x, y, col = NA)
}
op <- par(mfrow = c(5, 3), mar = c(2, 1, 1, 1))
plotRules(c(.1, .1, .9, .9, NA, .2, .2, .8, .8),
c(.1, .9, .9, .1, NA, .2, .8, .8, .2),
"Nested rectangles, both clockwise")
plotRules(c(.1, .1, .9, .9, NA, .2, .8, .8, .2),
c(.1, .9, .9, .1, NA, .2, .2, .8, .8),
"Nested rectangles, outer clockwise, inner anti-clockwise")
plotRules(c(.1, .1, .4, .4, NA, .6, .9, .9, .6),
c(.1, .4, .4, .1, NA, .6, .6, .9, .9),
"Disjoint rectangles")
plotRules(c(.1, .1, .6, .6, NA, .4, .4, .9, .9),
c(.1, .6, .6, .1, NA, .4, .9, .9, .4),
"Overlapping rectangles, both clockwise")
plotRules(c(.1, .1, .6, .6, NA, .4, .9, .9, .4),
c(.1, .6, .6, .1, NA, .4, .4, .9, .9),
"Overlapping rectangles, one clockwise, other anti-clockwise")
par(op)
```
`read.dta` Read Stata Binary Files
-----------------------------------
### Description
Reads a file in Stata version 5–12 binary format into a data frame.
Frozen: will not support Stata formats after 12.
### Usage
```
read.dta(file, convert.dates = TRUE, convert.factors = TRUE,
missing.type = FALSE,
convert.underscore = FALSE, warn.missing.labels = TRUE)
```
### Arguments
| | |
| --- | --- |
| `file` | a filename or URL as a character string. |
| `convert.dates` | Convert Stata dates to `Date` class, and date-times to `POSIXct` class? |
| `convert.factors` | Use Stata value labels to create factors? (Version 6.0 or later). |
| `missing.type` | For version 8 or later, store information about different types of missing data? |
| `convert.underscore` | Convert `"_"` in Stata variable names to `"."` in R names? |
| `warn.missing.labels` | Warn if a variable is specified with value labels and those value labels are not present in the file. |
### Details
If the filename appears to be a URL (of schemes http:, ftp: or https:) the URL is first downloaded to a temporary file and then read. (https: is only supported on some platforms.)
The variables in the Stata data set become the columns of the data frame. Missing values are correctly handled. The data label, variable labels, timestamp, and variable/dataset characteristics are stored as attributes of the data frame.
By default Stata dates (%d and %td formats) are converted to **R**'s `Date` class, and variables with Stata value labels are converted to factors. Ordinarily, `read.dta` will not convert a variable to a factor unless a label is present for every level. Use `convert.factors = NA` to override this. In any case the value label and format information is stored as attributes on the returned data frame. Stata's date formats are sketchily documented: if necessary use `convert.dates = FALSE` and examine the attributes to work out how to post-process the dates.
Stata 8 introduced a system of 27 different missing data values. If `missing.type` is `TRUE` a separate list is created with the same variable names as the loaded data. For string variables the list value is `NULL`. For other variables the value is `NA` where the observation is not missing and 0–26 when the observation is missing. This is attached as the `"missing"` attribute of the returned value.
The default file format for Stata 13, `format-117`, is substantially different from those for Stata 5–12.
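A round trip through `write.dta()` sketches the value-label conversion described above (the data are made up for illustration):

```r
library(foreign)
## A factor column is stored with Stata value labels ...
df <- data.frame(grp = factor(c("lo", "hi", "lo")))
tf <- tempfile(fileext = ".dta")
write.dta(df, tf)
## ... and read back as a factor, rebuilt from those labels.
str(read.dta(tf)$grp)
```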
### Value
A data frame with attributes. These will include `"datalabel"`, `"time.stamp"`, `"formats"`, `"types"`, `"val.labels"`, `"var.labels"` and `"version"` and may include `"label.table"` and `"expansion.table"`. Possible versions are `5, 6, 7`, `-7` (Stata 7SE, ‘format-111’), `8` (Stata 8 and 9, ‘format-113’), `10` (Stata 10 and 11, ‘format-114’) and `12` (Stata 12, ‘format-115’).
The value labels in attribute `"val.labels"` name a table for each variable, or are an empty string. The tables are elements of the named list attribute `"label.table"`: each is an integer vector with names.
### Author(s)
Thomas Lumley and R-core members: support for value labels by Brian Quistorff.
### References
Stata Users Manual (versions 5 & 6), Programming manual (version 7), or online help (version 8 and later) describe the format of the files. Or directly at <https://www.stata.com/help.cgi?dta_114> and <https://www.stata.com/help.cgi?dta_113>, but note that these have been changed since first published.
### See Also
Different approaches are available in package memisc (see its help for `Stata.file`), function `read_dta` in package haven and package readstata13.
`<write.dta>`, `[attributes](../../base/html/attributes)`, `[Date](../../base/html/dates)`, `[factor](../../base/html/factor)`
### Examples
```
write.dta(swiss, swissfile <- tempfile())
read.dta(swissfile)
```
`write.dbf` Write a DBF File
-----------------------------
### Description
The function tries to write a data frame to a DBF file.
### Usage
```
write.dbf(dataframe, file, factor2char = TRUE, max_nchar = 254)
```
### Arguments
| | |
| --- | --- |
| `dataframe` | a data frame object. |
| `file` | a file name to be written to. |
| `factor2char` | logical, default `TRUE`, convert factor columns to character: otherwise they are written as the internal integer codes. |
| `max_nchar` | The maximum number of characters allowed in a character field. Strings which exceed this will be truncated with a warning. See Details. |
### Details
Dots in column names are replaced by underlines in the DBF file, and names are truncated to 11 characters.
Only vector columns of classes `"logical"`, `"numeric"`, `"integer"`, `"character"`, `"factor"` and `"Date"` can be written. Other columns should be converted to one of these.
Maximum precision (number of digits including minus sign and decimal sign) for numeric is 19 - scale (digits after the decimal sign) which is calculated internally based on the number of digits before the decimal sign.
The original DBASE format limited character fields to 254 bytes. It is said that Clipper and FoxPro can read up to 32K, and it is possible to write a reader that could accept up to 65535 bytes. (The documentation suggests that only ASCII characters can be assumed to be supported.) Readers expecting the older standard (which includes Excel 2003, Access 2003 and OpenOffice 2.0) will truncate the field to the maximum width modulo 256, so increase `max_nchar` only if you are sure the intended reader supports wider character fields.
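The name mangling can be observed in a round trip (a sketch; no particular spelling of the stored names is assumed):

```r
library(foreign)
df <- data.frame(a.rather.long.name = 1:3, x = c(2.5, 3.5, 4.5))
tf <- tempfile(fileext = ".dbf")
write.dbf(df, tf)
## Dots are replaced by underlines and names truncated to 11 characters.
(nm <- names(read.dbf(tf)))
unlink(tf)
```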
### Value
Invisible `NULL`.
### Note
Other applications have varying abilities to read the data types used here. Microsoft Access reads `"numeric"`, `"integer"`, `"character"` and `"Date"` fields, including recognizing missing values, but not `"logical"` (read as `0,-1`). Microsoft Excel understood all possible types but did not interpret missing values in character fields correctly (showing them as character nuls).
### Author(s)
Nicholas J. Lewin-Koh, modified by Roger Bivand and Brian Ripley; shapelib by Frank Warmerdam.
### References
<http://shapelib.maptools.org/>
<https://www.clicketyclick.dk/databases/xbase/format/data_types.html>
### See Also
`<read.dbf>`
### Examples
```
str(warpbreaks)
try1 <- paste(tempfile(), ".dbf", sep = "")
write.dbf(warpbreaks, try1, factor2char = FALSE)
in1 <- read.dbf(try1)
str(in1)
try2 <- paste(tempfile(), ".dbf", sep = "")
write.dbf(warpbreaks, try2, factor2char = TRUE)
in2 <- read.dbf(try2)
str(in2)
unlink(c(try1, try2))
```
`write.arff` Write Data into ARFF Files
----------------------------------------
### Description
Writes data into Weka Attribute-Relation File Format (ARFF) files.
### Usage
```
write.arff(x, file, eol = "\n", relation = deparse(substitute(x)))
```
### Arguments
| | |
| --- | --- |
| `x` | the data to be written, preferably a matrix or data frame. If not, coercion to a data frame is attempted. |
| `file` | either a character string naming a file, or a connection. `""` indicates output to the standard output connection. |
| `eol` | the character(s) to print at the end of each line (row). |
| `relation` | The name of the relation to be written in the file. |
### Details
`relation` will be passed through `[make.names](../../base/html/make.names)` before writing to the file, in an attempt to make it acceptable to Weka, and column names which do not start with an alphabetic character will have `X` prepended.
However, the references say that ARFF files are ASCII files, and that encoding is not enforced.
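Since the relation name is passed through `make.names`, the cleaning can be previewed directly:

```r
## Preview how a relation name will be cleaned before writing:
make.names("iris data set")  # "iris.data.set"
make.names("2cars")          # "X2cars": leading digit gets "X" prepended
```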
### References
Attribute-Relation File Format <https://waikato.github.io/weka-wiki/formats_and_processing/arff/>.
### See Also
`<read.arff>`; functions `write.arff` and `read.arff` in package RWeka which provide some support for logicals via conversion to or from factors.
### Examples
```
write.arff(iris, file = "")
```
`read.epiinfo` Read Epi Info Data Files
----------------------------------------
### Description
Reads data files in the `.REC` format used by Epi Info versions 6 and earlier and by EpiData. Epi Info is a public domain database and statistics package produced by the US Centers for Disease Control and EpiData is a freely available data entry and validation system.
### Usage
```
read.epiinfo(file, read.deleted = FALSE, guess.broken.dates = FALSE,
thisyear = NULL, lower.case.names = FALSE)
```
### Arguments
| | |
| --- | --- |
| `file` | A filename, URL, or connection. |
| `read.deleted` | Deleted records are read if `TRUE`, omitted if `FALSE` or replaced with `NA` if `NA`. |
| `guess.broken.dates` | Attempt to convert dates with 0 or 2 digit year information (see ‘Details’). |
| `thisyear` | A 4-digit year to use for dates with no year. Defaults to the current year. |
| `lower.case.names` | Convert variable names to lowercase? |
### Details
Epi Info allows dates to be specified with no year or with 2 or 4 digits. Dates with four-digit years are always converted to `Date` class. With the `guess.broken.dates` option the function will attempt to convert two-digit years using the operating system's default method (see [Date](../../base/html/dates)) and will use the current year or the `thisyear` argument for dates with no year information.
If `read.deleted` is `TRUE` the `"deleted"` attribute of the data frame indicates the deleted records.
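A hedged sketch of using the `"deleted"` attribute (the filename is hypothetical; no `.REC` file ships with this example):

```r
library(foreign)

## Keep deleted records in the result rather than omitting them
dat <- read.epiinfo("survey.rec", read.deleted = TRUE,
                    guess.broken.dates = TRUE, thisyear = "1999")

## Row indices of records flagged as deleted in Epi Info
which(attr(dat, "deleted"))
```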
### Value
A data frame.
### Note
Some later versions of Epi Info use the Microsoft Access file format to store data. That may be readable with the RODBC package.
### References
<https://www.cdc.gov/epiinfo/>, <http://www.epidata.dk>
### See Also
[DateTimeClasses](../../base/html/datetimeclasses)
### Examples
```
## Not run: ## That file is not available
read.epiinfo("oswego.rec", guess.broken.dates = TRUE, thisyear = "1972")
## End(Not run)
```
r None
`read.octave` Read Octave Text Data Files
------------------------------------------
### Description
Read a file in Octave text data format into a list.
### Usage
```
read.octave(file)
```
### Arguments
| | |
| --- | --- |
| `file` | a character string with the name of the file to read. |
### Details
This function is used to read in files in Octave text data format, as created by `save -text` in Octave. It knows about most of the common types of variables, including the standard atomic (real and complex scalars, matrices, and *N*-d arrays, strings, ranges, and boolean scalars and matrices) and recursive (structs, cells, and lists) ones, but is not guaranteed to read all types. If a type is not recognized, a warning indicating the unknown type is issued, an attempt is made to skip the unknown entry, and `NULL` is used as its value. Note that this will give incorrect results, and maybe even errors, in the case of unknown recursive data types.
As Octave can read MATLAB binary files, one can make the contents of such files available to R by using Octave's load and save (as text) facilities as an intermediary step.
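The MATLAB-via-Octave route described above can be sketched as follows (filenames are hypothetical):

```r
## In Octave (not R), convert the MATLAB binary file to text format:
##   octave> load data.mat
##   octave> save -text data.txt

## Then in R:
library(foreign)
vars <- read.octave("data.txt")
str(vars)   # one named list component per Octave variable
```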
### Value
A list with one named component for each variable in the file.
### Author(s)
Stephen Eglen [[email protected]](mailto:[email protected]) and Kurt Hornik
### References
<https://www.gnu.org/software/octave/>
r None
`write.foreign` Write Text Files and Code to Read Them
-------------------------------------------------------
### Description
This function exports simple data frames to other statistical packages by writing the data as free-format text and writing a separate file of instructions for the other package to read the data.
### Usage
```
write.foreign(df, datafile, codefile,
package = c("SPSS", "Stata", "SAS"), ...)
```
### Arguments
| | |
| --- | --- |
| `df` | A data frame |
| `datafile` | Name of file for data output |
| `codefile` | Name of file for code output |
| `package` | Name of package |
| `...` | Other arguments for the individual `writeForeign` functions |
### Details
The work for this function is done by `foreign:::writeForeignStata`, `foreign:::writeForeignSAS` and `foreign:::writeForeignSPSS`. To add support for another package, eg Systat, create a function `writeForeignSystat` with the same first three arguments as `write.foreign`. This will be called from `write.foreign` when `package="Systat"`.
Numeric variables and factors are supported for all packages: dates and times (`Date`, `dates`, `date`, and `POSIXt` classes) and logical vectors are also supported for SAS and characters are supported for SPSS.
For `package="SAS"` there are optional arguments `dataname = "rdata"` taking a string that will be the SAS data set name, `validvarname` taking either `"V6"` or `"V7"`, and `libpath = NULL` taking a string that will be the directory where the target SAS dataset will be written when the generated SAS code has been run.
For `package="SPSS"` there is an optional argument `maxchars = 32L` taking an integer that causes the variable names (not variable labels) to be abbreviated to not more than `maxchars` chars. For compatibility with SPSS version 12 and before, change this to `maxchars = 8L`. In single byte locales with SPSS versions 13 or later, this can be set to `maxchars = 64L`.
For `package="SPSS"`, as a side effect, the decimal indicator is always set by `SET DECIMAL=DOT.` which may override user settings of the indicator or its default derived from the current locale.
### Value
Invisible `NULL`.
### Author(s)
Thomas Lumley and Stephen Weigand
### Examples
```
## Not run:
datafile <- tempfile()
codefile <- tempfile()
write.foreign(esoph, datafile, codefile, package="SPSS")
file.show(datafile)
file.show(codefile)
unlink(datafile)
unlink(codefile)
## End(Not run)
```
r None
`read.mtp` Read a Minitab Portable Worksheet
---------------------------------------------
### Description
Return a list with the data stored in a file as a Minitab Portable Worksheet.
### Usage
```
read.mtp(file)
```
### Arguments
| | |
| --- | --- |
| `file` | character variable with the name of the file to read. The file must be in Minitab Portable Worksheet format. |
### Value
A list with one component for each column, matrix, or constant stored in the Minitab worksheet.
### Note
This function was written around 1990 for the format current then. Later versions of Minitab appear to have added to the format.
### Author(s)
Douglas M. Bates
### References
<https://www.minitab.com/>
### Examples
```
## Not run:
read.mtp("ex1-10.mtp")
## End(Not run)
```
r None
`read.systat` Obtain a Data Frame from a Systat File
-----------------------------------------------------
### Description
`read.systat` reads a rectangular data file stored by the Systat `SAVE` command as (legacy) `*.sys` or more recently `*.syd` files.
### Usage
```
read.systat(file, to.data.frame = TRUE)
```
### Arguments
| | |
| --- | --- |
| `file` | character variable with the name of the file to read |
| `to.data.frame` | return a data frame (otherwise a list) |
### Details
The function only reads those Systat files that are rectangular data files (`mtype = 1`), and warns when files have non-standard variable name codings. The files tested were produced on MS-DOS and Windows: files for the Mac version of Systat have a completely different format.
The C code was originally written for an add-on module for Systat described in Bivand (1992). Variable names retain the trailing dollar in the list returned when `to.data.frame` is `FALSE`, and in that case character variables are returned as is and filled up to 12 characters with blanks on the right. The original function was limited to reading Systat files with up to 256 variables (a Systat limitation); it will now read up to 8192 variables.
If there is a user comment in the header this is returned as attribute `"comment"`. Such comments are always a multiple of 72 characters (with a maximum of 720 chars returned), normally padded with trailing spaces.
### Value
A data frame (or list) with one component for each variable in the saved data set.
### Author(s)
Roger Bivand
### References
Systat Manual, 1987, 1989
Bivand, R. S. (1992) SYSTAT-compatible software for modelling spatial dependence among observations. *Computers and Geosciences* **18**, 951–963.
### Examples
```
summary(iris)
iris.s <- read.systat(system.file("files/Iris.syd", package="foreign")[1])
str(iris.s)
summary(iris.s)
```
r None
`read.ssd` Obtain a Data Frame from a SAS Permanent Dataset, via read.xport
----------------------------------------------------------------------------
### Description
Generates a SAS program to convert the ssd contents to SAS transport format and then uses `read.xport` to obtain a data frame.
### Usage
```
read.ssd(libname, sectionnames,
tmpXport=tempfile(), tmpProgLoc=tempfile(), sascmd="sas")
```
### Arguments
| | |
| --- | --- |
| `libname` | character string defining the SAS library (usually a directory reference) |
| `sectionnames` | character vector giving member names. These are files in the `libname` directory. They will usually have a `.ssd0x` or `.sas7bdat` extension, which should be omitted. Use of ASCII names of at most 8 characters is strongly recommended. |
| `tmpXport` | character string: location where temporary xport format archive should reside – defaults to a randomly named file in the session temporary directory, which will be removed. |
| `tmpProgLoc` | character string: location where temporary conversion SAS program should reside – defaults to a randomly named file in session temporary directory, which will be removed on successful operation. |
| `sascmd` | character string giving full path to SAS executable. |
### Details
Creates a SAS program and runs it.
Error handling is primitive.
### Value
A data frame if all goes well, or `NULL` with warnings and some enduring side effects (log file for auditing)
### Note
**This requires SAS to be available.** If you have a SAS dataset without access to SAS you will need another product to convert it to a format such as `.csv`, for example ‘Stat/Transfer’ or ‘DBMS/Copy’ or the ‘SAS System Viewer’ (Windows only).
SAS requires section names to be no more than 8 characters. This is worked around by the use of symbolic links: these are barely supported on Windows.
### Author(s)
For Unix: VJ Carey [[email protected]](mailto:[email protected])
### See Also
`<read.xport>`
### Examples
```
## if there were some files on the web we could get a real
## runnable example
## Not run:
R> list.files("trialdata")
[1] "baseline.sas7bdat" "form11.sas7bdat" "form12.sas7bdat"
[4] "form13.sas7bdat" "form22.sas7bdat" "form23.sas7bdat"
[7] "form3.sas7bdat" "form4.sas7bdat" "form48.sas7bdat"
[10] "form50.sas7bdat" "form51.sas7bdat" "form71.sas7bdat"
[13] "form72.sas7bdat" "form8.sas7bdat" "form9.sas7bdat"
[16] "form90.sas7bdat" "form91.sas7bdat"
R> baseline <- read.ssd("trialdata", "baseline")
R> form90 <- read.ssd("trialdata", "form90")
## Or for a Windows example
sashome <- "/Program Files/SAS/SAS 9.1"
read.ssd(file.path(sashome, "core", "sashelp"), "retail",
sascmd = file.path(sashome, "sas.exe"))
## End(Not run)
```
r None
`read.spss` Read an SPSS Data File
-----------------------------------
### Description
`read.spss` reads a file stored by the SPSS `save` or `export` commands.
This was originally written in 2000 and has limited support for changes in SPSS formats since (which have not been many).
### Usage
```
read.spss(file, use.value.labels = TRUE, to.data.frame = FALSE,
max.value.labels = Inf, trim.factor.names = FALSE,
trim_values = TRUE, reencode = NA, use.missings = to.data.frame,
sub = ".", add.undeclared.levels = c("sort", "append", "no"),
duplicated.value.labels = c("append", "condense"),
duplicated.value.labels.infix = "_duplicated_", ...)
```
### Arguments
| | |
| --- | --- |
| `file` | character string: the name of the file or URL to read. |
| `use.value.labels` | logical: convert variables with value labels into **R** factors with those levels? This is only done if there are at least as many labels as values of the variable (when values without a matching label are returned as `NA`). |
| `to.data.frame` | logical: return a data frame? |
| `max.value.labels` | logical: only variables with value labels and at most this many unique values will be converted to factors if `TRUE`. |
| `trim.factor.names` | logical: trim trailing spaces from factor levels? |
| `trim_values` | logical: should values and value labels have trailing spaces ignored when matching for `use.value.labels = TRUE`? |
| `reencode` | logical: should character strings be re-encoded to the current locale. The default, `NA`, means to do so in UTF-8 or latin-1 locales, only. Alternatively a character string specifying an encoding to assume for the file. |
| `use.missings` | logical: should information on user-defined missing values be used to set the corresponding values to `NA`? |
| `sub` | character string: If not `NA` it is used by `[iconv](../../base/html/iconv)` to replace any non-convertible bytes in character/factor input. Default is `"."`. For back compatibility with foreign versions <= 0.8-68 use `sub=NA`. |
| `add.undeclared.levels` | character: specify how to handle variables with at least one value label and further non-missing values that have no value label (like a factor levels in R). For `"sort"` (the default) it adds undeclared factor levels to the already declared levels (and labels) and sort them according to level, for `"append"` it appends undeclared factor levels to declared levels (and labels) without sorting, and for `"no"` this does not convert to factor in case of numeric SPSS levels (not labels), and still converts to factor if the SPSS levels are characters and `to.data.frame=TRUE`. For back compatibility with foreign versions <= 0.8-68 use `add.undeclared.levels="no"` (not recommended as this may convert some values with missing corresponding value labels to `NA`). |
| `duplicated.value.labels` | character: what to do with duplicated value labels for different levels. For `"append"` (the default), the first original value label is kept while further duplicated labels are renamed to `paste0(label, duplicated.value.labels.infix, level)`, for `"condense"`, all levels with identical labels are condensed into exactly the first of these levels in R. Back compatibility with foreign versions <= 0.8-68 is not given as R versions >= 3.4.0 no longer support duplicated factor labels. |
| `duplicated.value.labels.infix` | character: the infix used for labels of factor levels with duplicated value labels in SPSS (default `"_duplicated_"`) if `duplicated.value.labels="append"`. |
| `...` | passed to `[as.data.frame](../../base/html/as.data.frame)` if `to.data.frame = TRUE`. |
### Details
This uses modified code from the PSPP project (<http://www.gnu.org/software/pspp/>) for reading the SPSS formats.
If the filename appears to be a URL (of schemes http:, ftp: or https:) the URL is first downloaded to a temporary file and then read. (https: is supported where supported by `[download.file](../../utils/html/download.file)` with its current default `method`.)
Occasionally in SPSS, value labels will be added to some values of a continuous variable (e.g. to distinguish different types of missing data), and you will not want these variables converted to factors. By setting `max.value.labels` you can specify that variables with a large number of distinct values are not converted to factors even if they have value labels.
If SPSS variable labels are present, they are returned as the `"variable.labels"` attribute of the answer.
Fixed length strings (including value labels) are padded on the right with spaces by SPSS, and so are read that way by **R**. The default argument `trim_values=TRUE` causes trailing spaces to be ignored when matching to value labels, as examples have been seen where the strings and the value labels had different amounts of padding. See the examples for `[sub](../../base/html/grep)` for ways to remove trailing spaces in character data.
URL <https://docs.microsoft.com/en-us/windows/win32/intl/code-page-identifiers> provides a list of translations from Windows codepage numbers to encoding names that `[iconv](../../base/html/iconv)` is likely to know about and so suitable values for `reencode`. Automatic re-encoding is attempted for apparent codepages of 200 or more in a UTF-8 or latin-1 locale: some other high-numbered codepages can be re-encoded on most systems, but the encoding names are platform-dependent (see `[iconvlist](../../base/html/iconv)`).
### Value
A list (or optionally a data frame) with one component for each variable in the saved data set.
If what looks like a Windows codepage was recorded in the SPSS file, it is attached (as a number) as attribute `"codepage"` to the result.
There may be attributes `"label.table"` and `"variable.labels"`. Attribute `"label.table"` is a named list of value labels with one element per variable, either `NULL` or a named character vector. Attribute `"variable.labels"` is a named character vector with names the short variable names and elements the long names.
If there are user-defined missing values, there will be an attribute `"Missings"`. This is a named list with one list element per variable. Each element has an element `type`, a length-one character vector giving the type of missingness, and may also have an element `value` with the values corresponding to missingness. This is a complex subject (where the **R** and C source code for `read.spss` is the main documentation), but the simplest cases are types `"one"`, `"two"` and `"three"` with a corresponding number of (real or string) values whose labels can be found from the `"label.table"` attribute. Other possibilities are a finite or semi-infinite range, possibly plus a single value. See also <http://www.gnu.org/software/pspp/manual/html_node/Missing-Observations.html#Missing-Observations>.
### Note
If SPSS value labels are converted to factors the underlying numerical codes will not in general be the same as the SPSS numerical values, since the numerical codes in R are always *1,2,3,…*.
You may see warnings about the file encoding for SPSS `save` files: it is possible such files contain non-ASCII character data which need re-encoding. The most common occurrence is Windows codepage 1252, a superset of Latin-1. The encoding is recorded (as an integer) in attribute `"codepage"` of the result if it looks like a Windows codepage. Automatic re-encoding is done only in UTF-8 and latin-1 locales: see argument `reencode`.
### Author(s)
Saikat DebRoy and the R-core team
### See Also
A different interface also based on the PSPP codebase is available in package memisc: see its help for `spss.system.file`.
### Examples
```
(sav <- system.file("files", "electric.sav", package = "foreign"))
dat <- read.spss(file=sav)
str(dat) # list structure with attributes
dat <- read.spss(file=sav, to.data.frame=TRUE)
str(dat) # now a data.frame
### Now we use an example file that is not very well structured and
### hence may need some special treatment with appropriate argument settings.
### Expect lots of warnings as value labels (corresponding to R factor labels) are incomplete,
### and an unsupported long string variable is present in the data
(sav <- system.file("files", "testdata.sav", package = "foreign"))
### Examples for add.undeclared.levels:
## add.undeclared.levels = "sort" (default):
x.sort <- read.spss(file=sav, to.data.frame = TRUE)
## add.undeclared.levels = "append":
x.append <- read.spss(file=sav, to.data.frame = TRUE,
add.undeclared.levels = "append")
## add.undeclared.levels = "no":
x.no <- read.spss(file=sav, to.data.frame = TRUE,
add.undeclared.levels = "no")
levels(x.sort$factor_n_undeclared)
levels(x.append$factor_n_undeclared)
str(x.no$factor_n_undeclared)
### Examples for duplicated.value.labels:
## duplicated.value.labels = "append" (default)
x.append <- read.spss(file=sav, to.data.frame=TRUE)
## duplicated.value.labels = "condense"
x.condense <- read.spss(file=sav, to.data.frame=TRUE,
duplicated.value.labels = "condense")
levels(x.append$factor_n_duplicated)
levels(x.condense$factor_n_duplicated)
as.numeric(x.append$factor_n_duplicated)
as.numeric(x.condense$factor_n_duplicated)
## Long Strings (>255 chars) are imported in consecutive separate variables
## (see warning about subtype 14):
x <- read.spss(file=sav, to.data.frame=TRUE, stringsAsFactors=FALSE)
cat.long.string <- function(x, w=70) cat(paste(strwrap(x, width=w), "\n"))
## first part: x$string_500:
cat.long.string(x$string_500)
## second part: x$STRIN0:
cat.long.string(x$STRIN0)
## complete long string:
long.string <- apply(x[,c("string_500", "STRIN0")], 1, paste, collapse="")
cat.long.string(long.string)
```
r None
`read.dbf` Read a DBF File
---------------------------
### Description
The function reads a DBF file into a data frame, converting character fields to factors, and trying to respect NULL fields.
The DBF format is documented but not much adhered to. There is no guarantee this will read all DBF files.
### Usage
```
read.dbf(file, as.is = FALSE)
```
### Arguments
| | |
| --- | --- |
| `file` | name of input file |
| `as.is` | should character vectors not be converted to factors? |
### Details
DBF is the extension used for files written for the ‘XBASE’ family of database languages, ‘covering the dBase, Clipper, FoxPro, and their Windows equivalents Visual dBase, Visual Objects, and Visual FoxPro, plus some older products’ (<https://www.clicketyclick.dk/databases/xbase/format/>). Most of these follow the file structure used by Ashton-Tate's dBase II, III or 4 (later owned by Borland).
`read.dbf` is based on C code from <http://shapelib.maptools.org/> which implements the ‘XBASE’ specification. It can convert fields of type `"L"` (logical), `"N"` and `"F"` (numeric and float) and `"D"` (dates): all other field types are read as-is as character vectors. A numeric field is read as an **R** integer vector if it is encoded to have no decimals, otherwise as a numeric vector. However, if the numbers are too large to fit into an integer vector, it is changed to numeric. Note that it is possible to read integers that cannot be represented exactly even as doubles: this sometimes occurs if IDs are incorrectly coded as numeric.
### Value
A data frame of data from the DBF file; note that the field names are adjusted to use in R using `[make.names](../../base/html/make.names)(unique=TRUE)`.
There is an attribute `"data_type"` giving the single-character dBase types for each field.
### Note
Inability to read a particular ‘DBF’ file is not a bug: this is a convenience function especially for shapefiles.
### Author(s)
Nicholas Lewin-Koh and Roger Bivand; shapelib by Frank Warmerdam
### References
<http://shapelib.maptools.org/>.
### See Also
`<write.dbf>`
### Examples
```
x <- read.dbf(system.file("files/sids.dbf", package="foreign")[1])
str(x)
summary(x)
```
r None
`read.xport` Read a SAS XPORT Format Library
---------------------------------------------
### Description
Reads a file as a SAS XPORT format library and returns a list of data.frames.
### Usage
```
read.xport(file, ...)
```
### Arguments
| | |
| --- | --- |
| `file` | character variable with the name of the file to read. The file must be in SAS XPORT format. |
| `...` | passed to `[as.data.frame](../../base/html/as.data.frame)` when creating the data frames. |
### Value
If there is more than one dataset in the XPORT format library, a named list of data frames, otherwise a data frame. The columns of the data frames will be either numeric (corresponding to numeric in SAS) or factor (corresponding to character in SAS). All SAS numeric missing values (including special missing values represented by `._`, `.A` to `.Z` by SAS) are mapped to **R** `NA`.
Trailing blanks are removed from character columns before conversion to a factor. Some sources claim that character missing values in SAS are represented by `' '` or `''`: these are not treated as **R** missing values.
### Author(s)
Saikat DebRoy [[email protected]](mailto:[email protected])
### References
SAS Technical Support document TS-140: “The Record Layout of a Data Set in SAS Transport (XPORT) Format” available at <https://support.sas.com/techsup/technote/ts140.pdf>.
### See Also
`<lookup.xport>`
### Examples
```
## Not run: ## no XPORT file is installed
read.xport("test.xpt")
## End(Not run)
```
r None
`read.arff` Read Data from ARFF Files
--------------------------------------
### Description
Reads data from Weka Attribute-Relation File Format (ARFF) files.
### Usage
```
read.arff(file)
```
### Arguments
| | |
| --- | --- |
| `file` | a character string with the name of the ARFF file to read from, or a `[connection](../../base/html/connections)` which will be opened if necessary, and if so closed at the end of the function call. |
### Value
A data frame containing the data from the ARFF file.
### References
Attribute-Relation File Format <https://waikato.github.io/weka-wiki/formats_and_processing/arff/>.
### See Also
`<write.arff>`; functions `write.arff` and `read.arff` in package RWeka which provide some support for logicals via conversion to or from factors.
r None
`write.dta` Write Files in Stata Binary Format
-----------------------------------------------
### Description
Writes the data frame to file in the Stata binary format. Does not write array variables unless they can be `[drop](../../base/html/drop)`-ed to a vector.
Frozen: will not support Stata formats after 10 (also used by Stata 11).
### Usage
```
write.dta(dataframe, file, version = 7L,
convert.dates = TRUE, tz = "GMT",
convert.factors = c("labels", "string", "numeric", "codes"))
```
### Arguments
| | |
| --- | --- |
| `dataframe` | a data frame. |
| `file` | character string giving filename. |
| `version` | integer: Stata version: 6, 7, 8 and 10 are supported, and 9 is mapped to 8, 11 to 10. |
| `convert.dates` | logical: convert `Date` and `POSIXct` objects: see section ‘Dates’. |
| `tz` | timezone for date conversion. |
| `convert.factors` | how to handle factors. |
### Details
The major difference between supported file formats in Stata versions is that version 7.0 and later allow 32-character variable names (5 and 6 were restricted to 8-character names). The `abbreviate` function is used to trim variable names to the permitted length. A warning is given if this is needed and it is an error for the abbreviated names not to be unique. Each version of Stata is claimed to be able to read all earlier formats.
The columns in the data frame become variables in the Stata data set. Missing values are handled correctly.
There are four options for handling factors. The default is to use Stata ‘value labels’ for the factor levels. With `convert.factors = "string"`, the factor levels are written as strings (the name of the value label is taken from the `"val.labels"` attribute if it exists or the variable name otherwise). With `convert.factors = "numeric"` the numeric values of the levels are written, or `NA` if they cannot be coerced to numeric. Finally, `convert.factors = "codes"` writes the underlying integer codes of the factors. This last used to be the only available method and is provided largely for backwards compatibility.
If the `"label.table"` attribute contains value labels with names not already attached to a variable (not the variable name or name from `"val.labels"`) then these will be written out as well.
If the `"datalabel"` attribute contains a string, it is written out as the dataset label otherwise the dataset label is `"Written by R."`.
If the `"expansion.table"` attribute exists, expansion fields are written. This attribute should contain a `[list](../../base/html/list)` where each element is a `[character](../../base/html/character)` vector of length three. The first vector element contains the name of a variable or "\_dta" (meaning the dataset). The second element contains the characteristic name. The third contains the associated data.
If the `"val.labels"` attribute contains a `[character](../../base/html/character)` vector with a string label for each value then this is written as the value labels. Otherwise the variable names are used.
If the `"var.labels"` attribute contains a `[character](../../base/html/character)` vector with a string label for each variable then this is written as the variable labels. Otherwise the variable names are repeated as variable labels.
For Stata 8 or later use the default `version = 7` – the only advantage of Stata 8 format over 7 is that it can represent multiple different missing value types, and **R** doesn't have them. Stata 10/11 allows longer format lists, but **R** does not make use of them.
Note that the Stata formats are documented to use ASCII strings – **R** does not enforce this, but use of non-ASCII character strings will not be portable as the encoding is not recorded. Up to 244 bytes are allowed in character data, and longer strings will be truncated with a warning.
Stata uses some large numerical values to represent missing values. This function does not currently check, and hence integers greater than `2147483620` and doubles greater than `8.988e+307` may be misinterpreted by Stata.
### Value
`NULL`
### Dates
Unless disabled by argument `convert.dates = FALSE`, **R** date and date-time objects (`POSIXt` classes) are converted into the Stata date format, the number of days since 1960-01-01. (For date-time objects this may lose information.) Stata can be told that these are dates by
```
format xdate %td;
```
It is possible to pass objects of class `POSIXct` to Stata to be treated as one of its versions of date-times. Stata uses the number of milliseconds since 1960-01-01, either excluding (format `%tc`) or counting (format `%tC`) leap seconds. So either an object of class `POSIXct` can be passed to Stata with `convert.dates = FALSE` and converted in Stata, or `315619200` should be added and then multiplied by `1000` before passing to `write.dta` and assigning format `%tc`. Stata's comments on the first route are at <https://www.stata.com/manuals13/ddatetime.pdf>, but at the time of writing were wrong: **R** uses POSIX conventions and hence does not count leap seconds.
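The `%tc` arithmetic described above (315619200 seconds is the offset from 1960-01-01 to the POSIX epoch) can be worked through as:

```r
x <- as.POSIXct("2000-01-01 00:00:00", tz = "GMT")
## shift the origin from 1970-01-01 to 1960-01-01, then go to milliseconds
stata.tc <- (as.numeric(x) + 315619200) * 1000
stata.tc   # milliseconds since 1960-01-01, not counting leap seconds
```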
### Author(s)
Thomas Lumley and R-core members: support for value labels by Brian Quistorff.
### References
Stata 6.0 Users Manual, Stata 7.0 Programming manual, Stata online help (version 8 and later, also <https://www.stata.com/help.cgi?dta_114> and <https://www.stata.com/help.cgi?dta_113>) describe the file formats.
### See Also
`<read.dta>`, `[attributes](../../base/html/attributes)`, `[DateTimeClasses](../../base/html/datetimeclasses)`, `[abbreviate](../../base/html/abbreviate)`
### Examples
```
write.dta(swiss, swissfile <- tempfile())
read.dta(swissfile)
```
r None
`lookup.xport` Lookup Information on a SAS XPORT Format Library
----------------------------------------------------------------
### Description
Scans a file as a SAS XPORT format library and returns a list containing information about the SAS library.
### Usage
```
lookup.xport(file)
```
### Arguments
| | |
| --- | --- |
| `file` | character variable with the name of the file to read. The file must be in SAS XPORT format. |
### Value
A list with one component for each dataset in the XPORT format library.
### Author(s)
Saikat DebRoy
### References
SAS Technical Support document TS-140: “The Record Layout of a Data Set in SAS Transport (XPORT) Format” available as <https://support.sas.com/techsup/technote/ts140.pdf>.
### See Also
`<read.xport>`
### Examples
```
## Not run: ## no XPORT file is installed.
lookup.xport("test.xpt")
## End(Not run)
```
r None
`read.S` Read an S3 Binary or data.dump File
---------------------------------------------
### Description
Reads binary data files or `data.dump` files that were produced in S version 3.
### Usage
```
data.restore(file, print = FALSE, verbose = FALSE, env = .GlobalEnv)
read.S(file)
```
### Arguments
| | |
| --- | --- |
| `file` | the filename of the S-PLUS `data.dump` or binary file. |
| `print` | whether to print the name of each object as read from the file. |
| `verbose` | whether to print the name of every subitem within each object. |
| `env` | environment within which to create the restored object(s). |
### Details
`read.S` can read the binary files produced in some older versions of S-PLUS on either Windows (versions 3.x, 4.x, 2000) or Unix (version 3.x with 4 byte integers). It automatically detects whether the file was produced on a big- or little-endian machine and adapts itself accordingly.
`data.restore` can read a similar range of files produced by `data.dump` and for newer versions of S-PLUS, those from `data.dump(....., oldStyle=TRUE)`.
Not all S3 objects can be handled in the current version. The most frequently encountered exceptions are functions and expressions; you will also have trouble with objects that contain model formulas. In particular, comments will be lost from function bodies, and the argument lists of functions will often be changed.
### Value
For `read.S`, an R version of the S3 object.
For `data.restore`, the name of the file.
### Author(s)
Duncan Murdoch
### Examples
```
## if you have an S-PLUS _Data file containing 'myobj'
## Not run: read.S(file.path("_Data", "myobj"))
data.restore("dumpdata", print = TRUE)
## End(Not run)
```
r None
`compile` Byte Code Compiler
-----------------------------
### Description
These functions provide an interface to a byte code compiler for **R**.
### Usage
```
cmpfun(f, options = NULL)
compile(e, env = .GlobalEnv, options = NULL, srcref = NULL)
cmpfile(infile, outfile, ascii = FALSE, env = .GlobalEnv,
verbose = FALSE, options = NULL, version = NULL)
loadcmp(file, envir = .GlobalEnv, chdir = FALSE)
disassemble(code)
enableJIT(level)
compilePKGS(enable)
getCompilerOption(name, options)
setCompilerOptions(...)
```
### Arguments
| | |
| --- | --- |
| `f` | a closure. |
| `options` | list of named compiler options: see ‘Details’. |
| `env` | the top level environment for the compiling. |
| `srcref` | initial source reference for the expression. |
| `file,infile,outfile` | pathnames; outfile defaults to infile with a ‘.Rc’ extension in place of any existing extension. |
| `ascii` | logical; should the compiled file be saved in ascii format? |
| `verbose` | logical; should the compiler show what is being compiled? |
| `version` | the workspace format version to use. `NULL` specifies the current default format (3). Version 1 was the default from **R** 0.99.0 to **R** 1.3.1 and version 2 from **R** 1.4.0 to 3.5.0. Version 3 is supported from **R** 3.5.0. |
| `envir` | environment to evaluate loaded expressions in. |
| `chdir` | logical; change directory before evaluation? |
| `code` | byte code expression or compiled closure |
| `e` | expression to compile. |
| `level` | integer; the JIT level to use (`0` to `3`, or negative to *return* it). |
| `enable` | logical; enable compiling packages if `TRUE`. |
| `name` | character string; name of option to return. |
| `...` | named compiler options to set. |
### Details
The function `cmpfun` compiles the body of a closure and returns a new closure with the same formals and the body replaced by the compiled body expression.
`compile` compiles an expression into a byte code object; the object can then be evaluated with `eval`.
`cmpfile` parses the expressions in `infile`, compiles them, and writes the compiled expressions to `outfile`. If `outfile` is not provided, it is formed from `infile` by replacing or appending a `.Rc` suffix.
`loadcmp` is used to load compiled files. It is similar to `sys.source`, except that its default loading environment is the global environment rather than the base environment.
`disassemble` produces a printed representation of the code that may be useful to give a hint of what is going on.
`enableJIT` enables or disables just-in-time (JIT) compilation. JIT is disabled if the argument is 0. If `level` is 1 then larger closures are compiled before their first use. If `level` is 2, then some small closures are also compiled before their second use. If `level` is 3 then in addition all top level loops are compiled before they are executed. JIT level 3 requires the compiler option `optimize` to be 2 or 3. The JIT level can also be selected by starting **R** with the environment variable `R_ENABLE_JIT` set to one of these values. Calling `enableJIT` with a negative argument returns the current JIT level. The default JIT level is `3`.
`compilePKGS` enables or disables compiling packages when they are installed. This requires that the package uses lazy loading as compilation occurs as functions are written to the lazy loading data base. This can also be enabled by starting **R** with the environment variable `_R_COMPILE_PKGS_` set to a positive integer value. This should not be enabled outside package installation, because it causes any serialized function to be compiled, which comes with time and space overhead. `R_COMPILE_PKGS` can be used, instead, to instruct `INSTALL` to enable/disable compilation of packages during installation.
Currently the compiler warns about a variety of things. It does this by using `cat` to print messages. Eventually this should use the condition handling mechanism.
The `options` argument can be used to control compiler operation. There are currently four options: `optimize`, `suppressAll`, `suppressUndefined`, and `suppressNoSuperAssignVar`. `optimize` specifies the optimization level, an integer from `0` to `3` (the current out-of-the-box default is `2`). `suppressAll` should be a scalar logical; if `TRUE` no messages will be shown (this is the default). `suppressUndefined` can be `TRUE` to suppress all messages about undefined variables, or it can be a character vector of the names of variables for which messages should not be shown. `suppressNoSuperAssignVar` can be `TRUE` to suppress messages about super assignments to a variable for which no binding is visible at compile time. During compilation of packages, `suppressAll` is currently `FALSE`, `suppressUndefined` is `TRUE` and `suppressNoSuperAssignVar` is `TRUE`.
`getCompilerOption` returns the value of the specified option. The default value is returned unless a value is supplied in the `options` argument; the `options` argument is primarily for internal use. `setCompilerOptions` sets the default option values. Options to set are identified by argument names, e.g. `setCompilerOptions(suppressAll = TRUE, optimize = 3)`. It returns a named list of the previous values.
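As a minimal sketch of the option interface described above (option names as listed in this section), querying, changing, and restoring the compiler defaults could look like this:

```
library(compiler)

## query the current default optimization level
getCompilerOption("optimize")

## raise it and silence compiler messages; the previous
## values are returned as a named list
old <- setCompilerOptions(optimize = 3, suppressAll = TRUE)

## compile a closure under the new defaults
f <- cmpfun(function(x) x + 1)

## restore the previous settings
do.call(setCompilerOptions, old)
```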
Calling the compiler a byte code compiler is actually a bit of a misnomer: the external representation of code objects currently uses `int` operands, and when compiled with `gcc` the internal representation is actually threaded code rather than byte code.
### Author(s)
Luke Tierney
### Examples
```
oldJIT <- enableJIT(0)
# a simple example
f <- function(x) x+1
fc <- cmpfun(f)
fc(2)
disassemble(fc)
# old R version of lapply
la1 <- function(X, FUN, ...) {
    FUN <- match.fun(FUN)
    if (!is.list(X))
        X <- as.list(X)
    rval <- vector("list", length(X))
    for(i in seq_along(X))
        rval[i] <- list(FUN(X[[i]], ...))
    names(rval) <- names(X) # keep `names' !
    return(rval)
}
# a small variation
la2 <- function(X, FUN, ...) {
    FUN <- match.fun(FUN)
    if (!is.list(X))
        X <- as.list(X)
    rval <- vector("list", length(X))
    for(i in seq_along(X)) {
        v <- FUN(X[[i]], ...)
        if (is.null(v)) rval[i] <- list(v)
        else rval[[i]] <- v
    }
    names(rval) <- names(X) # keep `names' !
    return(rval)
}
# Compiled versions
la1c <- cmpfun(la1)
la2c <- cmpfun(la2)
# some timings
x <- 1:10
y <- 1:100
system.time(for (i in 1:10000) lapply(x, is.null))
system.time(for (i in 1:10000) la1(x, is.null))
system.time(for (i in 1:10000) la1c(x, is.null))
system.time(for (i in 1:10000) la2(x, is.null))
system.time(for (i in 1:10000) la2c(x, is.null))
system.time(for (i in 1:1000) lapply(y, is.null))
system.time(for (i in 1:1000) la1(y, is.null))
system.time(for (i in 1:1000) la1c(y, is.null))
system.time(for (i in 1:1000) la2(y, is.null))
system.time(for (i in 1:1000) la2c(y, is.null))
enableJIT(oldJIT)
```
r None
`BunchKaufman-methods` Bunch-Kaufman Decomposition Methods
-----------------------------------------------------------
### Description
The Bunch-Kaufman Decomposition of a square symmetric matrix *A* is *A = P LDL' P'* where *P* is a permutation matrix, *L* is *unit*-lower triangular and *D* is *block*-diagonal with blocks of dimension *1 x 1* or *2 x 2*.
This is a generalization of a pivoting *LDL'* Cholesky decomposition.
### Usage
```
## S4 method for signature 'dsyMatrix'
BunchKaufman(x, ...)
## S4 method for signature 'dspMatrix'
BunchKaufman(x, ...)
## S4 method for signature 'matrix'
BunchKaufman(x, uplo = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a symmetric square matrix. |
| `uplo` | optional string, `"U"` or `"L"` indicating which “triangle” half of `x` should determine the result. The default is `"U"` unless `x` has a `uplo` slot which is the case for those inheriting from class `[symmetricMatrix](symmetricmatrix-class)`, where `x@uplo` will be used. |
| `...` | potentially further arguments passed to methods. |
### Details
FIXME: We really need an `expand()` method in order to *work* with the result!
### Value
an object of class `[BunchKaufman](cholesky-class)`, which can also be used as a (triangular) matrix directly. Somewhat amazingly, it inherits its `uplo` slot from `x`.
### Methods
Currently, only methods for **dense** numeric symmetric matrices are implemented. To compute the Bunch-Kaufman decomposition, the methods use one of two LAPACK routines:
`x = "dspMatrix"`
uses routine `dsptrf()`; whereas
`x = "dsyMatrix"` and `x = "matrix"`
use `dsytrf()`.
### References
The original LAPACK source code, including documentation; <https://www.netlib.org/lapack/double/dsytrf.f> and <https://www.netlib.org/lapack/double/dsptrf.f>
### See Also
The resulting class, `[BunchKaufman](cholesky-class)`. Related decompositions are the LU, `<lu>`, and the Cholesky, `<chol>` (and for *sparse* matrices, `[Cholesky](cholesky)`).
### Examples
```
data(CAex)
dim(CAex)
isSymmetric(CAex)# TRUE
CAs <- as(CAex, "symmetricMatrix")
if(FALSE) # no method defined yet for *sparse* :
bk. <- BunchKaufman(CAs)
## does apply to *dense* symmetric matrices:
bkCA <- BunchKaufman(as(CAs, "denseMatrix"))
bkCA
image(bkCA)# shows how sparse it is, too
str(R.CA <- as(bkCA, "sparseMatrix"))
## an upper triangular 72x72 matrix with only 144 non-zero entries
```
r None
`nMatrix-class` Class "nMatrix" of Non-zero Pattern Matrices
-------------------------------------------------------------
### Description
The `nMatrix` class is the virtual “mother” class of all *non-zero pattern* (or simply “patter**n**”) matrices in the Matrix package.
### Slots
Common to *all* matrix objects in the package:
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
`Dimnames`:
list of length two; each component either `NULL` or a `[character](../../base/html/character)` vector of length equal to the corresponding `Dim` element.
### Methods
There is a bunch of coercion methods (for `[as](../../methods/html/as)(..)`), e.g.,
coerce
`signature(from = "matrix", to = "nMatrix")`: Note that these coercions (must) coerce `[NA](../../base/html/na)`s to non-zero, hence conceptually `TRUE`. This is particularly important when `[sparseMatrix](sparsematrix-class)` objects are coerced to `"nMatrix"` and hence to `[nsparseMatrix](nsparsematrix-classes)`.
coerce
`signature(from = "dMatrix", to = "nMatrix")`, and
coerce
`signature(from = "lMatrix", to = "nMatrix")`: For dense matrices with `[NA](../../base/html/na)`s, these coercions are valid since Matrix version 1.2.0 (still with a `[warning](../../base/html/warning)` or a `[message](../../base/html/message)` if `"Matrix.warn"`, or `"Matrix.verbose"` `[options](../../base/html/options)` are set.)
coerce
`signature(from = "nMatrix", to = "matrix")`: ...
coerce
`signature(from = "nMatrix", to = "dMatrix")`: ...
coerce
`signature(from = "nMatrix", to = "lMatrix")`: ...
— — —
Additional methods contain group methods, such as
Ops
`signature(e1 = "nMatrix", e2 = "....")`, ...
Arith
`signature(e1 = "nMatrix", e2 = "....")`, ...
Compare
`signature(e1 = "nMatrix", e2 = "....")`, ...
Logic
`signature(e1 = "nMatrix", e2 = "....")`, ...
Summary
`signature(x = "nMatrix", "....")`, ...
### See Also
The classes `[lMatrix](dmatrix-class)`, `[nsparseMatrix](nsparsematrix-classes)`, and the mother class, `[Matrix](matrix-class)`.
### Examples
```
getClass("nMatrix")
L3 <- Matrix(upper.tri(diag(3)))
L3 # an "ltCMatrix"
as(L3, "nMatrix") # -> ntC*
## similar, not using Matrix()
as(upper.tri(diag(3)), "nMatrix")# currently "ngTMatrix"
```
r None
`diagU2N` Transform Triangular Matrices from Unit Triangular to General Triangular and Back
--------------------------------------------------------------------------------------------
### Description
Transform a triangular matrix `x`, i.e., of `[class](../../base/html/class)` `"[triangularMatrix](triangularmatrix-class)"`, from (internally!) unit triangular (“unitriangular”) to “general” triangular (`diagU2N(x)`) or back (`diagN2U(x)`). Note that the latter, `diagN2U(x)`, also sets the diagonal to one in cases where `diag(x)` was not all one.
`.diagU2N(x)` assumes but does *not* check that `x` is a `[triangularMatrix](triangularmatrix-class)` with `diag` slot `"U"`, and should hence be used with care.
### Usage
```
diagN2U(x, cl = getClassDef(class(x)), checkDense = FALSE)
diagU2N(x, cl = getClassDef(class(x)), checkDense = FALSE)
.diagU2N(x, cl, checkDense = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | a `[triangularMatrix](triangularmatrix-class)`, often sparse. |
| `cl` | (optional, for speedup only:) class (definition) of `x`. |
| `checkDense` | logical indicating if dense (see `[denseMatrix](densematrix-class)`) matrices should be considered at all; i.e., when false, as per default, the result will be sparse even when `x` is dense. |
### Details
The concept of unit triangular matrices with a `diag` slot of `"U"` stems from LAPACK.
### Value
a triangular matrix of the same `[class](../../base/html/class)` but with a different `diag` slot. For `diagU2N` (semantically) with identical entries as `x`, whereas in `diagN2U(x)`, the off-diagonal entries are unchanged and the diagonal is set to all `1` even if it was not previously.
### Note
Such internal storage details should rarely be of relevance to the user. Hence, these functions really are rather *internal* utilities.
### See Also
`"[triangularMatrix](triangularmatrix-class)"`, `"[dtCMatrix](dtcmatrix-class)"`.
### Examples
```
(T <- Diagonal(7) + triu(Matrix(rpois(49, 1/4), 7,7), k = 1))
(uT <- diagN2U(T)) # "unitriangular"
(t.u <- diagN2U(10*T))# changes the diagonal!
stopifnot(all(T == uT), diag(t.u) == 1,
identical(T, diagU2N(uT)))
T[upper.tri(T)] <- 5
T <- diagN2U(as(T,"triangularMatrix"))
stopifnot(T@diag == "U")
dT <- as(T, "denseMatrix")
dt. <- diagN2U(dT)
dtU <- diagN2U(dT, checkDense=TRUE)
stopifnot(is(dtU, "denseMatrix"), is(dt., "sparseMatrix"),
all(dT == dt.), all(dT == dtU),
dt.@diag == "U", dtU@diag == "U")
```
r None
`mat2triplet` Map Matrix to its Triplet Representation
-------------------------------------------------------
### Description
From an **R** object coercible to `"[TsparseMatrix](tsparsematrix-class)"`, typically a (sparse) matrix, produce its triplet representation which may collapse to a “Duplet” in the case of binary aka pattern, such as `"[nMatrix](nmatrix-class)"` objects.
### Usage
```
mat2triplet(x, uniqT = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | any **R** object for which `as(x, "[TsparseMatrix](tsparsematrix-class)")` works; typically a `[matrix](../../base/html/matrix)` of one of the Matrix package matrices. |
| `uniqT` | `[logical](../../base/html/logical)` indicating if the triplet representation should be ‘unique’ in the sense of `[uniqTsparse](uniqtsparse)()`. |
### Value
A `[list](../../base/html/list)`, typically with three components,
| | |
| --- | --- |
| `i` | vector of row indices for all non-zero entries of `x` |
| `j` | vector of column indices for all non-zero entries of `x` |
| `x` | vector of all non-zero entries of `x`; exists **only** when `as(x, "TsparseMatrix")` is **not** a `"[nsparseMatrix](nsparsematrix-classes)"`. |
Note that the `[order](../../base/html/order)` of the entries is determined by the coercion to `"[TsparseMatrix](tsparsematrix-class)"` and hence typically with increasing `j` (and increasing `i` within ties of `j`).
### Note
The `mat2triplet()` utility was created to be a more efficient and more predictable substitute for `[summary](../../base/html/summary)(<sparseMatrix>)`. UseRs have wrongly expected the latter to return a data frame with columns `i` and `j` which however is wrong for a `"[diagonalMatrix](diagonalmatrix-class)"`.
### See Also
The `summary()` method for `"sparseMatrix"`, `[summary,sparseMatrix-method](sparsematrix-class)`.
`mat2triplet()` is conceptually the *inverse* function of `[spMatrix](spmatrix)` and (one case of) `[sparseMatrix](sparsematrix)`.
### Examples
```
if(FALSE) ## The function is defined (don't redefine here!), simply as
mat2triplet <- function(x, uniqT = FALSE) {
    T <- as(x, "TsparseMatrix")
    if(uniqT && anyDuplicatedT(T)) T <- .uniqTsparse(T)
    if(is(T, "nsparseMatrix"))
        list(i = T@i + 1L, j = T@j + 1L)
    else list(i = T@i + 1L, j = T@j + 1L, x = T@x)
}
i <- c(1,3:8); j <- c(2,9,6:10); x <- 7 * (1:7)
(Ax <- sparseMatrix(i, j, x = x)) ## 8 x 10 "dgCMatrix"
str(trA <- mat2triplet(Ax))
stopifnot(i == sort(trA$i), sort(j) == trA$j, x == sort(trA$x))
D <- Diagonal(x=4:2)
summary(D)
str(mat2triplet(D))
```
r None
`symmetricMatrix-class` Virtual Class of Symmetric Matrices in Package Matrix
------------------------------------------------------------------------------
### Description
The virtual class of symmetric matrices, `"symmetricMatrix"`, from the package Matrix contains numeric and logical, dense and sparse matrices, e.g., see the examples with the “actual” subclasses.
The main use is in methods (and C functions) that can deal with all symmetric matrices, and in `as(*, "symmetricMatrix")`.
### Slots
`uplo`:
Object of class `"character"`. Must be either `"U"`, for upper triangular, or `"L"`, for lower triangular.
`Dim, Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), inherited from the `[Matrix](matrix-class)`, see there. See below, about storing only one of the two `Dimnames` components.
`factors`:
a list of matrix factorizations, also from the `Matrix` class.
### Extends
Class `"Matrix"`, directly.
### Methods
coerce
`signature(from = "ddiMatrix", to =
"symmetricMatrix")`: and many other coercion methods, some of which are particularly optimized.
dimnames
`signature(object = "symmetricMatrix")`: returns *symmetric* `[dimnames](../../base/html/dimnames)`, even when the `Dimnames` slot only has row or column names. This allows saving storage for large (typically sparse) symmetric matrices.
isSymmetric
`signature(object = "symmetricMatrix")`: returns `TRUE` trivially.
There's a C function `symmetricMatrix_validate()` called by the internal validity checking functions, and also from `[getValidity](../../methods/html/validobject)(getClass("symmetricMatrix"))`.
### Validity and `[dimnames](../../base/html/dimnames)`
The validity checks do not require a symmetric `Dimnames` slot, so it can be `list(NULL, <character>)`, e.g., for efficiency. However, `[dimnames](../../base/html/dimnames)()` and other functions and methods should behave as if the dimnames were symmetric, i.e., with both list components identical.
### See Also
`[isSymmetric](../../base/html/issymmetric)` which has efficient methods ([isSymmetric-methods](issymmetric-methods)) for the Matrix classes. Classes `[triangularMatrix](triangularmatrix-class)`, and, e.g., `[dsyMatrix](dsymatrix-class)` for numeric *dense* matrices, or `[lsCMatrix](lsparsematrix-classes)` for a logical *sparse* matrix class.
### Examples
```
## An example about the symmetric Dimnames:
sy <- sparseMatrix(i= c(2,4,3:5), j= c(4,7:5,5), x = 1:5, dims = c(7,7),
symmetric=TRUE, dimnames = list(NULL, letters[1:7]))
sy # shows symmetrical dimnames
sy@Dimnames # internally only one part is stored
dimnames(sy) # both parts - as sy *is* symmetrical
showClass("symmetricMatrix")
## The names of direct subclasses:
scl <- getClass("symmetricMatrix")@subclasses
directly <- sapply(lapply(scl, slot, "by"), length) == 0
names(scl)[directly]
## Methods -- applicable to all subclasses above:
showMethods(classes = "symmetricMatrix")
```
r None
`diagonalMatrix-class` Class "diagonalMatrix" of Diagonal Matrices
-------------------------------------------------------------------
### Description
Class "diagonalMatrix" is the virtual class of all diagonal matrices.
### Objects from the Class
A virtual Class: No objects may be created from it.
### Slots
`diag`:
a `"character"` string, either `"U"` or `"N"`, where `"U"` means ‘unit-diagonal’.
`Dim`:
matrix dimension, and
`Dimnames`:
the `[dimnames](../../base/html/dimnames)`, a `[list](../../base/html/list)`, see the `[Matrix](matrix-class)` class description. Typically `list(NULL,NULL)` for diagonal matrices.
### Extends
Class `"[sparseMatrix](sparsematrix-class)"`, directly.
### Methods
These are just a subset of the signatures for which methods are defined. Currently, there are (too) many explicit methods defined in order to ensure efficient methods for diagonal matrices.
coerce
`signature(from = "matrix", to = "diagonalMatrix")`: ...
coerce
`signature(from = "Matrix", to = "diagonalMatrix")`: ...
coerce
`signature(from = "diagonalMatrix", to = "generalMatrix")`: ...
coerce
`signature(from = "diagonalMatrix", to = "triangularMatrix")`: ...
coerce
`signature(from = "diagonalMatrix", to = "nMatrix")`: ...
coerce
`signature(from = "diagonalMatrix", to = "matrix")`: ...
coerce
`signature(from = "diagonalMatrix", to = "sparseVector")`: ...
t
`signature(x = "diagonalMatrix")`: ...
and many more methods
solve
`signature(a = "diagonalMatrix", b, ...)`: is trivially implemented, of course; see also `<solve-methods>`.
which
`signature(x = "nMatrix")`, semantically equivalent to base function `[which](../../base/html/which)(x, arr.ind)`.
"Math"
`signature(x = "diagonalMatrix")`: all these group methods return a `"diagonalMatrix"`, apart from `[cumsum](../../base/html/cumsum)()` etc which return a *vector* also for base `[matrix](../../base/html/matrix)`.
\*
`signature(e1 = "ddiMatrix", e2="denseMatrix")`: arithmetic and other operators from the `[Ops](../../methods/html/s4groupgeneric)` group have a few dozen explicit method definitions, in order to keep the results *diagonal* in many cases, including the following:
/
`signature(e1 = "ddiMatrix", e2="denseMatrix")`: the result is from class `[ddiMatrix](ddimatrix-class)` which is typically very desirable. Note that when `e2` contains off-diagonal zeros or `[NA](../../base/html/na)`s, we implicitly use *0 / x = 0*, hence differing from traditional **R** arithmetic (where *0/0 |-> NaN*), in order to preserve sparsity.
summary
`(object = "diagonalMatrix")`: Returns an object of S3 class `"diagSummary"` which is the summary of the vector `object@x` plus a simple heading, and an appropriate `[print](../../base/html/print)` method.
### See Also
`[Diagonal](diagonal)()` as constructor of these matrices, and `[isDiagonal](istriangular)`. `[ddiMatrix](ddimatrix-class)` and `[ldiMatrix](ldimatrix-class)` are “actual” classes extending `"diagonalMatrix"`.
### Examples
```
I5 <- Diagonal(5)
D5 <- Diagonal(x = 10*(1:5))
## trivial (but explicitly defined) methods:
stopifnot(identical(crossprod(I5), I5),
identical(tcrossprod(I5), I5),
identical(crossprod(I5, D5), D5),
identical(tcrossprod(D5, I5), D5),
identical(solve(D5), solve(D5, I5)),
all.equal(D5, solve(solve(D5)), tolerance = 1e-12)
)
solve(D5)# efficient as is diagonal
# an unusual way to construct a band matrix:
rbind2(cbind2(I5, D5),
cbind2(D5, I5))
```
r None
`SparseM-conv` Sparse Matrix Coercion from and to those from package SparseM
-----------------------------------------------------------------------------
### Description
Methods for coercion from and to sparse matrices from package SparseM are provided here, for ease of porting functionality to the Matrix package, and comparing functionality of the two packages. All these work via the usual `[as](../../methods/html/as)(., "<class>")` coercion:
```
as(from, Class)
```
### Methods
from = "matrix.csr", to = "dgRMatrix"
...
from = "matrix.csc", to = "dgCMatrix"
...
from = "matrix.coo", to = "dgTMatrix"
...
from = "dgRMatrix", to = "matrix.csr"
...
from = "dgCMatrix", to = "matrix.csc"
...
from = "dgTMatrix", to = "matrix.coo"
...
from = "sparseMatrix", to = "matrix.csr"
...
from = "matrix.csr", to = "dgCMatrix"
...
from = "matrix.coo", to = "dgCMatrix"
...
from = "matrix.csr", to = "Matrix"
...
from = "matrix.csc", to = "Matrix"
...
from = "matrix.coo", to = "Matrix"
...
### See Also
The documentation in CRAN package [SparseM](https://CRAN.R-project.org/package=SparseM), such as `[SparseM.ontology](../../sparsem/html/sparsem.ontology)`, and one important class, `[matrix.csr](../../sparsem/html/matrix.csr-class)`.
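### Examples

A minimal round-trip sketch, assuming package SparseM is installed (the coercions used are among those listed under ‘Methods’ above):

```
library(Matrix)
library(SparseM)  # assumed to be installed

M  <- Matrix(c(0, 0, 2:0), 3, 5, sparse = TRUE)  # a "dgCMatrix"
sm <- as(M, "matrix.csc")   # to SparseM's column-compressed class
M2 <- as(sm, "dgCMatrix")   # and back
stopifnot(all(M == M2))
```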
r None
`boolean-matprod` Boolean Arithmetic Matrix Products: %&% and Methods
----------------------------------------------------------------------
### Description
For boolean or “patter**n**” matrices, i.e., **R** objects of class `[nMatrix](nmatrix-class)`, it is natural to allow matrix products using boolean instead of numerical arithmetic.
In package Matrix, we use the binary operator `%&%` (an “infix” function) for this and provide methods for all our matrices and the traditional **R** matrices (see `[matrix](../../base/html/matrix)`).
### Value
a pattern matrix, i.e., inheriting from `"[nMatrix](nmatrix-class)"`, or an `"[ldiMatrix](ldimatrix-class)"` in case of a diagonal matrix.
### Methods
We provide methods for both the “traditional” (**R** base) matrices and numeric vectors and conceptually all matrices and `[sparseVector](sparsevector-class)`s in package Matrix.
`signature(x = "ANY", y = "ANY")`
`signature(x = "ANY", y = "Matrix")`
`signature(x = "Matrix", y = "ANY")`
`signature(x = "mMatrix", y = "mMatrix")`
`signature(x = "nMatrix", y = "nMatrix")`
`signature(x = "nMatrix", y = "nsparseMatrix")`
`signature(x = "nsparseMatrix", y = "nMatrix")`
`signature(x = "nsparseMatrix", y = "nsparseMatrix")`
`signature(x = "sparseVector", y = "mMatrix")`
`signature(x = "mMatrix", y = "sparseVector")`
`signature(x = "sparseVector", y = "sparseVector")`
### Note
The current implementation ends up coercing both `x` and `y` to (virtual) class `[nsparseMatrix](nsparsematrix-classes)` which may be quite inefficient. A future implementation may well return a matrix with **different** class, but the “same” content, i.e., the same matrix entries *m[i,j]*.
### Examples
```
set.seed(7)
L <- Matrix(rnorm(20) > 1, 4,5)
(N <- as(L, "nMatrix"))
D <- Matrix(round(rnorm(30)), 5,6) # -> values in -1:1 (for this seed)
L %&% D
stopifnot(identical(L %&% D, N %&% D),
all(L %&% D == as((L %*% abs(D)) > 0, "sparseMatrix")))
## cross products , possibly with boolArith = TRUE :
crossprod(N) # -> sparse patter'n' (TRUE/FALSE : boolean arithmetic)
crossprod(N +0) # -> numeric Matrix (with same "pattern")
stopifnot(all(crossprod(N) == t(N) %&% N),
identical(crossprod(N), crossprod(N +0, boolArith=TRUE)),
identical(crossprod(L), crossprod(N , boolArith=FALSE)))
crossprod(D, boolArith = TRUE) # pattern: "nsCMatrix"
crossprod(L, boolArith = TRUE) # ditto
crossprod(L, boolArith = FALSE) # numeric: "dsCMatrix"
```
r None
`RsparseMatrix-class` Class "RsparseMatrix" of Sparse Matrices in Row-compressed Form
--------------------------------------------------------------------------------------
### Description
The `"RsparseMatrix"` class is the virtual class of all sparse matrices coded in sorted compressed row-oriented form. Since it is a virtual class, no objects may be created from it. See `showClass("RsparseMatrix")` for its subclasses.
### Slots
`j`:
Object of class `"integer"` of length `nnzero` (number of non-zero elements). These are the column numbers for each non-zero element in the matrix.
`p`:
Object of class `"integer"` of pointers, one for each row, to the initial (zero-based) index of elements in the row.
`Dim`, `Dimnames`:
inherited from the superclass, see `[sparseMatrix](sparsematrix-class)`.
### Extends
Class `"sparseMatrix"`, directly. Class `"Matrix"`, by class `"sparseMatrix"`.
### Methods
Originally, **few** methods were defined on purpose, as we rather use the `[CsparseMatrix](csparsematrix-class)` in Matrix. Then, more methods were added but *beware* that these typically do *not* return `"RsparseMatrix"` results, but rather Csparse\* or Tsparse\* ones; e.g., `R[i, j] <- v` for an `"RsparseMatrix"` `R` works, but after the assignment, `R` is a (triplet) `"TsparseMatrix"`.
t
`signature(x = "RsparseMatrix")`: ...
coerce
`signature(from = "RsparseMatrix", to = "CsparseMatrix")`: ...
coerce
`signature(from = "RsparseMatrix", to = "TsparseMatrix")`: ...
### See Also
its superclass, `[sparseMatrix](sparsematrix-class)`, and, e.g., class `[dgRMatrix](dgrmatrix-class)` for the links to other classes.
### Examples
```
showClass("RsparseMatrix")
```
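A short sketch of the class-changing behaviour noted under ‘Methods’ above (subassignment works, but yields a Tsparse\* result; the exact classes printed may depend on the Matrix version):

```
library(Matrix)

A <- sparseMatrix(i = c(1, 3, 2), j = c(2, 1, 3), x = 10 * (1:3))
R <- as(A, "RsparseMatrix")  # row-compressed form
class(R)      # a Rsparse* class, e.g. "dgRMatrix"
R[1, 1] <- 7  # the assignment itself works, but ...
class(R)      # ... per the text above, R is now a (triplet) Tsparse* matrix
```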
r None
`dgCMatrix-class` Compressed, sparse, column-oriented numeric matrices
-----------------------------------------------------------------------
### Description
The `dgCMatrix` class is a class of sparse numeric matrices in the compressed, sparse, column-oriented format. In this implementation the non-zero elements in the columns are sorted into increasing row order. `dgCMatrix` is the *“standard”* class for sparse numeric matrices in the Matrix package.
### Objects from the Class
Objects can be created by calls of the form `new("dgCMatrix",
...)`, more typically via `as(*, "CsparseMatrix")` or similar. Often however, more easily via `[Matrix](matrix)(*, sparse = TRUE)`, or most efficiently via `[sparseMatrix](sparsematrix)()`.
### Slots
`x`:
Object of class `"numeric"` - the non-zero elements of the matrix.
...
all other slots are inherited from the superclass `"[CsparseMatrix](csparsematrix-class)"`.
### Methods
Matrix products (e.g., [crossprod-methods](matrix-products)), and (among other)
coerce
`signature(from = "matrix", to = "dgCMatrix")`
coerce
`signature(from = "dgCMatrix", to = "matrix")`
coerce
`signature(from = "dgCMatrix", to = "dgTMatrix")`
diag
`signature(x = "dgCMatrix")`: returns the diagonal of `x`
dim
`signature(x = "dgCMatrix")`: returns the dimensions of `x`
image
`signature(x = "dgCMatrix")`: plots an image of `x` using the `[levelplot](../../lattice/html/levelplot)` function
solve
`signature(a = "dgCMatrix", b = "...")`: see `<solve-methods>`, notably the extra argument `sparse`.
lu
`signature(x = "dgCMatrix")`: computes the LU decomposition of a square `dgCMatrix` object
### See Also
Classes `[dsCMatrix](dscmatrix-class)`, `[dtCMatrix](dtcmatrix-class)`, `<lu>`
### Examples
```
(m <- Matrix(c(0,0,2:0), 3,5))
str(m)
m[,1]
```
r None
`unpack` Representation of Packed and Unpacked (Dense) Matrices
----------------------------------------------------------------
### Description
“Packed” matrix storage here applies to dense matrices (`[denseMatrix](densematrix-class)`) only, and is available only for symmetric (`[symmetricMatrix](symmetricmatrix-class)`) or triangular (`[triangularMatrix](triangularmatrix-class)`) matrices, where only one triangle of the matrix needs to be stored.
`unpack()` unpacks “packed” matrices, whereas `pack()` produces “packed” matrices.
### Usage
```
pack(x, ...)
## S4 method for signature 'matrix'
pack(x, symmetric = NA, upperTri = NA, ...)
unpack(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | for `unpack()`: a matrix stored in packed form, e.g., of class `"d?pMatrix"` where "?" is "t" for triangular or "s" for symmetric; for `pack()`: a (symmetric or triangular) matrix stored in full storage. |
| `symmetric` | logical (including `NA`) for optionally specifying if `x` is symmetric (or rather triangular). |
| `upperTri` | (for the triangular case only) logical (incl. `NA`) indicating if `x` is upper (or lower) triangular. |
| `...` | further arguments passed to or from other methods. |
### Details
These are generic functions with special methods for different types of packed (or non-packed) symmetric or triangular dense matrices. Use `[showMethods](../../methods/html/showmethods)("unpack")` to list the methods for `unpack()`, and similarly for `pack()`.
### Value
for `unpack()`:
A `[Matrix](matrix-class)` object containing the full-storage representation of `x`.
for `pack()`:
A packed `Matrix` (i.e. of class `"..pMatrix"`) representation of `x`.
### Examples
```
showMethods("unpack")
(cp4 <- chol(Hilbert(4))) # is triangular
tp4 <- as(cp4,"dtpMatrix")# [t]riangular [p]acked
str(tp4)
(unpack(tp4))
stopifnot(identical(tp4, pack(unpack(tp4))))
(s <- crossprod(matrix(sample(15), 5,3))) # traditional symmetric matrix
(sp <- pack(s))
mt <- as.matrix(tt <- tril(s))
(pt <- pack(mt))
stopifnot(identical(pt, pack(tt)),
dim(s ) == dim(sp), all(s == sp),
dim(mt) == dim(pt), all(mt == pt), all(mt == tt))
showMethods("pack")
```
r None
`dpoMatrix-class` Positive Semi-definite Dense (Packed | Non-packed) Numeric Matrices
--------------------------------------------------------------------------------------
### Description
* The `"dpoMatrix"` class is the class of positive-semidefinite symmetric matrices in nonpacked storage.
* The `"dppMatrix"` class is the same except in packed storage. Only the upper triangle or the lower triangle is required to be available.
* The `"corMatrix"` class of correlation matrices extends `"dpoMatrix"` with a slot `sd`, which allows to restore the original covariance matrix.
### Objects from the Class
Objects can be created by calls of the form `new("dpoMatrix", ...)` or from `crossprod` applied to a `"dgeMatrix"` object.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular.
`x`:
Object of class `"numeric"`. The numeric values that constitute the matrix, stored in column-major order.
`Dim`:
Object of class `"integer"`. The dimensions of the matrix which must be a two-element vector of non-negative integers.
`Dimnames`:
inherited from class `"Matrix"`
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
`sd`:
(for `"corMatrix"`) a `[numeric](../../base/html/numeric)` vector of length `n` containing the (original) *sqrt(var(.))* entries which allow reconstruction of a covariance matrix from the correlation matrix.
### Extends
Class `"dsyMatrix"`, directly.
Classes `"dgeMatrix"`, `"symmetricMatrix"`, and many more by class `"dsyMatrix"`.
### Methods
chol
`signature(x = "dpoMatrix")`: Returns (and stores) the Cholesky decomposition of `x`, see `<chol>`.
determinant
`signature(x = "dpoMatrix")`: Returns the `[determinant](../../base/html/det)` of `x`, via `chol(x)`, see above.
rcond
`signature(x = "dpoMatrix", norm = "character")`: Returns (and stores) the reciprocal of the condition number of `x`. The `norm` can be `"O"` for the one-norm (the default) or `"I"` for the infinity-norm. For symmetric matrices the result does not depend on the norm.
solve
`signature(a = "dpoMatrix", b = "....")`
, and
solve
`signature(a = "dppMatrix", b = "....")`
work via the Cholesky decomposition, see also the Matrix `<solve-methods>`.
Arith
`signature(e1 = "dpoMatrix", e2 = "numeric")` (and quite a few other signatures): The result of (“elementwise” defined) arithmetic operations is typically *not* positive-definite anymore. The only exceptions, currently, are multiplications, divisions or additions with *positive* `length(.) == 1` numbers (or `[logical](../../base/html/logical)`s).
### See Also
Classes `[dsyMatrix](dsymatrix-class)` and `[dgeMatrix](dgematrix-class)`; further, `[Matrix](matrix)`, `<rcond>`, `[chol](../../base/html/chol)`, `[solve](../../base/html/solve)`, `[crossprod](matrix-products)`.
### Examples
```
h6 <- Hilbert(6)
rcond(h6)
str(h6)
h6 * 27720 # is ``integer''
solve(h6)
str(hp6 <- as(h6, "dppMatrix"))
### Note that as(*, "corMatrix") *scales* the matrix
(ch6 <- as(h6, "corMatrix"))
stopifnot(all.equal(h6 * 27720, round(27720 * h6), tolerance = 1e-14),
all.equal(ch6@sd^(-2), 2*(1:6)-1, tolerance= 1e-12))
chch <- chol(ch6)
stopifnot(identical(chch, ch6@factors$Cholesky),
all(abs(crossprod(chch) - ch6) < 1e-10))
```
r None
`chol2inv-methods` Inverse from Choleski or QR Decomposition – Matrix Methods
------------------------------------------------------------------------------
### Description
Invert a symmetric, positive definite square matrix from its Choleski decomposition. Equivalently, compute *(X'X)^(-1)* from the (*R* part of the) QR decomposition of *X*.
Even more generally, given an upper triangular matrix *R*, compute *(R'R)^(-1)*.
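As a quick illustrative check of the *(R'R)^(-1)* relation, using a base-R QR decomposition (the data are made up):

```r
X <- matrix(c(1,1,1,1, 1,2,3,4), 4, 2)
R <- qr.R(qr(X))   # the upper triangular R part of the QR decomposition
stopifnot(all.equal(chol2inv(R), solve(crossprod(X))))
```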
### Methods
x = "ANY"
the default method from base, see `[chol2inv](../../base/html/chol2inv)`, for traditional matrices.
x = "dtrMatrix"
method for the numeric triangular matrices, built on the same LAPACK `DPOTRI` function as the base method.
x = "denseMatrix"
if `x` is coercible to a `[triangularMatrix](triangularmatrix-class)`, call the `"dtrMatrix"` method above.
x = "sparseMatrix"
if `x` is coercible to a `[triangularMatrix](triangularmatrix-class)`, use `[solve](solve-methods)()` currently.
### See Also
`<chol>` (for `[Matrix](matrix-class)` objects); further, `[chol2inv](../../base/html/chol2inv)` (from the base package), `[solve](solve-methods)`.
### Examples
```
(M <- Matrix(cbind(1, 1:3, c(1,3,7))))
(cM <- chol(M)) # a "Cholesky" object, inheriting from "dtrMatrix"
chol2inv(cM) %*% M # the identity
stopifnot(all(chol2inv(cM) %*% M - Diagonal(nrow(M))) < 1e-10)
```
r None
`is.null.DN` Are the Dimnames dn NULL-like ?
---------------------------------------------
### Description
Are the `[dimnames](../../base/html/dimnames)` `dn` `[NULL](../../base/html/null)`-like?
`is.null.DN(dn)` is less strict than `[is.null](../../base/html/null)(dn)`, because it is also true (`[TRUE](../../base/html/logical)`) when the dimnames `dn` are “like” `NULL`, or `list(NULL,NULL)`, as they can easily be for the traditional **R** matrices (`[matrix](../../base/html/matrix)`) which have no formal `[class](../../base/html/class)` definition, and hence much freedom in what their `[dimnames](../../base/html/dimnames)` look like.
### Usage
```
is.null.DN(dn)
```
### Arguments
| | |
| --- | --- |
| `dn` | `[dimnames](../../base/html/dimnames)()` of a `[matrix](../../base/html/matrix)`-like **R** object. |
### Value
`[logical](../../base/html/logical)` `[TRUE](../../base/html/logical)` or `[FALSE](../../base/html/logical)`.
### Note
This function is really to be used on “traditional” matrices rather than those inheriting from `[Matrix](matrix-class)`, as the latter always have dimnames that are exactly `list(NULL,NULL)` in such a case.
### Author(s)
Martin Maechler
### See Also
`[is.null](../../base/html/null)`, `[dimnames](../../base/html/dimnames)`, `[matrix](../../base/html/matrix)`.
### Examples
```
m <- matrix(round(100 * rnorm(6)), 2,3); m1 <- m2 <- m3 <- m4 <- m
dimnames(m1) <- list(NULL, NULL)
dimnames(m2) <- list(NULL, character())
dimnames(m3) <- rev(dimnames(m2))
dimnames(m4) <- rep(list(character()),2)
m4 ## prints absolutely identically to m
stopifnot(m == m1, m1 == m2, m2 == m3, m3 == m4,
identical(capture.output(m) -> cm,
capture.output(m1)),
identical(cm, capture.output(m2)),
identical(cm, capture.output(m3)),
identical(cm, capture.output(m4)))
```
r None
`indMatrix-class` Index Matrices
---------------------------------
### Description
The `"indMatrix"` class is the class of index matrices, stored as 1-based integer index vectors. An index matrix is a matrix with exactly one non-zero entry per row. Index matrices are useful for mapping observations to unique covariate values, for example.
Matrix (vector) multiplication with index matrices is equivalent to replicating and permuting rows, or “sampling rows with replacement”, and is implemented that way in the Matrix package, see the ‘Details’ below.
### Details
Matrix (vector) multiplication with index matrices from the left is equivalent to replicating and permuting rows of the matrix on the right hand side. (Similarly, matrix multiplication with the transpose of an index matrix from the right corresponds to selecting *columns*.) The crossproduct of an index matrix *M* with itself is a diagonal matrix with the number of entries in each column of *M* on the diagonal, i.e., *M'M=*`Diagonal(x=table(M@perm))`.
Permutation matrices (of class `[pMatrix](pmatrix-class)`) are special cases of index matrices: They are square, of dimension, say, *n \* n*, and their index vectors contain exactly all of `1:n`.
While “row-indexing” (of more than one row *or* using `drop=FALSE`) stays within the `"indMatrix"` class, all other subsetting/indexing operations (“column-indexing”, including, `[diag](../../base/html/diag)`) on `"indMatrix"` objects treats them as nonzero-pattern matrices (`"[ngTMatrix](nsparsematrix-classes)"` specifically), such that non-matrix subsetting results in `[logical](../../base/html/logical)` vectors. Sub-assignment (`M[i,j] <- v`) is not sensible and hence an error for these matrices.
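The crossproduct identity from the ‘Details’ can be checked directly (the index vector here is chosen arbitrarily):

```r
library(Matrix)
M <- as(c(2,3,1,1,3), "indMatrix")  # 5 observations, 3 values
stopifnot(all(crossprod(M) ==
              Diagonal(x = as.numeric(table(M@perm)))))
```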
### Objects from the Class
Objects can be created by calls of the form `new("indMatrix", ...)` or by coercion from an integer index vector, see below.
### Slots
`perm`:
An integer, 1-based index vector, i.e. an integer vector of length `Dim[1]` whose elements are taken from `1:Dim[2]`.
`Dim`:
`[integer](../../base/html/integer)` vector of length two. In some applications, the matrix will be skinny, i.e., with at least as many rows as columns.
`Dimnames`:
a `[list](../../base/html/list)` of length two where each component is either `[NULL](../../base/html/null)` or a `[character](../../base/html/character)` vector of length equal to the corresponding `Dim` element.
### Extends
Class `"[sparseMatrix](sparsematrix-class)"` and `"[generalMatrix](generalmatrix-class)"`, directly.
### Methods
%\*%
`signature(x = "matrix", y = "indMatrix")` and other signatures (use `showMethods("%*%", class="indMatrix")`): ...
coerce
`signature(from = "integer", to = "indMatrix")`: This enables typical `"indMatrix"` construction, given an index vector from elements in `1:Dim[2]`, see the first example.
coerce
`signature(from = "numeric", to = "indMatrix")`: a user convenience, to allow `as(perm, "indMatrix")` for numeric `perm` with integer values.
coerce
`signature(from = "list", to = "indMatrix")`: The list must have two (integer-valued) entries: the first giving the index vector with elements in `1:Dim[2]`, the second giving `Dim[2]`. This allows `"indMatrix"` construction for cases in which the values represented by the rightmost column(s) are not associated with any observations, i.e., in which the index does not contain values `Dim[2], Dim[2]-1, Dim[2]-2, ...`
coerce
`signature(from = "indMatrix", to = "matrix")`: coercion to a traditional FALSE/TRUE `[matrix](../../base/html/matrix)` of `[mode](../../base/html/mode)` `logical`.
coerce
`signature(from = "indMatrix", to = "ngTMatrix")`: coercion to sparse logical matrix of class `[ngTMatrix](nsparsematrix-classes)`.
t
`signature(x = "indMatrix")`: return the transpose of the index matrix (which is no longer an `indMatrix`, but of class `[ngTMatrix](nsparsematrix-classes)`.
colSums, colMeans, rowSums, rowMeans
`signature(x = "indMatrix")`: return the column or row sums or means.
rbind2
`signature(x = "indMatrix", y = "indMatrix")`: a fast method for rowwise catenation of two index matrices (with the same number of columns).
kronecker
`signature(X = "indMatrix", Y = "indMatrix")`: return the kronecker product of two index matrices, which corresponds to the index matrix of the interaction of the two.
### Author(s)
Fabian Scheipl, Uni Muenchen, building on existing `"[pMatrix](pmatrix-class)"`, after a nice hike's conversation with Martin Maechler; diverse tweaks by the latter. The `[crossprod](matrix-products)(x,y)` and `[kronecker](../../base/html/kronecker)(x,y)` methods when both arguments are `"indMatrix"` have been made considerably faster thanks to a suggestion by Boris Vaillant.
### See Also
The permutation matrices `[pMatrix](pmatrix-class)` are special index matrices. The “pattern” matrices, `[nMatrix](nmatrix-class)` and its subclasses.
### Examples
```
p1 <- as(c(2,3,1), "pMatrix")
(sm1 <- as(rep(c(2,3,1), e=3), "indMatrix"))
stopifnot(all(sm1 == p1[rep(1:3, each=3),]))
## row-indexing of a <pMatrix> turns it into an <indMatrix>:
class(p1[rep(1:3, each=3),])
set.seed(12) # so we know '10' is in sample
## random index matrix for 30 observations and 10 unique values:
(s10 <- as(sample(10, 30, replace=TRUE),"indMatrix"))
## Sample rows of a numeric matrix :
(mm <- matrix(1:10, nrow=10, ncol=3))
s10 %*% mm
set.seed(27)
IM1 <- as(sample(1:20, 100, replace=TRUE), "indMatrix")
IM2 <- as(sample(1:18, 100, replace=TRUE), "indMatrix")
(c12 <- crossprod(IM1,IM2))
## same as cross-tabulation of the two index vectors:
stopifnot(all(c12 - unclass(table(IM1@perm, IM2@perm)) == 0))
# 3 observations, 4 implied values, first does not occur in sample:
as(2:4, "indMatrix")
# 3 observations, 5 values, first and last do not occur in sample:
as(list(2:4, 5), "indMatrix")
as(sm1, "ngTMatrix")
s10[1:7, 1:4] # gives an "ngTMatrix" (most economic!)
s10[1:4, ] # preserves "indMatrix"-class
I1 <- as(c(5:1,6:4,7:3), "indMatrix")
I2 <- as(7:1, "pMatrix")
(I12 <- rbind(I1, I2))
stopifnot(is(I12, "indMatrix"),
identical(I12, rbind(I1, I2)),
colSums(I12) == c(2L,2:4,4:2))
```
r None
`formatSparseM` Formatting Sparse Numeric Matrices Utilities
-------------------------------------------------------------
### Description
Utilities for formatting sparse numeric matrices in a flexible way. These functions are used by the `[format](../../base/html/format)` and `print` methods for sparse matrices and can be applied as well to standard **R** matrices. Note that *all* arguments but the first are optional.
`formatSparseM()` is the main “workhorse” of `[formatSpMatrix](printspmatrix)`, the `format` method for sparse matrices.
`.formatSparseSimple()` is a simple helper function, also dealing with (short/empty) column names construction.
### Usage
```
formatSparseM(x, zero.print = ".", align = c("fancy", "right"),
m = as(x,"matrix"), asLogical=NULL, uniDiag=NULL,
digits=NULL, cx, iN0, dn = dimnames(m))
.formatSparseSimple(m, asLogical=FALSE, digits=NULL,
col.names, note.dropping.colnames = TRUE,
dn=dimnames(m))
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object inheriting from class `[sparseMatrix](sparsematrix-class)`. |
| `zero.print` | character which should be used for *structural* zeroes. The default `"."` may occasionally be replaced by `" "` (blank); using `"0"` would look almost like `print()`ing of non-sparse matrices. |
| `align` | a string specifying how the `zero.print` codes should be aligned, see `[formatSpMatrix](printspmatrix)`. |
| `m` | (optional) a (standard **R**) `[matrix](../../base/html/matrix)` version of `x`. |
| `asLogical` | should the matrix be formatted as a logical matrix (or rather as a numeric one); mostly for `formatSparseM()`. |
| `uniDiag` | logical indicating if the diagonal entries of a sparse unit triangular or unit-diagonal matrix should be formatted as `"I"` instead of `"1"` (to emphasize that the 1's are “structural”). |
| `digits` | significant digits to use for printing, see `[print.default](../../base/html/print.default)`. |
| `cx` | (optional) character matrix; a formatted version of `x`, still with strings such as `"0.00"` for the zeros. |
| `iN0` | (optional) integer vector, specifying the location of the *non*-zeroes of `x`. |
| `col.names, note.dropping.colnames` | see `[formatSpMatrix](printspmatrix)`. |
| `dn` | `[dimnames](../../base/html/dimnames)` to be used; a list (of length two) with row and column names (or `[NULL](../../base/html/null)`). |
### Value
a character matrix like `cx`, where the zeros have been replaced with (padded versions of) `zero.print`. As this is a *dense* matrix, do not use these functions for really large, really sparse matrices!
### Author(s)
Martin Maechler
### See Also
`[formatSpMatrix](printspmatrix)` which calls `formatSparseM()` and is the `[format](../../base/html/format)` method for sparse matrices.
`[printSpMatrix](printspmatrix)` which is used by the (typically implicitly called) `[show](../../methods/html/show)` and `[print](../../base/html/print)` methods for sparse matrices.
### Examples
```
m <- suppressWarnings(matrix(c(0, 3.2, 0,0, 11,0,0,0,0,-7,0), 4,9))
fm <- formatSparseM(m)
noquote(fm)
## nice, but this is nicer {with "units" vertically aligned}:
print(fm, quote=FALSE, right=TRUE)
## and "the same" as :
Matrix(m)
## align = "right" is cheaper --> the "." are not aligned:
noquote(f2 <- formatSparseM(m,align="r"))
stopifnot(f2 == fm | m == 0, dim(f2) == dim(m),
(f2 == ".") == (m == 0))
```
r None
`rankMatrix` Rank of a Matrix
------------------------------
### Description
Compute ‘the’ matrix rank, a well-defined functional in theory (\*), somewhat ambiguous in practice. We provide several methods, the default corresponding to Matlab's definition.
(\*) The rank of a *n x m* matrix *A*, *rk(A)*, is the maximal number of linearly independent columns (or rows); hence *rk(A) <= min(n,m)*.
### Usage
```
rankMatrix(x, tol = NULL,
method = c("tolNorm2", "qr.R", "qrLINPACK", "qr",
"useGrad", "maybeGrad"),
sval = svd(x, 0, 0)$d, warn.t = TRUE, warn.qr = TRUE)
qr2rankMatrix(qr, tol = NULL, isBqr = is.qr(qr), do.warn = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric matrix, of dimension *n x m*, say. |
| `tol` | nonnegative number specifying a (relative, “scalefree”) tolerance for testing of “practically zero” with specific meaning depending on `method`; by default, `max(dim(x)) * [.Machine](../../base/html/zmachine)$double.eps` is according to Matlab's default (for its only method which is our `method="tolNorm2"`). |
| `method` | a character string specifying the computational method for the rank, can be abbreviated:
`"tolNorm2"`:
the number of singular values `>= tol * max(sval)`;
`"qrLINPACK"`:
for a dense matrix, this is the rank of `[qr](../../base/html/qr)(x, tol, LAPACK=FALSE)` (which is `qr(...)$rank`); This ("qr\*", dense) version used to be *the* recommended way to compute a matrix rank for a while in the past. For sparse `x`, this is equivalent to `"qr.R"`.
`"qr.R"`:
this is the rank of triangular matrix *R*, where `qr()` uses LAPACK or a "sparseQR" method (see `<qr-methods>`) to compute the decomposition *QR*. The rank of *R* is then defined as the number of “non-zero” diagonal entries *d\_i* of *R*, and “non-zero”s fulfill *|d\_i| >= tol \* max(|d\_i|)*.
`"qr"`:
is for back compatibility; for dense `x`, it corresponds to `"qrLINPACK"`, whereas for sparse `x`, it uses `"qr.R"`. For all the "qr\*" methods, singular values `sval` are not used, which may be crucially important for a large sparse matrix `x`, as in that case, when `sval` is not specified, the default, computing `[svd](../../base/html/svd)()` currently coerces `x` to a dense matrix.
`"useGrad"`:
considering the “gradient” of the (decreasing) singular values, the index of the *smallest* gap.
`"maybeGrad"`:
choosing method `"useGrad"` only when that seems *reasonable*; otherwise using `"tolNorm2"`. |
| `sval` | numeric vector of non-increasing singular values of `x`; typically unspecified and computed from `x` when needed, i.e., unless `method = "qr"`. |
| `warn.t` | logical indicating if `rankMatrix()` should warn when it needs `[t](../../base/html/t)(x)` instead of `x`. Currently, for `method = "qr"` only, gives a warning by default because the caller often could have passed `t(x)` directly, more efficiently. |
| `warn.qr` | in the *QR* cases (i.e., if `method` starts with `"qr"`), `rankMatrix()` calls `qr2rankMatrix(.., do.warn = warn.qr)`, see below. |
| | |
| --- | --- |
| `qr` | an **R** object resulting from `[qr](qr-methods)(x,..)`, i.e., typically inheriting from `[class](../../base/html/class)` `"[qr](qr-methods)"` or `"[sparseQR](sparseqr-class)"`. |
| `isBqr` | `[logical](../../base/html/logical)` indicating if `qr` is resulting from base `[qr](../../base/html/qr)()`. (Otherwise, it is typically from Matrix package sparse `[qr](qr-methods)`.) |
| `do.warn` | logical; if true, warn about non-finite (or in the `sparseQR` case negative) diagonal entries in the *R* matrix of the *QR* decomposition. Do not change lightly! |
### Details
`qr2rankMatrix()` is typically called from `rankMatrix()` for the `"qr"`\* `method`s, but can be used directly, and much more efficiently, when the `qr` decomposition is already available.
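A minimal sketch of such direct use (the matrix is an arbitrary rank-2 example):

```r
library(Matrix)
x <- cbind(1, 1:4, 2*(1:4))  # third column is twice the second => rank 2
qx <- qr(x)                  # base-R QR decomposition
stopifnot(qr2rankMatrix(qx) == 2,
          qr2rankMatrix(qx) == rankMatrix(x, method = "qr"))
```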
### Value
If `x` is a matrix of all `0` (or of zero dimension), the rank is zero; otherwise, typically a positive integer in `1:min(dim(x))` with attributes detailing the method used.
There are rare cases where the sparse *QR* decomposition “fails” in so far as the diagonal entries of *R*, the *d\_i* (see above), end with non-finite, typically `[NaN](../../base/html/is.finite)` entries. Then, a warning is signalled (unless `warn.qr` / `do.warn` is not true) and `NA` (specifically, `[NA\_integer\_](../../base/html/na)`) is returned.
### Note
For large sparse matrices `x`, unless you can specify `sval` yourself, currently `method = "qr"` may be the only feasible one, as the others need `sval` and call `[svd](../../base/html/svd)()` which currently coerces `x` to a `[denseMatrix](densematrix-class)` which may be very slow or impossible, depending on the matrix dimensions.
Note that in the case of sparse `x`, `method = "qr"`, all non-strictly zero diagonal entries *d\_i* where counted, up to including Matrix version 1.1-0, i.e., that method implicitly used `tol = 0`, see also the `set.seed(42)` example below.
### Author(s)
Martin Maechler; for the "\*Grad" methods building on suggestions by Ravi Varadhan.
### See Also
`[qr](qr-methods)`, `[svd](../../base/html/svd)`.
### Examples
```
rankMatrix(cbind(1, 0, 1:3)) # 2
(meths <- eval(formals(rankMatrix)$method))
## a "border" case:
H12 <- Hilbert(12)
rankMatrix(H12, tol = 1e-20) # 12; but 11 with default method & tol.
sapply(meths, function(.m.) rankMatrix(H12, method = .m.))
## tolNorm2 qr.R qrLINPACK qr useGrad maybeGrad
## 11 11 12 12 11 11
## The meaning of 'tol' for method="qrLINPACK" and *dense* x is not entirely "scale free"
rMQL <- function(ex, M) rankMatrix(M, method="qrLINPACK",tol = 10^-ex)
rMQR <- function(ex, M) rankMatrix(M, method="qr.R", tol = 10^-ex)
sapply(5:15, rMQL, M = H12) # result is platform dependent
## 7 7 8 10 10 11 11 11 12 12 12 {x86_64}
sapply(5:15, rMQL, M = 1000 * H12) # not identical unfortunately
## 7 7 8 10 11 11 12 12 12 12 12
sapply(5:15, rMQR, M = H12)
## 5 6 7 8 8 9 9 10 10 11 11
sapply(5:15, rMQR, M = 1000 * H12) # the *same*
## "sparse" case:
M15 <- kronecker(diag(x=c(100,1,10)), Hilbert(5))
sapply(meths, function(.m.) rankMatrix(M15, method = .m.))
#--> all 15, but 'useGrad' has 14.
sapply(meths, function(.m.) rankMatrix(M15, method = .m., tol = 1e-7)) # all 14
## "large" sparse
n <- 250000; p <- 33; nnz <- 10000
L <- sparseMatrix(i = sample.int(n, nnz, replace=TRUE),
j = sample.int(p, nnz, replace=TRUE), x = rnorm(nnz))
(st1 <- system.time(r1 <- rankMatrix(L))) # warning+ ~1.5 sec (2013)
(st2 <- system.time(r2 <- rankMatrix(L, method = "qr"))) # considerably faster!
r1[[1]] == print(r2[[1]]) ## --> ( 33 TRUE )
## another sparse-"qr" one, which ``failed'' till 2013-11-23:
set.seed(42)
f1 <- factor(sample(50, 1000, replace=TRUE))
f2 <- factor(sample(50, 1000, replace=TRUE))
f3 <- factor(sample(50, 1000, replace=TRUE))
D <- t(do.call(rbind, lapply(list(f1,f2,f3), as, 'sparseMatrix')))
dim(D); nnzero(D) ## 1000 x 150 // 3000 non-zeros (= 2%)
stopifnot(rankMatrix(D, method='qr') == 148,
rankMatrix(crossprod(D),method='qr') == 148)
## zero matrix has rank 0 :
stopifnot(sapply(meths, function(.m.)
rankMatrix(matrix(0, 2, 2), method = .m.)) == 0)
```
r None
`solve-methods` Methods in Package Matrix for Function solve()
---------------------------------------------------------------
### Description
Methods for function `[solve](solve-methods)` to solve a linear system of equations, or equivalently, solve for *X* in
*A X = B*
where *A* is a square matrix, and *X*, *B* are matrices or vectors (which are treated as 1-column matrices), and the **R** syntax is
```
X <- solve(A,B)
```
In `solve(a,b)` in the Matrix package, `a` may also be a `[MatrixFactorization](matrixfactorization-class)` instead of directly a matrix.
### Usage
```
## S4 method for signature 'CHMfactor,ddenseMatrix'
solve(a, b,
system = c("A", "LDLt", "LD", "DLt", "L", "Lt", "D", "P", "Pt"), ...)
## S4 method for signature 'dgCMatrix,matrix'
solve(a, b, sparse = FALSE, tol = .Machine$double.eps, ...)
solve(a, b, ...) ## *the* two-argument version, almost always preferred to
# solve(a) ## the *rarely* needed one-argument version
```
### Arguments
| | |
| --- | --- |
| `a` | a square numeric matrix, *A*, typically of one of the classes in Matrix. Logical matrices are coerced to corresponding numeric ones. |
| `b` | numeric vector or matrix (dense or sparse) as RHS of the linear system *Ax = b*. |
| `system` | only if `a` is a `[CHMfactor](chmfactor-class)`: character string indicating the kind of linear system to be solved, see below. Note that the default, `"A"`, does *not* solve the triangular system (but `"L"` does). |
| `sparse` | only when `a` is a `[sparseMatrix](sparsematrix-class)`, i.e., typically a `[dgCMatrix](dgcmatrix-class)`: logical specifying if the result should be a (formally) sparse matrix. |
| | |
| --- | --- |
| `tol` | only used when `a` is sparse, in the `[isSymmetric](../../base/html/issymmetric)(a, tol=*)` test, where that applies. |
| `...` | potentially further arguments to the methods. |
### Methods
`signature(a = "ANY", b = "ANY")`
is simply the base package's S3 generic `[solve](solve-methods)`.
`signature(a = "CHMfactor", b = "...."), system= *`
The `solve` methods for a `"[CHMfactor](chmfactor-class)"` object take an optional third argument `system` whose value can be one of the character strings `"A"`, `"LDLt"`, `"LD"`, `"DLt"`, `"L"`, `"Lt"`, `"D"`, `"P"` or `"Pt"`. This argument describes the system to be solved. The default, `"A"`, is to solve *Ax = b* for *x*, where `A` is the sparse, positive-definite matrix that was factored to produce `a`. Analogously, `system = "L"` returns the solution *x* of *Lx = b*; similarly for all system codes **but** `"P"` and `"Pt"`, where, e.g., `x <- solve(a, b, system = "P")` is equivalent to `x <- P %*% b`.
If `b` is a `[sparseMatrix](sparsematrix-class)`, `system` is used as above and the corresponding sparse CHOLMOD algorithm is called.
`signature(a = "ddenseMatrix", b = "....")`
(for all `b`) work via `as(a, "dgeMatrix")`, using its methods, see below.
`signature(a = "denseLU", b = "missing")`
basically uses triangular forward- and back-solves.
`signature(a = "dgCMatrix", b = "matrix")`
, and
`signature(a = "dgCMatrix", b = "ddenseMatrix")`
with extra argument list `( sparse = FALSE, tol = .Machine$double.eps )` : Uses the sparse `<lu>(a)` decomposition (which is cached in `a`'s `factor` slot). By default, `sparse=FALSE`, returns a `[denseMatrix](densematrix-class)`, since *U^{-1} L^{-1} B* may not be sparse at all, even when *L* and *U* are.
If `sparse=TRUE`, returns a `[sparseMatrix](sparsematrix-class)` (which may not be very sparse at all, even if `a` *was* sparse).
`signature(a = "dgCMatrix", b = "dsparseMatrix")`
, and
`signature(a = "dgCMatrix", b = "missing")`
with extra argument list `( sparse=FALSE, tol = .Machine$double.eps )` : Checks if `a` is symmetric, and in that case, coerces it to `"[symmetricMatrix](symmetricmatrix-class)"`, and then computes a *sparse* solution via sparse Cholesky factorization, independently of the `sparse` argument. If `a` is not symmetric, the sparse `<lu>` decomposition is used and the result will be sparse or dense, depending on the `sparse` argument, exactly as for the above (`b = "ddenseMatrix"`) case.
`signature(a = "dgeMatrix", b = ".....")`
solve the system via internal LU, calling LAPACK routines `dgetri` or `dgetrs`.
`signature(a = "diagonalMatrix", b = "matrix")`
and other `b`s: Of course this is trivially implemented, as *D^{-1}* is diagonal with entries *1 / D[i,i]*.
`signature(a = "dpoMatrix", b = "....Matrix")`
, and
`signature(a = "dppMatrix", b = "....Matrix")`
The Cholesky decomposition of `a` is calculated (if needed) while solving the system.
`signature(a = "dsCMatrix", b = "....")`
All these methods first try Cholmod's Cholesky factorization; if that works, i.e., typically if `a` is positive semi-definite, it is made use of. Otherwise, the sparse LU decomposition is used as for the “general” matrices of class `"dgCMatrix"`.
`signature(a = "dspMatrix", b = "....")`
, and
`signature(a = "dsyMatrix", b = "....")`
all end up calling LAPACK routines `dsptri`, `dsptrs`, `dsytrs` and `dsytri`.
`signature(a = "dtCMatrix", b = "CsparseMatrix")`
,
`signature(a = "dtCMatrix", b = "dgeMatrix")`
, etc sparse triangular solve, in traditional S/**R** also known as `[backsolve](../../base/html/backsolve)`, or `[forwardsolve](../../base/html/backsolve)`. `solve(a,b)` is a `[sparseMatrix](sparsematrix-class)` if `b` is, and hence a `[denseMatrix](densematrix-class)` otherwise.
`signature(a = "dtrMatrix", b = "ddenseMatrix")`
, and
`signature(a = "dtpMatrix", b = "matrix")`
, and similar `b`, including `"missing"`, and `"diagonalMatrix"`:
all use LAPACK based versions of efficient triangular `[backsolve](../../base/html/backsolve)`, or `[forwardsolve](../../base/html/backsolve)`.
`signature(a = "Matrix", b = "diagonalMatrix")`
works via `as(b, "CsparseMatrix")`.
`signature(a = "sparseQR", b = "ANY")`
simply uses `[qr.coef](../../base/html/qr)(a, b)`.
`signature(a = "pMatrix", b = ".....")`
these methods typically use `[crossprod](matrix-products)(a,b)`, as the inverse of a permutation matrix is the same as its transpose.
`signature(a = "TsparseMatrix", b = "ANY")`
all work via `as(a, "CsparseMatrix")`.
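As an illustrative sketch of the `"CHMfactor"` case with the default `system = "A"` (the matrix is constructed ad hoc):

```r
library(Matrix)
A <- crossprod(Matrix(c(2,0,1, 1,3,0, 0,0,4), 3, 3, sparse = TRUE))
cf <- Cholesky(A)   # a "CHMfactor"
b <- c(1, 2, 3)
x <- solve(cf, b)   # default system = "A": solves A x = b
stopifnot(all.equal(as.numeric(A %*% x), b))
```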
### See Also
`[solve](solve-methods)`, `<lu>`, and class documentations `[CHMfactor](chmfactor-class)`, `[sparseLU](sparselu-class)`, and `[MatrixFactorization](matrixfactorization-class)`.
### Examples
```
## A close to symmetric example with "quite sparse" inverse:
n1 <- 7; n2 <- 3
dd <- data.frame(a = gl(n1,n2), b = gl(n2,1,n1*n2))# balanced 2-way
X <- sparse.model.matrix(~ -1+ a + b, dd)# no intercept --> even sparser
XXt <- tcrossprod(X)
diag(XXt) <- rep(c(0,0,1,0), length.out = nrow(XXt))
n <- nrow(ZZ <- kronecker(XXt, Diagonal(x=c(4,1))))
image(a <- 2*Diagonal(n) + ZZ %*% Diagonal(x=c(10, rep(1, n-1))))
isSymmetric(a) # FALSE
image(drop0(skewpart(a)))
image(ia0 <- solve(a)) # checker board, dense [but really, a is singular!]
try(solve(a, sparse=TRUE))##-> error [ TODO: assertError ]
ia. <- solve(a, sparse=TRUE, tol = 1e-19)##-> *no* error
if(R.version$arch == "x86_64")
## Fails on 32-bit [Fedora 19, R 3.0.2] from Matrix 1.1-0 on [FIXME ??] only
stopifnot(all.equal(as.matrix(ia.), as.matrix(ia0)))
a <- a + Diagonal(n)
iad <- solve(a)
ias <- solve(a, sparse=TRUE)
stopifnot(all.equal(as(ias,"denseMatrix"), iad, tolerance=1e-14))
I. <- iad %*% a ; image(I.)
I0 <- drop0(zapsmall(I.)); image(I0)
.I <- a %*% iad
.I0 <- drop0(zapsmall(.I))
stopifnot( all.equal(as(I0, "diagonalMatrix"), Diagonal(n)),
all.equal(as(.I0,"diagonalMatrix"), Diagonal(n)) )
```
r None
`dtRMatrix-class-def` Triangular Sparse Compressed Row Matrices
----------------------------------------------------------------
### Description
The `dtRMatrix` class is a class of triangular, sparse matrices in the compressed, row-oriented format. In this implementation the non-zero elements in the rows are sorted into increasing column order.
### Objects from the Class
This class is currently still mostly unimplemented!
Objects can be created by calls of the form `new("dtRMatrix", ...)`.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular. At present only the lower triangle form is allowed.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"`; see `[triangularMatrix](triangularmatrix-class)`.
`j`:
Object of class `"integer"` of length `<nnzero>(.)` (number of non-zero elements). These are the row numbers for each non-zero element in the matrix.
`p`:
Object of class `"integer"` of pointers, one for each row, to the initial (zero-based) index of elements in the row. (Only present in the `dsRMatrix` class.)
`x`:
Object of class `"numeric"` - the non-zero elements of the matrix.
`Dim`:
The dimension (a length-2 `"integer"`)
`Dimnames`:
corresponding names (or `NULL`), inherited from the `[Matrix](matrix-class)`, see there.
### Extends
Class `"dgRMatrix"`, directly. Class `"dsparseMatrix"`, by class `"dgRMatrix"`. Class `"dMatrix"`, by class `"dgRMatrix"`. Class `"sparseMatrix"`, by class `"dgRMatrix"`. Class `"Matrix"`, by class `"dgRMatrix"`.
### Methods
No methods currently with class "dsRMatrix" in the signature.
### See Also
Classes `[dgCMatrix](dgcmatrix-class)`, `[dgTMatrix](dgtmatrix-class)`, `[dgeMatrix](dgematrix-class)`
### Examples
```
(m0 <- new("dtRMatrix"))
(m2 <- new("dtRMatrix", Dim = c(2L,2L),
x = c(5, 1:2), p = c(0L,2:3), j= c(0:1,1L)))
str(m2)
(m3 <- as(Diagonal(2), "RsparseMatrix"))# --> dtRMatrix
```
r None
`updown` Up- and Down-Dating a Cholesky Decomposition
------------------------------------------------------
### Description
Compute the up- or down-dated Cholesky decomposition
### Usage
```
updown(update, C, L)
```
### Arguments
| | |
| --- | --- |
| `update` | logical (`TRUE` or `FALSE`) or `"+"` or `"-"` indicating if an up- or a down-date is to be computed. |
| `C` | any **R** object, coercible to a sparse matrix (i.e., of a subclass of `[sparseMatrix](sparsematrix-class)`). |
| `L` | a Cholesky factor, specifically, of class `"[CHMfactor](chmfactor-class)"`. |
### Value
an updated Cholesky factor, of the same dimension as `L`. Typically of class `"[dCHMsimpl](chmfactor-class)"` (a sub class of `"[CHMfactor](chmfactor-class)"`).
### Methods
`signature(update = "character", C = "mMatrix", L = "CHMfactor")`
..
`signature(update = "logical", C = "mMatrix", L = "CHMfactor")`
..
### Author(s)
Contributed by Nicholas Nagle, University of Tennessee, Knoxville, USA
### References
CHOLMOD manual, currently beginning of chapter 18. ...
### See Also
`[Cholesky](cholesky)`,
### Examples
```
dn <- list(LETTERS[1:3], letters[1:5])
## pointer vectors can be used, and the (i,x) slots are sorted if necessary:
m <- sparseMatrix(i = c(3,1, 3:2, 2:1), p= c(0:2, 4,4,6), x = 1:6, dimnames = dn)
cA <- Cholesky(A <- crossprod(m) + Diagonal(5))
166 * as(cA,"Matrix") ^ 2
uc1 <- updown("+", Diagonal(5), cA)
## Hmm: this loses positive definiteness:
uc2 <- updown("-", 2*Diagonal(5), cA)
image(show(as(cA, "Matrix")))
image(show(c2 <- as(uc2,"Matrix")))# severely negative entries
##--> Warning
```
r None
`replValue-class` Virtual Class "replValue" - Simple Class for subassignment Values
------------------------------------------------------------------------------------
### Description
The class `"replValue"` is a virtual class used for values in signatures for sub-assignment of Matrix matrices.
In fact, it is a simple class union (`[setClassUnion](../../methods/html/setclassunion)`) of `"numeric"` and `"logical"` (and maybe `"complex"` in the future).
### Objects from the Class
Since it is a virtual Class, no objects may be created from it.
### See Also
`[Subassign-methods](subassign-methods)`, also for examples.
### Examples
```
showClass("replValue")
```
r None
`CAex` Albers' example Matrix with "Difficult" Eigen Factorization
-------------------------------------------------------------------
### Description
An example of a sparse matrix for which `[eigen](../../base/html/eigen)()` seemed to be difficult; an unscaled version of it was posted to the web, accompanying an e-mail to R-help (<https://stat.ethz.ch/mailman/listinfo/r-help>), by Casper J Albers, Open University, UK.
### Usage
```
data(CAex)
```
### Format
This is a *72 \* 72* symmetric matrix with 216 non-zero entries in five bands, stored as sparse matrix of class `[dgCMatrix](dgcmatrix-class)`.
### Details
Historical note (2006-03-30): In earlier versions of **R**, `[eigen](../../base/html/eigen)(CAex)` fell into an infinite loop whereas `[eigen](../../base/html/eigen)(CAex, EISPACK=TRUE)` had been okay.
### Examples
```
data(CAex)
str(CAex) # of class "dgCMatrix"
image(CAex)# -> it's a simple band matrix with 5 bands
## and the eigen values are basically 1 (42 times) and 0 (30 x):
zapsmall(ev <- eigen(CAex, only.values=TRUE)$values)
## i.e., the matrix is symmetric, hence
sCA <- as(CAex, "symmetricMatrix")
## and
stopifnot(class(sCA) == "dsCMatrix",
as(sCA, "matrix") == as(CAex, "matrix"))
```
r None
`band` Extract bands of a matrix
---------------------------------
### Description
Returns a new matrix formed by extracting the lower triangle (`tril`) or the upper triangle (`triu`) or a general band relative to the diagonal (`band`), and setting other elements to zero. The general forms of these functions include integer arguments to specify how many diagonal bands above or below the main diagonal are not set to zero.
### Usage
```
band(x, k1, k2, ...)
tril(x, k = 0, ...)
triu(x, k = 0, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix-like object |
| `k,k1,k2` | integers specifying the diagonal bands that will not be set to zero. These are given relative to the main diagonal, which is `k=0`. A negative value of `k` indicates a diagonal below the main diagonal and a positive value indicates a diagonal above the main diagonal. |
| `...` | Optional arguments used by specific methods. (None used at present.) |
### Value
An object of an appropriate matrix class. The class of the value of `tril` or `triu` inherits from `[triangularMatrix](triangularmatrix-class)` when appropriate. Note that the result is of class `[sparseMatrix](sparsematrix-class)` only if `x` is.
### Methods
x = "CsparseMatrix"
method for compressed, sparse, column-oriented matrices.
x = "TsparseMatrix"
method for sparse matrices in triplet format.
x = "RsparseMatrix"
method for compressed, sparse, row-oriented matrices.
x = "ddenseMatrix"
method for dense numeric matrices, including packed numeric matrices.
### See Also
`[bandSparse](bandsparse)` for the *construction* of a banded sparse matrix directly from its non-zero diagonals.
### Examples
```
## A random sparse matrix :
set.seed(7)
m <- matrix(0, 5, 5)
m[sample(length(m), size = 14)] <- rep(1:9, length=14)
(mm <- as(m, "CsparseMatrix"))
tril(mm) # lower triangle
tril(mm, -1) # strict lower triangle
triu(mm, 1) # strict upper triangle
band(mm, -1, 2) # general band
(m5 <- Matrix(rnorm(25), nc = 5))
tril(m5) # lower triangle
tril(m5, -1) # strict lower triangle
triu(m5, 1) # strict upper triangle
band(m5, -1, 2) # general band
(m65 <- Matrix(rnorm(30), nc = 5)) # not square
triu(m65) # result is not a dtrMatrix unless square
(sm5 <- crossprod(m65)) # symmetric
band(sm5, -1, 1)# symmetric band preserves symmetry property
as(band(sm5, -1, 1), "sparseMatrix")# often preferable
```
r None
`condest` Compute Approximate CONDition number and 1-Norm of (Large) Matrices
------------------------------------------------------------------------------
### Description
“Estimate”, i.e., compute approximately the CONDition number of a (potentially large, often sparse) matrix `A`. It works by applying a fast *randomized* approximation of the 1-norm, `norm(A,"1")`, via `onenormest(.)`.
### Usage
```
condest(A, t = min(n, 5), normA = norm(A, "1"),
silent = FALSE, quiet = TRUE)
onenormest(A, t = min(n, 5), A.x, At.x, n,
silent = FALSE, quiet = silent,
iter.max = 10, eps = 4 * .Machine$double.eps)
```
### Arguments
| | |
| --- | --- |
| `A` | a square matrix, optional for `onenormest()`, where instead of `A`, `A.x` and `At.x` can be specified, see there. |
| `t` | number of columns to use in the iterations. |
| `normA` | number; (an estimate of) the 1-norm of `A`, by default `<norm>(A, "1")`; may be replaced by an estimate. |
| `silent` | logical indicating if warning and (by default) convergence messages should be displayed. |
| `quiet` | logical indicating if convergence messages should be displayed. |
| `A.x, At.x` | when `A` is missing, these two must be given as functions which compute `A %*% x`, or `t(A) %*% x`, respectively. |
| `n` | `== nrow(A)`, only needed when `A` is not specified. |
| `iter.max` | maximal number of iterations for the 1-norm estimator. |
| `eps` | the relative change that is deemed irrelevant. |
### Details
`<condest>()` calls `<lu>(A)`, and subsequently `onenormest(A.x = , At.x = )` to compute an approximate norm of the *inverse* of `A`, *A^{-1}*, in a way which keeps using sparse matrices efficiently when `A` is sparse.
Note that `onenormest()` uses random vectors and hence *both* functions' results are random, i.e., depend on the random seed, see, e.g., `[set.seed](../../base/html/random)()`.
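As a concrete illustration of the matrix-free interface mentioned above, `onenormest()` can be given the two matrix-vector products `A.x` and `At.x` instead of `A` itself. A small sketch (reusing the `KNex` data from the examples below; estimates are random, so the two results need not coincide exactly):

```r
## Matrix-free use of onenormest(): pass the two products, not the matrix.
library(Matrix)
data(KNex)
A <- with(KNex, crossprod(mm))          # sparse, symmetric
set.seed(27)
o1 <- onenormest(A)                     # with the matrix itself
o2 <- onenormest(A.x  = function(x) A %*% x,
                 At.x = function(x) crossprod(A, x),  # == t(A) %*% x
                 n = nrow(A))
## both estimates approximate the exact 1-norm:
c(o1$est, o2$est, norm(A, "1"))
```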
### Value
Both functions return a `[list](../../base/html/list)`; `condest()` with components,
| | |
| --- | --- |
| `est` | a number *> 0*, the estimated (1-norm) condition number *k.*; when *r :=*`rcond(A)`, *1/k. ~= r*. |
| `v` | the maximal *A x* column, scaled to norm(v) = 1. Consequently, *norm(A v) = norm(A) / est*; when `est` is large, `v` is an approximate null vector. |
The function `onenormest()` returns a list with components,
| | |
| --- | --- |
| `est` | a number *> 0*, the estimated `norm(A, "1")`. |
| `v` | 0-1 integer vector length `n`, with an `1` at the index `j` with maximal column `A[,j]` in *A*. |
| `w` | numeric vector, the largest *A x* found. |
| `iter` | the number of iterations used. |
### Author(s)
This is based on Octave's `condest()` and `onenormest()` implementations with original author Jason Riedy, U Berkeley; translation to **R** and adaptation by Martin Maechler.
### References
Nicholas J. Higham and Françoise Tisseur (2000). A Block Algorithm for Matrix 1-Norm Estimation, with an Application to 1-Norm Pseudospectra. *SIAM J. Matrix Anal. Appl.* **21**, 4, 1185–1201. <https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.7.9804>
William W. Hager (1984). Condition Estimates. *SIAM J. Sci. Stat. Comput.* **5**, 311–316.
### See Also
`<norm>`, `<rcond>`.
### Examples
```
data(KNex)
mtm <- with(KNex, crossprod(mm))
system.time(ce <- condest(mtm))
sum(abs(ce$v)) ## || v ||_1 == 1
## Prove that || A v || = || A || / est (as ||v|| = 1):
stopifnot(all.equal(norm(mtm %*% ce$v),
norm(mtm) / ce$est))
## reciprocal
1 / ce$est
system.time(rc <- rcond(mtm)) # takes ca 3 x longer
rc
all.equal(rc, 1/ce$est) # TRUE -- the approximation was good
one <- onenormest(mtm)
str(one) ## est = 12.3
## the maximal column:
which(one$v == 1) # mostly 4, rarely 1, depending on random seed
```
r None
`isSymmetric-methods` Methods for Function isSymmetric in Package 'Matrix'
---------------------------------------------------------------------------
### Description
`isSymmetric(M)` returns a `[logical](../../base/html/logical)` indicating if `M` is a symmetric matrix. This (now) is a base function with a default method for the traditional matrices of `[class](../../base/html/class)` `"matrix"`. Methods here are defined for virtual Matrix classes such that it works for all objects inheriting from class `[Matrix](matrix-class)`.
### See Also
`[forceSymmetric](forcesymmetric)`, `<symmpart>`, and the formal class (and subclasses) `"[symmetricMatrix](symmetricmatrix-class)"`.
### Examples
```
isSymmetric(Diagonal(4)) # TRUE of course
M <- Matrix(c(1,2,2,1), 2,2)
isSymmetric(M) # TRUE (*and* of formal class "dsyMatrix")
isSymmetric(as(M, "dgeMatrix")) # still symmetric, even if not "formally"
isSymmetric(triu(M)) # FALSE
## Look at implementations:
showMethods("isSymmetric", includeDefs=TRUE)# "ANY": base's S3 generic; 6 more
```
r None
`rsparsematrix` Random Sparse Matrix
-------------------------------------
### Description
Generate a random sparse matrix efficiently. The default has rounded gaussian non-zero entries, and `rand.x = NULL` generates random patter**n** matrices, i.e. inheriting from `[nsparseMatrix](nsparsematrix-classes)`.
### Usage
```
rsparsematrix(nrow, ncol, density, nnz = round(density * maxE),
symmetric = FALSE,
rand.x = function(n) signif(rnorm(n), 2), ...)
```
### Arguments
| | |
| --- | --- |
| `nrow, ncol` | number of rows and columns, i.e., the matrix dimension (`[dim](../../base/html/dim)`). |
| `density` | optional number in *[0,1]*, the density is the proportion of non-zero entries among all matrix entries. If specified it determines the default for `nnz`, otherwise `nnz` needs to be specified. |
| `nnz` | number of non-zero entries, for a sparse matrix typically considerably smaller than `nrow*ncol`. Must be specified if `density` is not. |
| `symmetric` | logical indicating if result should be a matrix of class `[symmetricMatrix](symmetricmatrix-class)`. Note that in the symmetric case, `nnz` denotes the number of non zero entries of the upper (or lower) part of the matrix, including the diagonal. |
| `rand.x` | `[NULL](../../base/html/null)` or the random number generator for the `x` slot, a `[function](../../base/html/function)` such that `rand.x(n)` generates a numeric vector of length `n`. Typical examples are `rand.x = rnorm`, or `rand.x = runif`; the default is nice for didactical purposes. |
| `...` | optionally further arguments passed to `[sparseMatrix](sparsematrix)()`, notably `repr`. |
### Details
The algorithm first samples “encoded” *(i,j)*s without replacement, via one dimensional indices, if not `symmetric` `[sample.int](../../base/html/sample)(nrow*ncol, nnz)`, then—if `rand.x` is not `NULL`—gets `x <- rand.x(nnz)` and calls `[sparseMatrix](sparsematrix)(i=i, j=j, x=x, ..)`. When `rand.x=NULL`, `[sparseMatrix](sparsematrix)(i=i, j=j, ..)` will return a patter**n** matrix (i.e., inheriting from `[nsparseMatrix](nsparsematrix-classes)`).
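The two-step scheme just described can be sketched in a few lines of plain R. This is illustrative only: `rsparse0()` is a hypothetical helper, and it omits the `symmetric` case and the `...` arguments of the real function:

```r
## Sketch of the sampling scheme: encoded 1-D indices without replacement,
## then decode to (i,j) and build the sparse matrix.
library(Matrix)
rsparse0 <- function(nrow, ncol, nnz,
                     rand.x = function(n) signif(rnorm(n), 2)) {
  ind <- sample.int(nrow * ncol, nnz)     # distinct encoded positions
  i <- ((ind - 1L) %%  nrow) + 1L         # decode row index
  j <- ((ind - 1L) %/% nrow) + 1L         # decode column index
  if (is.null(rand.x))                    # pattern ("n..") matrix
    sparseMatrix(i = i, j = j, dims = c(nrow, ncol))
  else
    sparseMatrix(i = i, j = j, x = rand.x(nnz), dims = c(nrow, ncol))
}
set.seed(17)
M <- rsparse0(8, 12, nnz = 30)
nnzero(M)  # 30: sampling without replacement gives distinct positions
```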
### Value
a `[sparseMatrix](sparsematrix-class)`, say `M` of dimension (nrow, ncol), i.e., with `dim(M) == c(nrow, ncol)`, if `symmetric` is not true, with `nzM <- <nnzero>(M)` fulfilling `nzM <= nnz` and typically, `nzM == nnz`.
### Author(s)
Martin Maechler
### Examples
```
set.seed(17)# to be reproducible
M <- rsparsematrix(8, 12, nnz = 30) # small example, not very sparse
M
M1 <- rsparsematrix(1000, 20, nnz = 123, rand.x = runif)
summary(M1)
## a random *symmetric* Matrix
(S9 <- rsparsematrix(9, 9, nnz = 10, symmetric=TRUE)) # dsCMatrix
nnzero(S9)# ~ 20: as 'nnz' only counts one "triangle"
## a random patter*n* aka boolean Matrix (no 'x' slot):
(n7 <- rsparsematrix(5, 12, nnz = 10, rand.x = NULL))
## a [T]riplet representation sparseMatrix:
T2 <- rsparsematrix(40, 12, nnz = 99, repr = "T")
head(T2)
```
r None
`sparseQR-class` Sparse QR decomposition of a sparse matrix
------------------------------------------------------------
### Description
Objects class `"sparseQR"` represent a QR decomposition of a sparse *m x n* (“long”: *m >= n*) rectangular matrix *A*, typically resulting from `[qr](qr-methods)()`, see ‘Details’ notably about row and column permutations for pivoting.
### Details
For a sparse *m x n* (“long”: *m >= n*) rectangular matrix *A*, the sparse QR decomposition is either
of the form *P A = Q R* with a (row) permutation matrix *P*, (encoded in the `p` slot of the result) if the `q` slot is of length 0,
or of the form *P A P\* = Q R* with an extra (column) permutation matrix *P\** (encoded in the `q` slot). Note that the row permutation *P A* in **R** is simply `A[p+1, ]` where `p` is the `p`-slot, a 0-based permutation of `1:m` applied to the rows of the original matrix.
If the `q` slot has length `n` it is a 0-based permutation of `1:n` applied to the columns of the original matrix to reduce the amount of “fill-in” in the matrix *R*, and *A P\** in **R** is simply `A[ , q+1]`.
*R* is an *m by n* matrix that is zero below the main diagonal, i.e., upper triangular (*m by n*) with *m-n* extra zero rows.
The matrix *Q* is a "virtual matrix". It is the product of *n* Householder transformations. The information to generate these Householder transformations is stored in the `V` and `beta` slots.
Note however that `qr.Q()` returns the row permuted matrix *Q\* := P^(-1) Q = P'Q* as permutation matrices are orthogonal; and *Q\** is orthogonal itself because *Q* and *P* are. This is useful because then, as in the dense matrix and base **R** matrix `[qr](qr-methods)` case, we have the mathematical identity
*P A = Q\* R,*
in **R** as
```
A[p+1,] == qr.Q(*) %*% R .
```
The `"sparseQR"` methods for the `qr.*` functions return objects of class `"dgeMatrix"` (see `[dgeMatrix](dgematrix-class)`). Results from `qr.coef`, `qr.resid` and `qr.fitted` (when `k == ncol(R)`) are well-defined and should match those from the corresponding dense matrix calculations. However, because the matrix `Q` is not uniquely defined, the results of `qr.qy` and `qr.qty` do not necessarily match those from the corresponding dense matrix calculations.
Also, the results of `qr.qy` and `qr.qty` apply to the permuted column order when the `q` slot has length `n`.
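The identity *P A = Q\* R* stated above can be checked numerically. A sketch, using the `KNex` example data and accessing the `p`, `q` and `R` slots directly:

```r
## Check  A[p+1, ]  ==  qr.Q(.) %*% R  (after column pivoting, if any):
library(Matrix)
data(KNex)
qx <- qr(KNex$mm)
A  <- KNex$mm
if (length(qx@q)) A <- A[, qx@q + 1L]   # apply the column permutation
all.equal(as.matrix(A[qx@p + 1L, ]),
          as.matrix(qr.Q(qx) %*% qx@R))
```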
### Objects from the Class
Objects can be created by calls of the form `new("sparseQR", ...)` but are more commonly created by function `[qr](../../base/html/qr)` applied to a sparse matrix such as a matrix of class `[dgCMatrix](dgcmatrix-class)`.
### Slots
`V`:
Object of class `"dgCMatrix"`. The columns of `V` are the vectors that generate the Householder transformations of which the matrix Q is composed.
`beta`:
Object of class `"numeric"`, the normalizing factors for the Householder transformations.
`p`:
Object of class `"integer"`: Permutation (of `0:(n-1)`) applied to the rows of the original matrix.
`R`:
Object of class `"dgCMatrix"`: An upper triangular matrix of the same dimension as *X*.
`q`:
Object of class `"integer"`: Permutation applied from the right, i.e., to the *columns* of the original matrix. Can be of length 0 which implies no permutation.
### Methods
qr.R
`signature(qr = "sparseQR")`: compute the upper triangular *R* matrix of the QR decomposition. Note that this currently warns because of possible permutation mismatch with the classical `qr.R()` result, *and* you can suppress these warnings by setting `[options](../../base/html/options)()` either `"Matrix.quiet.qr.R"` or (the more general) either `"Matrix.quiet"` to `[TRUE](../../base/html/logical)`.
qr.Q
`signature(qr = "sparseQR")`: compute the orthogonal *Q* matrix of the QR decomposition.
qr.coef
`signature(qr = "sparseQR", y = "ddenseMatrix")`: ...
qr.coef
`signature(qr = "sparseQR", y = "matrix")`: ...
qr.coef
`signature(qr = "sparseQR", y = "numeric")`: ...
qr.fitted
`signature(qr = "sparseQR", y = "ddenseMatrix")`: ...
qr.fitted
`signature(qr = "sparseQR", y = "matrix")`: ...
qr.fitted
`signature(qr = "sparseQR", y = "numeric")`: ...
qr.qty
`signature(qr = "sparseQR", y = "ddenseMatrix")`: ...
qr.qty
`signature(qr = "sparseQR", y = "matrix")`: ...
qr.qty
`signature(qr = "sparseQR", y = "numeric")`: ...
qr.qy
`signature(qr = "sparseQR", y = "ddenseMatrix")`: ...
qr.qy
`signature(qr = "sparseQR", y = "matrix")`: ...
qr.qy
`signature(qr = "sparseQR", y = "numeric")`: ...
qr.resid
`signature(qr = "sparseQR", y = "ddenseMatrix")`: ...
qr.resid
`signature(qr = "sparseQR", y = "matrix")`: ...
qr.resid
`signature(qr = "sparseQR", y = "numeric")`: ...
solve
`signature(a = "sparseQR", b = "ANY")`: For `solve(a,b)`, simply uses `qr.coef(a,b)`.
### See Also
`[qr](../../base/html/qr)`, `[qr.Q](sparseqr-class)`, `[qr.R](../../base/html/qraux)`, `[qr.fitted](../../base/html/qr)`, `[qr.resid](../../base/html/qr)`, `[qr.coef](../../base/html/qr)`, `[qr.qty](../../base/html/qr)`, `[qr.qy](../../base/html/qr)`,
Permutation matrices in the Matrix package: `[pMatrix](pmatrix-class)`; `[dgCMatrix](dgcmatrix-class)`, `[dgeMatrix](dgematrix-class)`.
### Examples
```
data(KNex)
mm <- KNex $ mm
y <- KNex $ y
y. <- as(as.matrix(y), "dgCMatrix")
str(qrm <- qr(mm))
qc <- qr.coef (qrm, y); qc. <- qr.coef (qrm, y.) # 2nd failed in Matrix <= 1.1-0
qf <- qr.fitted(qrm, y); qf. <- qr.fitted(qrm, y.)
qs <- qr.resid (qrm, y); qs. <- qr.resid (qrm, y.)
stopifnot(all.equal(qc, as.numeric(qc.), tolerance=1e-12),
all.equal(qf, as.numeric(qf.), tolerance=1e-12),
all.equal(qs, as.numeric(qs.), tolerance=1e-12),
all.equal(qf+qs, y, tolerance=1e-12))
```
r None
`invPerm` Inverse Permutation Vector
-------------------------------------
### Description
From a permutation vector `p`, compute its *inverse* permutation vector.
### Usage
```
invPerm(p, zero.p = FALSE, zero.res = FALSE)
```
### Arguments
| | |
| --- | --- |
| `p` | an integer vector of length, say, `n`. |
| `zero.p` | logical indicating if `p` contains values `0:(n-1)` or rather (by default, `zero.p = FALSE`) `1:n`. |
| `zero.res` | logical indicating if the result should contain values `0:(n-1)` or rather (by default, `zero.res = FALSE`) `1:n`. |
### Value
an integer vector of the same length (`n`) as `p`. By default, (`zero.p = FALSE, zero.res = FALSE`), `invPerm(p)` is the same as `[order](../../base/html/order)(p)` or `[sort.list](../../base/html/order)(p)` and for that case, the function is equivalent to `invPerm. <- function(p) { p[p] <- seq_along(p) ; p }`.
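The stated equivalences can be checked directly on a random permutation (a quick sketch; `invPerm.` is the two-line equivalent given above):

```r
## invPerm(p) agrees with order(p) and with the in-place formulation:
library(Matrix)
set.seed(3)
p <- sample(7)
invPerm. <- function(p) { p[p] <- seq_along(p); p }
stopifnot(identical(invPerm(p), order(p)),
          identical(invPerm(p), invPerm.(p)))
```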
### Author(s)
Martin Maechler
### See Also
the class of permutation matrices, `[pMatrix](pmatrix-class)`.
### Examples
```
p <- sample(10) # a random permutation vector
ip <- invPerm(p)
p[ip] # == 1:10
## they are indeed inverse of each other:
stopifnot(
identical(p[ip], 1:10),
identical(ip[p], 1:10),
identical(invPerm(ip), p)
)
```
r None
`dtpMatrix-class` Packed Triangular Dense Matrices - "dtpMatrix"
-----------------------------------------------------------------
### Description
The `"dtpMatrix"` class is the class of triangular, dense, numeric matrices in packed storage. The `"dtrMatrix"` class is the same except in nonpacked storage.
### Objects from the Class
Objects can be created by calls of the form `new("dtpMatrix",
...)` or by coercion from other classes of matrices.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, and "L", for lower triangular.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"`; see `[triangularMatrix](triangularmatrix-class)`.
`x`:
Object of class `"numeric"`. The numeric values that constitute the matrix, stored in column-major order. For a packed square matrix of dimension *d \* d*, `length(x)` is of length *d(d+1)/2* (also when `diag == "U"`!).
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), inherited from the `[Matrix](matrix-class)`, see there.
### Extends
Class `"ddenseMatrix"`, directly. Class `"triangularMatrix"`, directly. Class `"dMatrix"` and more by class `"ddenseMatrix"` etc, see the examples.
### Methods
%\*%
`signature(x = "dtpMatrix", y = "dgeMatrix")`: Matrix multiplication; ditto for several other signature combinations, see `showMethods("%*%", class = "dtpMatrix")`.
coerce
`signature(from = "dtpMatrix", to = "dtrMatrix")`
coerce
`signature(from = "dtpMatrix", to = "matrix")`
determinant
`signature(x = "dtpMatrix", logarithm = "logical")`: the `[determinant](../../base/html/det)(x)` trivially is `prod(diag(x))`, but computed on log scale to prevent over- and underflow.
diag
`signature(x = "dtpMatrix")`: ...
norm
`signature(x = "dtpMatrix", type = "character")`: ...
rcond
`signature(x = "dtpMatrix", norm = "character")`: ...
solve
`signature(a = "dtpMatrix", b = "...")`: efficiently using internal backsolve or forwardsolve, see `<solve-methods>`.
t
`signature(x = "dtpMatrix")`: `t(x)` remains a `"dtpMatrix"`, lower triangular if `x` is upper triangular, and vice versa.
### See Also
Class `[dtrMatrix](dtrmatrix-class)`
### Examples
```
showClass("dtrMatrix")
example("dtrMatrix-class", echo=FALSE)
(p1 <- as(T2, "dtpMatrix"))
str(p1)
(pp <- as(T, "dtpMatrix"))
ip1 <- solve(p1)
stopifnot(length(p1@x) == 3, length(pp@x) == 3,
p1 @ uplo == T2 @ uplo, pp @ uplo == T @ uplo,
identical(t(pp), p1), identical(t(p1), pp),
all((l.d <- p1 - T2) == 0), is(l.d, "dtpMatrix"),
all((u.d <- pp - T ) == 0), is(u.d, "dtpMatrix"),
l.d@uplo == T2@uplo, u.d@uplo == T@uplo,
identical(t(ip1), solve(pp)), is(ip1, "dtpMatrix"),
all.equal(as(solve(p1,p1), "diagonalMatrix"), Diagonal(2)))
```
r None
`ntrMatrix-class` Triangular Dense Logical Matrices
----------------------------------------------------
### Description
The `"ntrMatrix"` class is the class of triangular, dense, logical matrices in nonpacked storage. The `"ntpMatrix"` class is the same except in packed storage.
### Slots
`x`:
Object of class `"logical"`. The logical values that constitute the matrix, stored in column-major order.
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, and "L", for lower triangular.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"`; see `[triangularMatrix](triangularmatrix-class)`.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)` class.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
`"ntrMatrix"` extends class `"ngeMatrix"`, directly, whereas
`"ntpMatrix"` extends class `"ndenseMatrix"`, directly.
Both extend Class `"triangularMatrix"`, directly, and class `"denseMatrix"`, `"lMatrix"` and others, *in*directly, use `[showClass](../../methods/html/rclassutils)("nsyMatrix")`, e.g., for details.
### Methods
Currently, mainly `[t](../../base/html/t)()` and coercion methods (for `[as](../../methods/html/as)(.)`); use, e.g., `[showMethods](../../methods/html/showmethods)(class="ntrMatrix")` for details.
### See Also
Classes `[ngeMatrix](ngematrix-class)`, `[Matrix](matrix-class)`; function `[t](../../base/html/t)`
### Examples
```
showClass("ntrMatrix")
str(new("ntpMatrix"))
(nutr <- as(upper.tri(matrix(,4,4)), "ntrMatrix"))
str(nutp <- as(nutr, "ntpMatrix"))# packed matrix: only 10 = (4+1)*4/2 entries
!nutp ## the logical negation (is *not* logical triangular !)
## but this one is:
stopifnot(all.equal(nutp, as(!!nutp, "ntpMatrix")))
```
r None
`all.equal-methods` Matrix Package Methods for Function all.equal()
--------------------------------------------------------------------
### Description
Methods for function `[all.equal](../../base/html/all.equal)()` (from **R** package base) are defined for all `[Matrix](matrix-class)` classes.
### Methods
target = "Matrix", current = "Matrix"
\
target = "ANY", current = "Matrix"
\
target = "Matrix", current = "ANY"
these three methods are simply using `[all.equal.numeric](../../base/html/all.equal)` directly and work via `[as.vector](../../base/html/vector)()`.
There are more methods, notably also for `"[sparseVector](sparsevector-class)"`'s, see `showMethods("all.equal")`.
### Examples
```
showMethods("all.equal")
(A <- spMatrix(3,3, i= c(1:3,2:1), j=c(3:1,1:2), x = 1:5))
ex <- expand(lu. <- lu(A))
stopifnot( all.equal(as(A[lu.@p + 1L, lu.@q + 1L], "CsparseMatrix"),
lu.@L %*% lu.@U),
with(ex, all.equal(as(P %*% A %*% Q, "CsparseMatrix"),
L %*% U)),
with(ex, all.equal(as(A, "CsparseMatrix"),
t(P) %*% L %*% U %*% t(Q))))
```
r None
`ddiMatrix-class` Class "ddiMatrix" of Diagonal Numeric Matrices
-----------------------------------------------------------------
### Description
The class `"ddiMatrix"` of numerical diagonal matrices. Note that diagonal matrices now extend *`sparseMatrix`*, whereas they did extend dense matrices earlier.
### Objects from the Class
Objects can be created by calls of the form `new("ddiMatrix", ...)` but typically rather via `[Diagonal](diagonal)`.
### Slots
`x`:
numeric vector. For an *n \* n* matrix, the `x` slot is of length *n* or `0`, depending on the `diag` slot:
`diag`:
`"character"` string, either `"U"` or `"N"` where `"U"` denotes unit-diagonal, i.e., identity matrices.
`Dim`,`Dimnames`:
matrix dimension and `[dimnames](../../base/html/dimnames)`, see the `[Matrix](matrix-class)` class description.
### Extends
Class `"[diagonalMatrix](diagonalmatrix-class)"`, directly. Class `"[dMatrix](dmatrix-class)"`, directly. Class `"[sparseMatrix](sparsematrix-class)"`, indirectly, see `[showClass](../../methods/html/rclassutils)("ddiMatrix")`.
### Methods
%\*%
`signature(x = "ddiMatrix", y = "ddiMatrix")`: ...
### See Also
Class `[diagonalMatrix](diagonalmatrix-class)` and function `[Diagonal](diagonal)`.
### Examples
```
(d2 <- Diagonal(x = c(10,1)))
str(d2)
## slightly larger in internal size:
str(as(d2, "sparseMatrix"))
M <- Matrix(cbind(1,2:4))
M %*% d2 #> `fast' multiplication
chol(d2) # trivial
stopifnot(is(cd2 <- chol(d2), "ddiMatrix"),
all.equal(cd2@x, c(sqrt(10),1)))
```
r None
`index-class` Virtual Class "index" - Simple Class for Matrix Indices
----------------------------------------------------------------------
### Description
The class `"index"` is a virtual class used for indices (in signatures) for matrix indexing and sub-assignment of Matrix matrices.
In fact, it is currently implemented as a simple class union (`[setClassUnion](../../methods/html/setclassunion)`) of `"numeric"`, `"logical"` and `"character"`.
### Objects from the Class
Since it is a virtual Class, no objects may be created from it.
### See Also
`[[-methods](xtrct-methods)`, and `[Subassign-methods](subassign-methods)`, also for examples.
### Examples
```
showClass("index")
```
r None
`ngeMatrix-class` Class "ngeMatrix" of General Dense Nonzero-pattern Matrices
------------------------------------------------------------------------------
### Description
This is the class of general dense nonzero-pattern matrices, see `[nMatrix](nmatrix-class)`.
### Slots
`x`:
Object of class `"logical"`. The logical values that constitute the matrix, stored in column-major order.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)` class.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
Class `"ndenseMatrix"`, directly. Class `"lMatrix"`, by class `"ndenseMatrix"`. Class `"denseMatrix"`, by class `"ndenseMatrix"`. Class `"Matrix"`, by class `"ndenseMatrix"`. Class `"Matrix"`, by class `"ndenseMatrix"`.
### Methods
Currently, mainly `[t](../../base/html/t)()` and coercion methods (for `[as](../../methods/html/as)(.)`); use, e.g., `[showMethods](../../methods/html/showmethods)(class="ngeMatrix")` for details.
### See Also
Non-general logical dense matrix classes such as `[ntrMatrix](ntrmatrix-class)`, or `[nsyMatrix](nsymatrix-class)`; *sparse* logical classes such as `[ngCMatrix](nsparsematrix-classes)`.
### Examples
```
showClass("ngeMatrix")
## "lgeMatrix" is really more relevant
```
r None
`expand` Expand a (Matrix) Decomposition into Factors
------------------------------------------------------
### Description
Expands decompositions stored in compact form into factors.
### Usage
```
expand(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix decomposition. |
| `...` | further arguments passed to or from other methods. |
### Details
This is a generic function with special methods for different types of decompositions, see `[showMethods](../../methods/html/showmethods)(expand)` to list them all.
### Value
The expanded decomposition, typically a list of matrix factors.
### Note
Factors for decompositions such as `lu` and `qr` can be stored in a compact form. The function `expand` allows all factors to be fully expanded.
### See Also
The LU `<lu>`, and the `[Cholesky](cholesky)` decompositions which have `expand` methods; `<facmul>`.
### Examples
```
(x <- Matrix(round(rnorm(9),2), 3, 3))
(ex <- expand(lux <- lu(x)))
```
| programming_docs |
r None
`CHMfactor-class` CHOLMOD-based Cholesky Factorizations
--------------------------------------------------------
### Description
The virtual class `"CHMfactor"` is a class of CHOLMOD-based Cholesky factorizations of symmetric, sparse, compressed, column-oriented matrices. Such a factorization is simplicial (virtual class `"CHMsimpl"`) or supernodal (virtual class `"CHMsuper"`). Objects that inherit from these classes are either numeric factorizations (classes `"dCHMsimpl"` and `"dCHMsuper"`) or symbolic factorizations (classes `"nCHMsimpl"` and `"nCHMsuper"`).
### Usage
```
isLDL(x)
## S4 method for signature 'CHMfactor'
update(object, parent, mult = 0, ...)
.updateCHMfactor(object, parent, mult)
## and many more methods, notably,
## solve(a, b, system = c("A","LDLt","LD","DLt","L","Lt","D","P","Pt"), ...)
## ----- see below
```
### Arguments
| | |
| --- | --- |
| `x,object,a` | a `"CHMfactor"` object (almost always the result of `[Cholesky](cholesky)()`). |
| `parent` | a `"[dsCMatrix](dscmatrix-class)"` or `"[dgCMatrix](dgcmatrix-class)"` matrix object with the same nonzero pattern as the matrix that generated `object`. If `parent` is symmetric, of class `"[dsCMatrix](dscmatrix-class)"`, then `object` should be a decomposition of a matrix with the same nonzero pattern as `parent`. If `parent` is not symmetric then `object` should be the decomposition of a matrix with the same nonzero pattern as `tcrossprod(parent)`. Since Matrix version 1.0-8, other `"[sparseMatrix](sparsematrix-class)"` matrices are coerced to `[dsparseMatrix](dsparsematrix-class)` and `[CsparseMatrix](csparsematrix-class)` if needed. |
| `mult` | a numeric scalar (default 0). `mult` times the identity matrix is (implicitly) added to `parent` or `tcrossprod(parent)` before updating the decomposition `object`. |
| `...` | potentially further arguments to the methods. |
### Objects from the Class
Objects can be created by calls of the form `new("dCHMsuper", ...)` but are more commonly created via `[Cholesky](cholesky)()`, applied to `[dsCMatrix](dscmatrix-class)` or `[lsCMatrix](lsparsematrix-classes)` objects.
For an introduction, it may be helpful to look at the `expand()` method and examples below.
### Slots
of `"CHMfactor"` and all classes inheriting from it:
`perm`:
An integer vector giving the 0-based permutation of the rows and columns chosen to reduce fill-in and for post-ordering.
`colcount`:
Object of class `"integer"` ....
`type`:
Object of class `"integer"` ....
Slots of the non-virtual classes “[dl]CHM(super|simpl)”:
`p`:
Object of class `"integer"` of pointers, one for each column, to the initial (zero-based) index of elements in the column. Only present in classes that contain `"CHMsimpl"`.
`i`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the row numbers for each non-zero element in the matrix. Only present in classes that contain `"CHMsimpl"`.
`x`:
For the `"d*"` classes: `"numeric"` - the non-zero elements of the matrix.
### Methods
isLDL
`(x)` returns a `[logical](../../base/html/logical)` indicating if `x` is an *LDL'* decomposition or (when `FALSE`) an *LL'* one.
coerce
`signature(from = "CHMfactor", to = "sparseMatrix")` (or equivalently, `to = "Matrix"` or `to = "triangularMatrix"`)
`as(*, "sparseMatrix")` returns the lower triangular factor *L* from the *LL'* form of the Cholesky factorization. Note that (currently) the factor from the *LL'* form is always returned, even if the `"CHMfactor"` object represents an *LDL'* decomposition. Furthermore, this is the factor after any fill-reducing permutation has been applied. See the `expand` method for obtaining both the permutation matrix, *P*, and the lower Cholesky factor, *L*.
coerce
`signature(from = "CHMfactor", to = "pMatrix")` returns the permutation matrix *P*, representing the fill-reducing permutation used in the decomposition.
expand
`signature(x = "CHMfactor")` returns a list with components `P`, the matrix representing the fill-reducing permutation, and `L`, the lower triangular Cholesky factor. The original positive-definite matrix *A* corresponds to the product *A = P'LL'P*. Because of fill-in during the decomposition the product may apparently have more non-zeros than the original matrix, even after applying `<drop0>` to it. However, the extra "non-zeros" should be very small in magnitude.
image
`signature(x = "CHMfactor"):` Plot the image of the lower triangular factor, *L*, from the decomposition. This method is equivalent to `image(as(x, "sparseMatrix"))` so the comments in the above description of the `coerce` method apply here too.
solve
`signature(a = "CHMfactor", b = "ddenseMatrix"), system= *`: The `solve` methods for a `"CHMfactor"` object take an optional third argument `system` whose value can be one of the character strings `"A"`, `"LDLt"`, `"LD"`, `"DLt"`, `"L"`, `"Lt"`, `"D"`, `"P"` or `"Pt"`. This argument describes the system to be solved. The default, `"A"`, is to solve *Ax = b* for *x* where `A` is the sparse, positive-definite matrix that was factored to produce `a`. Analogously, `system = "L"` returns the solution *x* of *Lx = b*; the other system codes work similarly, **except** `"P"` and `"Pt"`, which apply the permutation directly: e.g., `x <- solve(a, b, system="P")` is equivalent to `x <- P %*% b`.
See also `<solve-methods>`.
determinant
`signature(x = "CHMfactor", logarithm = "logical")` returns the determinant (or the logarithm of the determinant, if `logarithm = TRUE`, the default) of the factor *L* from the *LL'* decomposition (even if the decomposition represented by `x` is of the *LDL'* form (!)). This is the square root of the determinant (half the logarithm of the determinant when `logarithm = TRUE`) of the positive-definite matrix that was decomposed.
update
`signature(object = "CHMfactor"), parent`. The `[update](../../stats/html/update)` method requires an additional argument `parent`, which is *either* a `"[dsCMatrix](dscmatrix-class)"` object, say *A*, (with the same structure of nonzeros as the matrix that was decomposed to produce `object`) or a general `"[dgCMatrix](dgcmatrix-class)"`, say *M*, where *A := M M'* (`== tcrossprod(parent)`) is used for *A*. Further it provides an optional argument `mult`, a numeric scalar. This method updates the numeric values in `object` to the decomposition of *A+mI* where *A* is the matrix above (either the `parent` or *M M'*) and *m* is the scalar `mult`. Because only the numeric values are updated this method should be faster than creating and decomposing *A+mI*. It is not uncommon to want, say, the determinant of *A+mI* for many different values of *m*. This method would be the preferred approach in such cases.
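The determinant-for-many-shifts use case above can be sketched as follows; the matrix `A`, its size, and the shift values here are invented purely for illustration:

```r
library(Matrix)
set.seed(7)
M  <- rsparsematrix(40, 40, density = 0.1)
A  <- crossprod(M) + Diagonal(40)    # sparse symmetric positive definite
CA <- Cholesky(A)                    # factor A once (symbolic + numeric)
## log-determinant of A + m*I for several m, reusing CA via update():
for (m in c(0.1, 1, 10)) {
  ld <- 2 * c(determinant(update(CA, A, mult = m))$modulus)  # 2*logdet(L)
  stopifnot(all.equal(ld, c(determinant(A + m * Diagonal(40))$modulus)))
}
```

Each `update()` call only refreshes the numeric values in the existing factorization, so no new symbolic analysis of the sparsity pattern is performed.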
### See Also
`[Cholesky](cholesky)`, also for examples; class `[dgCMatrix](dgcmatrix-class)`.
### Examples
```
## An example for the expand() method
n <- 1000; m <- 200; nnz <- 2000
set.seed(1)
M1 <- spMatrix(n, m,
i = sample(n, nnz, replace = TRUE),
j = sample(m, nnz, replace = TRUE),
x = round(rnorm(nnz),1))
XX <- crossprod(M1) ## = M1'M1 = M M' where M <- t(M1)
CX <- Cholesky(XX)
isLDL(CX)
str(CX) ## a "dCHMsimpl" object
r <- expand(CX)
L.P <- with(r, crossprod(L,P)) ## == L'P
PLLP <- crossprod(L.P) ## == (L'P)' L'P == P'LL'P = XX = M M'
b <- sample(m)
stopifnot(all.equal(PLLP, XX),
all(as.vector(solve(CX, b, system="P" )) == r$P %*% b),
all(as.vector(solve(CX, b, system="Pt")) == t(r$P) %*% b) )
u1 <- update(CX, XX, mult=pi)
u2 <- update(CX, t(M1), mult=pi) # with the original M, where XX = M M'
stopifnot(all.equal(u1,u2, tol=1e-14))
## [ See help(Cholesky) for more examples ]
## -------------
```
r None
`sparse.model.matrix` Construct Sparse Design / Model Matrices
---------------------------------------------------------------
### Description
Construct a sparse model or “design” matrix, from a formula and data frame (`sparse.model.matrix`) or a single factor (`fac2sparse`).
The `fac2[Ss]parse()` functions are utilities, also used internally in the principal user level function `sparse.model.matrix()`.
### Usage
```
sparse.model.matrix(object, data = environment(object),
contrasts.arg = NULL, xlev = NULL, transpose = FALSE,
drop.unused.levels = FALSE, row.names = TRUE,
sep = "", verbose = FALSE, ...)
fac2sparse(from, to = c("d", "i", "l", "n", "z"),
drop.unused.levels = TRUE, repr = c("C","T","R"), giveCsparse)
fac2Sparse(from, to = c("d", "i", "l", "n", "z"),
drop.unused.levels = TRUE, repr = c("C","T","R"), giveCsparse,
factorPatt12, contrasts.arg = NULL)
```
### Arguments
| | |
| --- | --- |
| `object` | an object of an appropriate class. For the default method, a model formula or terms object. |
| `data` | a data frame created with `[model.frame](../../stats/html/model.frame)`. If another sort of object, `model.frame` is called first. |
| `contrasts.arg` | for `sparse.model.matrix()`: a list whose entries are contrasts suitable for input to the `[contrasts](../../stats/html/contrasts)` replacement function and whose names are the names of columns of `data` containing `[factor](../../base/html/factor)`s. For `fac2Sparse()`: a character string, `NULL`, or something (coercible to) `"[sparseMatrix](sparsematrix-class)"`, specifying the contrasts to be applied to the factor levels. |
| `xlev` | to be used as argument of `[model.frame](../../stats/html/model.frame)` if `data` has no `"terms"` attribute. |
| `transpose` | logical indicating if the *transpose* should be returned; if the transposed is used anyway, setting `transpose = TRUE` is more efficient. |
| `drop.unused.levels` | should factors have unused levels dropped? The default for `sparse.model.matrix` has been changed to `FALSE`, 2010-07, for compatibility with **R**'s standard (dense) `[model.matrix](../../stats/html/model.matrix)()`. |
| `row.names` | logical indicating if row names should be used. |
| `sep` | `[character](../../base/html/character)` string passed to `[paste](../../base/html/paste)()` when constructing column names from the variable name and its levels. |
| `verbose` | logical or integer indicating if (and how much) progress output should be printed. |
| `...` | further arguments passed to or from other methods. |
| `from` | (for `fac2sparse()`:) a `[factor](../../base/html/factor)`. |
| `to` | a character indicating the “kind” of sparse matrix to be returned. The default, `"d"` is for `[double](../../base/html/double)`. |
| `giveCsparse` | **deprecated**, replaced with `repr`; logical indicating if the result must be a `[CsparseMatrix](csparsematrix-class)`. |
| `repr` | `[character](../../base/html/character)` string, one of `"C"`, `"T"`, or `"R"`, specifying the sparse *repr*esentation to be used for the result, i.e., one from the super classes `[CsparseMatrix](csparsematrix-class)`, `[TsparseMatrix](tsparsematrix-class)`, or `[RsparseMatrix](rsparsematrix-class)`. |
| `factorPatt12` | logical vector, say `fp`, of length two; when `fp[1]` is true, return “contrasted” `t(X)`; when `fp[2]` is true, the original (“dummy”) `t(X)`, i.e., the result of `[fac2sparse](sparse.model.matrix)()`. |
### Value
a sparse matrix, extending `[CsparseMatrix](csparsematrix-class)` (for `fac2sparse()` if `repr = "C"` as per default; a `[TsparseMatrix](tsparsematrix-class)` or `[RsparseMatrix](rsparsematrix-class)`, otherwise).
For `fac2Sparse()`, a `[list](../../base/html/list)` of length two, both components with the corresponding transposed model matrix, where the corresponding `factorPatt12` is true.
Note that `[model.Matrix](../../matrixmodels/html/model.matrix)(*, sparse=TRUE)` from package MatrixModels may often be preferable to `sparse.model.matrix()` nowadays, as `model.Matrix()` returns `[modelMatrix](../../matrixmodels/html/modelmatrix-class)` objects with additional slots `assign` and `contrasts` which relate back to the variables used.
`fac2sparse()`, the basic workhorse of `sparse.model.matrix()`, returns the *transpose* (`[t](../../base/html/t)`) of the model matrix.
### Author(s)
Doug Bates and Martin Maechler, with initial suggestions from Tim Hesterberg.
### See Also
`[model.matrix](../../stats/html/model.matrix)` in standard **R**'s package stats.
`[model.Matrix](../../matrixmodels/html/model.matrix)`, which calls `sparse.model.matrix` or `model.matrix` depending on its `sparse` argument, may be preferred to `sparse.model.matrix`.
`as(f, "sparseMatrix")` (see `coerce(from = "factor", ..)` in the class doc [sparseMatrix](sparsematrix-class)) produces the *transposed* sparse model matrix for a single factor `f` (and *no* contrasts).
### Examples
```
dd <- data.frame(a = gl(3,4), b = gl(4,1,12))# balanced 2-way
options("contrasts") # the default: "contr.treatment"
sparse.model.matrix(~ a + b, dd)
sparse.model.matrix(~ -1+ a + b, dd)# no intercept --> even sparser
sparse.model.matrix(~ a + b, dd, contrasts = list(a="contr.sum"))
sparse.model.matrix(~ a + b, dd, contrasts = list(b="contr.SAS"))
## Sparse method is equivalent to the traditional one :
stopifnot(all(sparse.model.matrix(~ a + b, dd) ==
Matrix(model.matrix(~ a + b, dd), sparse=TRUE)),
all(sparse.model.matrix(~ 0+ a + b, dd) ==
Matrix(model.matrix(~ 0+ a + b, dd), sparse=TRUE)))
(ff <- gl(3,4,, c("X","Y", "Z")))
fac2sparse(ff) # 3 x 12 sparse Matrix of class "dgCMatrix"
##
## X 1 1 1 1 . . . . . . . .
## Y . . . . 1 1 1 1 . . . .
## Z . . . . . . . . 1 1 1 1
## can also be computed via sparse.model.matrix():
f30 <- gl(3,0 )
f12 <- gl(3,0, 12)
stopifnot(
all.equal(t( fac2sparse(ff) ),
sparse.model.matrix(~ 0+ff),
tolerance = 0, check.attributes=FALSE),
is(M <- fac2sparse(f30, drop= TRUE),"CsparseMatrix"), dim(M) == c(0, 0),
is(M <- fac2sparse(f30, drop=FALSE),"CsparseMatrix"), dim(M) == c(3, 0),
is(M <- fac2sparse(f12, drop= TRUE),"CsparseMatrix"), dim(M) == c(0,12),
is(M <- fac2sparse(f12, drop=FALSE),"CsparseMatrix"), dim(M) == c(3,12)
)
```
r None
`MatrixClass` The Matrix (Super-) Class of a Class
---------------------------------------------------
### Description
Return the (maybe super-)`[class](../../base/html/class)` of class `cl` from package Matrix, returning `[character](../../base/html/character)(0)` if there is none.
### Usage
```
MatrixClass(cl, cld = getClassDef(cl), ...Matrix = TRUE,
dropVirtual = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `cl` | string, class name |
| `cld` | its class definition |
| `...Matrix` | `[logical](../../base/html/logical)` indicating if the result must be of pattern `"[dlniz]..Matrix"` where the first letter "[dlniz]" denotes the content kind. |
| `dropVirtual` | `[logical](../../base/html/logical)` indicating if virtual classes are included or not. |
| `...` | further arguments are passed to `[.selectSuperClasses](../../methods/html/selectsuperclasses)()`. |
### Value
a `[character](../../base/html/character)` string
### Author(s)
Martin Maechler, 24 Mar 2009
### See Also
`[Matrix](matrix-class)`, the mother of all Matrix classes.
### Examples
```
mkA <- setClass("A", contains="dgCMatrix")
(A <- mkA())
stopifnot(identical(
MatrixClass("A"),
"dgCMatrix"))
```
r None
`drop0` Drop "Explicit Zeroes" from a Sparse Matrix
----------------------------------------------------
### Description
Returns a sparse matrix with no “explicit zeroes”, i.e., all zero or `FALSE` entries are dropped from the explicitly indexed matrix entries.
### Usage
```
drop0(x, tol = 0, is.Csparse = NA)
```
### Arguments
| | |
| --- | --- |
| `x` | a Matrix, typically sparse, i.e., inheriting from `[sparseMatrix](sparsematrix-class)`. |
| `tol` | non-negative number to be used as tolerance for checking if an entry *x[i,j]* should be considered to be zero. |
| `is.Csparse` | logical indicating prior knowledge about the “Csparseness” of `x`. This exists for possible speedup reasons only. |
### Value
a Matrix like `x` but with no explicit zeros, i.e., `!any(x@x == 0)`, always inheriting from `[CsparseMatrix](csparsematrix-class)`.
### Note
When a sparse matrix is the result of matrix multiplications, you may want to consider combining `drop0()` with `[zapsmall](../../base/html/zapsmall)()`, see the example.
### See Also
`[spMatrix](spmatrix)`, class `[sparseMatrix](sparsematrix-class)`; `<nnzero>`
### Examples
```
m <- spMatrix(10,20, i= 1:8, j=2:9, x = c(0:2,3:-1))
m
drop0(m)
## A larger example:
t5 <- new("dtCMatrix", Dim = c(5L, 5L), uplo = "L",
x = c(10, 1, 3, 10, 1, 10, 1, 10, 10),
i = c(0L,2L,4L, 1L, 3L,2L,4L, 3L, 4L),
p = c(0L, 3L, 5L, 7:9))
TT <- kronecker(t5, kronecker(kronecker(t5,t5), t5))
IT <- solve(TT)
I. <- TT %*% IT ; nnzero(I.) # 697 ( = 625 + 72 )
I.0 <- drop0(zapsmall(I.))
## which actually can be more efficiently achieved by
I.. <- drop0(I., tol = 1e-15)
stopifnot(all(I.0 == Diagonal(625)),
nnzero(I..) == 625)
```
r None
`chol` Choleski Decomposition - 'Matrix' S4 Generic and Methods
----------------------------------------------------------------
### Description
Compute the Choleski factorization of a real symmetric positive-definite square matrix.
### Usage
```
chol(x, ...)
## S4 method for signature 'dsCMatrix'
chol(x, pivot = FALSE, ...)
## S4 method for signature 'dsparseMatrix'
chol(x, pivot = FALSE, cache = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a (sparse or dense) square matrix, here inheriting from class `[Matrix](matrix-class)`; if `x` is not positive definite, an error is signalled. |
| `pivot` | logical indicating if pivoting is to be used. Currently, this is *not* made use of for dense matrices. |
| `cache` | logical indicating if the result should be cached in `x@factors`; note that this argument is experimental and only available for some sparse matrices. |
| `...` | potentially further arguments passed to methods. |
### Details
Note that these Cholesky factorizations are typically *cached* with `x` currently, and these caches are available in `x@factors`, which may be useful for the sparse case when `pivot = TRUE`, where the permutation can be retrieved; see also the examples.
However, this caching should not be considered part of the API and should not be relied upon. Rather, consider `[Cholesky](cholesky)()` in such situations, since `chol(x, pivot=TRUE)` uses the same algorithm (but not the same return value!) as `[Cholesky](cholesky)(x, LDL=FALSE)`, and `chol(x)` corresponds to `[Cholesky](cholesky)(x, perm=FALSE, LDL=FALSE)`.
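A minimal sketch of this correspondence; the small 5 x 5 Toeplitz matrix is just an arbitrary sparse symmetric positive-definite example:

```r
library(Matrix)
mm <- toeplitz(as(c(10, 0, 1, 0, 3), "sparseVector"))  # 5 x 5 sparse s.p.d.
R  <- chol(mm)                                # upper triangular: R'R == mm
CH <- Cholesky(mm, perm = FALSE, LDL = FALSE) # same algorithm as chol(mm)
L  <- as(CH, "Matrix")                        # lower triangular factor L
stopifnot(all.equal(crossprod(R),  mm),       # mm = R'R
          all.equal(tcrossprod(L), mm))       # mm = L L'
```

Here `chol()` returns the upper triangular matrix directly, while `Cholesky()` returns a `"CHMfactor"` object from which *L* must be extracted by coercion.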
### Value
a matrix of class `[Cholesky](cholesky-class)`, i.e., upper triangular: *R* such that *R'R = x* (if `pivot=FALSE`) *or* *P' R'R P = x* (if `pivot=TRUE` and *P* is the corresponding permutation matrix).
### Methods
Use `[showMethods](../../methods/html/showmethods)(chol)` to see all; some are worth mentioning here:
chol
`signature(x = "dgeMatrix")`: works via `"dpoMatrix"`, see class `[dpoMatrix](dpomatrix-class)`.
chol
`signature(x = "dpoMatrix")`: Returns (and stores) the Cholesky decomposition of `x`, via LAPACK routines `dlacpy` and `dpotrf`.
chol
`signature(x = "dppMatrix")`: Returns (and stores) the Cholesky decomposition via LAPACK routine `dpptrf`.
chol
`signature(x = "dsCMatrix", pivot = "logical")`: Returns (and stores) the Cholesky decomposition of `x`. If `pivot` is true, the Approximate Minimal Degree (AMD) algorithm is used to create a reordering of the rows and columns of `x` so as to reduce fill-in.
### References
Timothy A. Davis (2006) *Direct Methods for Sparse Linear Systems*, SIAM Series “Fundamentals of Algorithms”.
Tim Davis (1996), An approximate minimal degree ordering algorithm, *SIAM J. Matrix Analysis and Applications*, **17**, 4, 886–905.
### See Also
The default from base, `[chol](../../base/html/chol)`; for more flexibility (but not returning a matrix!) `[Cholesky](cholesky)`.
### Examples
```
showMethods(chol, inherited = FALSE) # show different methods
sy2 <- new("dsyMatrix", Dim = as.integer(c(2,2)), x = c(14, NA,32,77))
(c2 <- chol(sy2))#-> "Cholesky" matrix
stopifnot(all.equal(c2, chol(as(sy2, "dpoMatrix")), tolerance= 1e-13))
str(c2)
## An example where chol() can't work
(sy3 <- new("dsyMatrix", Dim = as.integer(c(2,2)), x = c(14, -1, 2, -7)))
try(chol(sy3)) # error, since it is not positive definite
## A sparse example --- exemplifying 'pivot'
(mm <- toeplitz(as(c(10, 0, 1, 0, 3), "sparseVector"))) # 5 x 5
(R <- chol(mm)) ## default: pivot = FALSE
R2 <- chol(mm, pivot=FALSE)
stopifnot( identical(R, R2), all.equal(crossprod(R), mm) )
(R. <- chol(mm, pivot=TRUE))# nice band structure,
## but of course crossprod(R.) is *NOT* equal to mm
## --> see Cholesky() and its examples, for the pivot structure & factorization
stopifnot(all.equal(sqrt(det(mm)), det(R)),
all.equal(prod(diag(R)), det(R)),
all.equal(prod(diag(R.)), det(R)))
## a second, even sparser example:
(M2 <- toeplitz(as(c(1,.5, rep(0,12), -.1), "sparseVector")))
c2 <- chol(M2)
C2 <- chol(M2, pivot=TRUE)
## For the experts, check the caching of the factorizations:
ff <- M2@factors[["spdCholesky"]]
FF <- M2@factors[["sPdCholesky"]]
L1 <- as(ff, "Matrix")# pivot=FALSE: no perm.
L2 <- as(FF, "Matrix"); P2 <- as(FF, "pMatrix")
stopifnot(identical(t(L1), c2),
all.equal(t(L2), C2, tolerance=0),#-- why not identical()?
all.equal(M2, tcrossprod(L1)), # M = LL'
all.equal(M2, crossprod(crossprod(L2, P2)))# M = P'L L'P
)
```
| programming_docs |
r None
`sparseVector` Sparse Vector Construction from Nonzero Entries
---------------------------------------------------------------
### Description
User friendly construction of sparse vectors, i.e., objects inheriting from `[class](../../base/html/class)` `[sparseVector](sparsevector-class)`, from indices and values of its non-zero entries.
### Usage
```
sparseVector(x, i, length)
```
### Arguments
| | |
| --- | --- |
| `x` | vector of the non-zero entries; may be missing in which case a `"nsparseVector"` will be returned. |
| `i` | integer vector (of the same length as `x`) specifying the indices of the non-zero (or non-`TRUE`) entries of the sparse vector. |
| `length` | length of the sparse vector. |
### Details
zero entries in `x` are dropped automatically, analogously as `<drop0>()` acts on sparse matrices.
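For example (a made-up vector; the explicit zero supplied at index 5 is not stored):

```r
library(Matrix)
sv <- sparseVector(x = c(1, 0, 2), i = c(3, 5, 9), length = 10)
## only the two truly non-zero entries (at indices 3 and 9) remain:
stopifnot(length(sv@i) == 2,
          all(sv@i == c(3, 9)),
          all(sv@x == c(1, 2)))
```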
### Value
a sparse vector, i.e., inheriting from `[class](../../base/html/class)` `[sparseVector](sparsevector-class)`.
### Author(s)
Martin Maechler
### See Also
`[sparseMatrix](sparsematrix)()` constructor for sparse matrices; the class `[sparseVector](sparsevector-class)`.
### Examples
```
str(sv <- sparseVector(x = 1:10, i = sample(999, 10), length=1000))
sx <- c(0,0,3, 3.2, 0,0,0,-3:1,0,0,2,0,0,5,0,0)
ss <- as(sx, "sparseVector")
stopifnot(identical(ss,
sparseVector(x = c(2, -1, -2, 3, 1, -3, 5, 3.2),
i = c(15L, 10:9, 3L,12L,8L,18L, 4L), length = 20L)))
(ns <- sparseVector(i= c(7, 3, 2), length = 10))
stopifnot(identical(ns,
new("nsparseVector", length = 10, i = c(2, 3, 7))))
```
r None
`abIndex-class` Class "abIndex" of Abstract Index Vectors
----------------------------------------------------------
### Description
The `"abIndex"` `[class](../../base/html/class)`, short for “Abstract Index Vector”, is used for dealing with large index vectors more efficiently than using integer (or `[numeric](../../base/html/numeric)`) vectors of the kind `2:1000000` or `c(0:1e5, 1000:1e6)`.
Note that the current implementation details are subject to change, and if you consider working with these classes, please contact the package maintainers (`packageDescription("Matrix")$Maintainer`).
### Objects from the Class
Objects can be created by calls of the form `new("abIndex", ...)`, but more easily and typically either by `as(x, "abIndex")` where `x` is an integer (valued) vector, or directly by `[abIseq](abiseq)()` and combination `[c](../../base/html/c)(...)` of such.
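A small sketch of that construction, using the `[abIseq](abiseq)()` constructor and `c()` to combine two runs:

```r
library(Matrix)
## two arithmetic-sequence runs, stored in compressed ("abstract") form:
ai <- c(abIseq(1, 50), abIseq(1000, 1050))
stopifnot(is(ai, "abIndex"),
          identical(as(ai, "integer"), c(1:50, 1000:1050)))
```

The compressed representation only records the run structure, so the object stays small even when the expanded index vector would be very long.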
### Slots
`kind`:
a `[character](../../base/html/character)` string, one of `("int32", "double", "rleDiff")`, denoting the internal structure of the abIndex object.
`x`:
Object of class `"numLike"`; is used (i.e., has nonzero length) if and only if the object is *not* compressed, i.e., currently exactly when `kind != "rleDiff"`.
`rleD`:
object of class `"[rleDiff](rlediff-class)"`, used for compression via `[rle](../../base/html/rle)`.
### Methods
as.numeric, as.integer, as.vector
`signature(x = "abIndex")`: ...
[
`signature(x = "abIndex", i = "index", j = "ANY", drop = "ANY")`: ...
coerce
`signature(from = "numeric", to = "abIndex")`: ...
coerce
`signature(from = "abIndex", to = "numeric")`: ...
coerce
`signature(from = "abIndex", to = "integer")`: ...
length
`signature(x = "abIndex")`: ...
Ops
`signature(e1 = "numeric", e2 = "abIndex")`: These and the following arithmetic and logic operations are **not yet implemented**; see `[Ops](../../methods/html/s4groupgeneric)` for a list of these (S4) group methods.
Ops
`signature(e1 = "abIndex", e2 = "abIndex")`: ...
Ops
`signature(e1 = "abIndex", e2 = "numeric")`: ...
Summary
`signature(x = "abIndex")`: ...
show
`("abIndex")`: simple `[show](../../methods/html/show)` method, building on `show(<rleDiff>)`.
is.na
`("abIndex")`: works analogously to regular vectors.
is.finite, is.infinite
`("abIndex")`: ditto.
### Note
This is currently experimental and not yet used for our own code. Please contact us (`packageDescription("Matrix")$Maintainer`), if you plan to make use of this class.
Partly builds on ideas and code from Jens Oehlschlaegel, as implemented (around 2008, in the GPL'ed part of) package ff.
### See Also
`[rle](../../base/html/rle)` (base) which is used here; `[numeric](../../base/html/numeric)`
### Examples
```
showClass("abIndex")
ii <- c(-3:40, 20:70)
str(ai <- as(ii, "abIndex"))# note
ai # -> show() method
stopifnot(identical(-3:20,
as(abIseq1(-3,20), "vector")))
```
r None
`printSpMatrix` Format and Print Sparse Matrices Flexibly
----------------------------------------------------------
### Description
Format and print sparse matrices flexibly. These are the “workhorses” used by the `[format](../../base/html/format)`, `[show](../../methods/html/show)` and `[print](../../base/html/print)` methods for sparse matrices. If `x` is large, `printSpMatrix2(x)` calls `printSpMatrix()` twice, namely, for the first and the last few rows, suppressing those in between, and also suppresses columns when `x` is too wide.
`printSpMatrix()` basically prints the result of `formatSpMatrix()`.
### Usage
```
formatSpMatrix(x, digits = NULL, maxp = 1e9,
cld = getClassDef(class(x)), zero.print = ".",
col.names, note.dropping.colnames = TRUE, uniDiag = TRUE,
align = c("fancy", "right"))
printSpMatrix(x, digits = NULL, maxp = max(100L, getOption("max.print")),
cld = getClassDef(class(x)),
zero.print = ".", col.names, note.dropping.colnames = TRUE,
uniDiag = TRUE, col.trailer = "",
align = c("fancy", "right"))
printSpMatrix2(x, digits = NULL, maxp = max(100L, getOption("max.print")),
zero.print = ".", col.names, note.dropping.colnames = TRUE,
uniDiag = TRUE, suppRows = NULL, suppCols = NULL,
col.trailer = if(suppCols) "......" else "",
align = c("fancy", "right"),
width = getOption("width"), fitWidth = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object inheriting from class `[sparseMatrix](sparsematrix-class)`. |
| `digits` | significant digits to use for printing, see `[print.default](../../base/html/print.default)`, the default, `[NULL](../../base/html/null)`, corresponds to using `[getOption](../../base/html/options)("digits")`. |
| `maxp` | integer, default from `[options](../../base/html/options)(max.print)`, influences how many entries of large matrices are printed at all. Typically should not be smaller than around 1000; values smaller than 100 are silently “rounded up” to 100. |
| `cld` | the class definition of `x`; must be equivalent to `[getClassDef](../../methods/html/getclass)(class(x))` and exists mainly for possible speedup. |
| `zero.print` | character which should be printed for *structural* zeroes. The default `"."` may occasionally be replaced by `" "` (blank); using `"0"` would look almost like `print()`ing of non-sparse matrices. |
| `col.names` | logical or string specifying if and how column names of `x` should be printed, possibly abbreviated. The default is taken from `[options](../../base/html/options)("sparse.colnames")` if that is set, otherwise `FALSE` unless there are fewer than ten columns. When `TRUE` the full column names are printed. When `col.names` is a string beginning with `"abb"` or `"sub"` and ending with an integer `n` (i.e., of the form `"abb... <n>"`), the column names are `[abbreviate](../../base/html/abbreviate)()`d or `[substring](../../base/html/substr)()`ed to (target) length `n`, see the examples. |
| `note.dropping.colnames` | logical specifying, when `col.names` is `FALSE` if the dropping of the column names should be noted, `TRUE` by default. |
| `uniDiag` | logical indicating if the diagonal entries of a sparse unit triangular or unit-diagonal matrix should be formatted as `"I"` instead of `"1"` (to emphasize that the 1's are “structural”). |
| `col.trailer` | a string to be appended to the right of each column; this is typically made use of by `[show](../../methods/html/show)(<sparseMatrix>)` only, when suppressing columns. |
| `suppRows, suppCols` | logicals or `NULL`, for `printSpMatrix2()` specifying if rows or columns should be suppressed in printing. If `NULL`, sensible defaults are determined from `[dim](../../base/html/dim)(x)` and `[options](../../base/html/options)(c("width", "max.print"))`. Setting both to `FALSE` may be a very bad idea. |
| `align` | a string specifying how the `zero.print` codes should be aligned, i.e., padded as strings. The default, `"fancy"`, takes some effort to align the typical `zero.print = "."` with the position of `0`, i.e., the first decimal (one left of decimal point) of the numbers printed, whereas `align = "right"` just makes use of `[print](../../base/html/print)(*, right = TRUE)`. |
| `width` | number, a positive integer, indicating the approximately desired (line) width of the output, see also `fitWidth`. |
| `fitWidth` | logical indicating if some effort should be made to match the desired `width` or temporarily enlarge that if deemed necessary. |
### Details
formatSpMatrix:
If `x` is large, only the first rows, making up approximately the first `maxp` entries, are used, otherwise all of `x`. `[.formatSparseSimple](formatsparsem)()` is applied to (a dense version of) the matrix. Then, `[formatSparseM](formatsparsem)` is used, unless in trivial cases or for sparse matrices without `x` slot.
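A tiny sketch (with a made-up matrix) of the character-matrix result described above:

```r
library(Matrix)
m  <- spMatrix(4, 6, i = 1:4, j = 2:5, x = c(1.5, -2, 10, 0.25))
fm <- formatSpMatrix(m)   # character matrix; structural zeros shown as "."
stopifnot(is.character(fm), all(dim(fm) == dim(m)))
```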
### Value
| | |
| --- | --- |
| `formatSpMatrix()` | returns a character matrix with possibly empty column names, depending on `col.names` etc, see above. |
| `printSpMatrix*()` | return `x` *invisibly*, see `[invisible](../../base/html/invisible)`. |
### Author(s)
Martin Maechler
### See Also
the virtual class `[sparseMatrix](sparsematrix-class)` and the classes extending it; maybe `[sparseMatrix](sparsematrix)` or `[spMatrix](spmatrix)` as simple constructors of such matrices.
The underlying utilities `[formatSparseM](formatsparsem)` and `.formatSparseSimple()` (on the same page).
### Examples
```
f1 <- gl(5, 3, labels = LETTERS[1:5])
X <- as(f1, "sparseMatrix")
X ## <==> show(X) <==> print(X)
t(X) ## shows column names, since only 5 columns
X2 <- as(gl(12, 3, labels = paste(LETTERS[1:12],"c",sep=".")),
"sparseMatrix")
X2
## less nice, but possible:
print(X2, col.names = TRUE) # use [,1] [,2] .. => does not fit
## Possibilities with column names printing:
t(X2) # suppressing column names
print(t(X2), col.names=TRUE)
print(t(X2), zero.print = "", col.names="abbr. 1")
print(t(X2), zero.print = "-", col.names="substring 2")
```
r None
`norm` Matrix Norms
--------------------
### Description
Computes a matrix norm of `x`, using Lapack for dense matrices. The norm can be the one (`"O"`, or `"1"`) norm, the infinity (`"I"`) norm, the Frobenius (`"F"`) norm, the maximum modulus (`"M"`) among elements of a matrix, or the spectral norm or 2-norm (`"2"`), as determined by the value of `type`.
### Usage
```
norm(x, type, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a real or complex matrix. |
| `type` | A character indicating the type of norm desired.
`"O"`, `"o"` or `"1"`
specifies the one norm (maximum absolute column sum);
`"I"` or `"i"`
specifies the infinity norm (maximum absolute row sum);
`"F"` or `"f"`
specifies the Frobenius norm (the Euclidean norm of `x` treated as if it were a vector);
`"M"` or `"m"`
specifies the maximum modulus of all the elements in `x`; and
`"2"`
specifies the “spectral norm” or 2-norm, which is the largest singular value (`[svd](../../base/html/svd)`) of `x`. The default is `"O"`. Only the first character of `type[1]` is used. |
| `...` | further arguments passed to or from other methods. |
### Details
For dense matrices, the methods eventually call the Lapack functions `dlange`, `dlansy`, `dlantr`, `zlange`, `zlansy`, and `zlantr`.
### Value
A numeric value of class `"norm"`, representing the quantity chosen according to `type`.
### References
Anderson, E., et al. (1994). *LAPACK User's Guide,* 2nd edition, SIAM, Philadelphia.
### See Also
`[onenormest](condest)()`, an *approximate* randomized estimate of the 1-norm condition number, efficient for large sparse matrices.
The `[norm](../../base/html/norm)()` function from **R**'s base package.
### Examples
```
x <- Hilbert(9)
norm(x)# = "O" = "1"
stopifnot(identical(norm(x), norm(x, "1")))
norm(x, "I")# the same, because 'x' is symmetric
allnorms <- function(d) vapply(c("1","I","F","M","2"),
norm, x = d, double(1))
allnorms(x)
allnorms(Hilbert(10))
i <- c(1,3:8); j <- c(2,9,6:10); x <- 7 * (1:7)
A <- sparseMatrix(i, j, x = x) ## 8 x 10 "dgCMatrix"
(sA <- sparseMatrix(i, j, x = x, symmetric = TRUE)) ## 10 x 10 "dsCMatrix"
(tA <- sparseMatrix(i, j, x = x, triangular= TRUE)) ## 10 x 10 "dtCMatrix"
(allnorms(A) -> nA)
allnorms(sA)
allnorms(tA)
stopifnot(all.equal(nA, allnorms(as(A, "matrix"))),
all.equal(nA, allnorms(tA))) # because tA == rbind(A, 0, 0)
A. <- A; A.[1,3] <- NA
stopifnot(is.na(allnorms(A.))) # gave error
```
r None
`all-methods` "Matrix" Methods for Functions all() and any()
-------------------------------------------------------------
### Description
The basic **R** functions `[all](../../base/html/all)` and `[any](../../base/html/any)` now have methods for `[Matrix](matrix-class)` objects and should behave as for `[matrix](../../base/html/matrix)` ones.
### Methods
all
`signature(x = "Matrix", ..., na.rm = FALSE)`: ...
any
`signature(x = "Matrix", ..., na.rm = FALSE)`: ...
all
`signature(x = "ldenseMatrix", ..., na.rm = FALSE)`: ...
all
`signature(x = "lsparseMatrix", ..., na.rm = FALSE)`: ...
### Examples
```
M <- Matrix(1:12 +0, 3,4)
all(M >= 1) # TRUE
any(M < 0 ) # FALSE
MN <- M; MN[2,3] <- NA; MN
all(MN >= 0) # NA
any(MN < 0) # NA
any(MN < 0, na.rm = TRUE) # -> FALSE
```
r None
`ddenseMatrix-class` Virtual Class "ddenseMatrix" of Numeric Dense Matrices
----------------------------------------------------------------------------
### Description
This is the virtual class of all dense numeric (i.e., **d**ouble, hence *“ddense”*) S4 matrices.
Its most important subclass is the `[dgeMatrix](dgematrix-class)` class.
### Extends
Class `"dMatrix"` directly; class `"Matrix"`, by the above.
### Slots
the same slots as its subclass `[dgeMatrix](dgematrix-class)`, see there.
### Methods
Most methods are implemented via `as(*, "dgeMatrix")` and are mainly used as “fallbacks” when the subclass doesn't need its own specialized method.
Use `[showMethods](../../methods/html/showmethods)(class = "ddenseMatrix", where =
"package:Matrix")` for an overview.
### See Also
The virtual classes `[Matrix](matrix-class)`, `[dMatrix](dmatrix-class)`, and `[dsparseMatrix](dsparsematrix-class)`.
### Examples
```
showClass("ddenseMatrix")
showMethods(class = "ddenseMatrix", where = "package:Matrix")
```
r None
`sparseMatrix-class` Virtual Class "sparseMatrix" — Mother of Sparse Matrices
------------------------------------------------------------------------------
### Description
Virtual Mother Class of All Sparse Matrices
### Slots
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
`Dimnames`:
a list of length two - inherited from class `Matrix`, see `[Matrix](matrix-class)`.
### Extends
Class `"Matrix"`, directly.
### Methods
show
`(object = "sparseMatrix")`: The `[show](../../methods/html/show)` method for sparse matrices prints *“structural”* zeroes as `"."` using `[printSpMatrix](printspmatrix)()` which allows further customization.
print
`signature(x = "sparseMatrix")`, ....
The `[print](../../base/html/print)` method for sparse matrices by default is the same as `show()` but can be called with extra optional arguments, see `[printSpMatrix](printspmatrix)()`.
format
`signature(x = "sparseMatrix")`, ....
The `[format](../../base/html/format)` method for sparse matrices, see `[formatSpMatrix](printspmatrix)()` for details such as the extra optional arguments.
summary
`(object = "sparseMatrix", uniqT=FALSE)`: Returns an object of S3 class `"sparseSummary"` which is basically a `[data.frame](../../base/html/data.frame)` with columns `(i,j,x)` (or just `(i,j)` for `[nsparseMatrix](nsparsematrix-classes)` class objects) with the stored (typically non-zero) entries. The `[print](../../base/html/print)` method resembles Matlab's way of printing sparse matrices, and also the MatrixMarket format, see `[writeMM](externalformats)`.
cbind2
`(x = *, y = *)`: several methods for binding matrices together, column-wise, see the basic `[cbind](../../base/html/cbind)` and `[rbind](../../base/html/cbind)` functions.
Note that the result will typically be sparse, even when one argument is dense and larger than the sparse one.
rbind2
`(x = *, y = *)`: binding matrices together row-wise, see `cbind2` above.
determinant
`(x = "sparseMatrix", logarithm=TRUE)`: `[determinant](../../base/html/det)()` methods for sparse matrices typically work via `[Cholesky](cholesky)` or `<lu>` decompositions.
diag
`(x = "sparseMatrix")`: extracts the diagonal of a sparse matrix.
dim<-
`signature(x = "sparseMatrix", value = "ANY")`: allows one to *reshape* a sparse matrix into a sparse matrix with the same entries but different dimensions. `value` must be of length two and fulfill `prod(value) == prod(dim(x))`.
coerce
`signature(from = "factor", to = "sparseMatrix")`: Coercion of a factor to `"sparseMatrix"` produces the matrix of indicator **rows** stored as an object of class `"dgCMatrix"`. To obtain columns representing the interaction of the factor and a numeric covariate, replace the `"x"` slot of the result by the numeric covariate then take the transpose. Missing values (`[NA](../../base/html/na)`) from the factor are translated to columns of all `0`s.
See also `[colSums](colsums)`, `<norm>`, ... for methods with separate help pages.
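A minimal sketch of the `dim<-` reshaping method described above (assuming package Matrix is attached):

```r
library(Matrix)
M <- sparseMatrix(i = c(1, 4), j = c(2, 3), x = c(5, 7))  # 4 x 3, two entries
dim(M) <- c(2, 6)   # reshape: prod(dim) unchanged, entries kept
M                   # still sparse, now 2 x 6
```

The stored entries are reinterpreted in column-major order, exactly as `dim<-` does for a dense base matrix.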
### Note
In method selection for multiplication operations (i.e. `%*%` and the two-argument form of `[crossprod](../../base/html/crossprod)`) the sparseMatrix class takes precedence in the sense that if one operand is a sparse matrix and the other is any type of dense matrix then the dense matrix is coerced to a `dgeMatrix` and the appropriate sparse matrix method is used.
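To illustrate the precedence note above (a sketch assuming Matrix is attached): multiplying a sparse matrix by a plain dense matrix dispatches the sparse method, which coerces the dense operand internally.

```r
library(Matrix)
S <- sparseMatrix(i = c(1, 3), j = c(2, 1), x = c(10, 2))  # 3 x 2 sparse
D <- matrix(1, 2, 4)                                       # base R dense matrix
P <- S %*% D        # sparse method used; D is coerced internally
class(P)            # a "Matrix" result, not a base matrix
```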
### See Also
`[sparseMatrix](sparsematrix)`, and its references, such as `[xtabs](../../stats/html/xtabs)(*, sparse=TRUE)`, or `<sparse.model.matrix>()`, for constructing sparse matrices.
`[T2graph](graph2t)` for conversion of `"graph"` objects (package graph) to and from sparse matrices.
### Examples
```
showClass("sparseMatrix") ## and look at the help() of its subclasses
M <- Matrix(0, 10000, 100)
M[1,1] <- M[2,3] <- 3.14
M ## show(.) method suppresses printing of the majority of rows
data(CAex); dim(CAex) # 72 x 72 matrix
determinant(CAex) # works via sparse lu(.)
## factor -> t( <sparse design matrix> ) :
(fact <- gl(5, 3, 30, labels = LETTERS[1:5]))
(Xt <- as(fact, "sparseMatrix")) # indicator rows
## missing values --> all-0 columns:
f.mis <- fact
i.mis <- c(3:5, 17)
is.na(f.mis) <- i.mis
Xt != (X. <- as(f.mis, "sparseMatrix")) # differ only in columns 3:5,17
stopifnot(all(X.[,i.mis] == 0), all(Xt[,-i.mis] == X.[,-i.mis]))
```
r None
`abIseq` Sequence Generation of "abIndex", Abstract Index Vectors
------------------------------------------------------------------
### Description
Generation of abstract index vectors, i.e., objects of class `"[abIndex](abindex-class)"`.
`abIseq()` is designed to work entirely like `[seq](../../base/html/seq)`, but producing `"abIndex"` vectors.
`abIseq1()` is its basic building block, where `abIseq1(n,m)` corresponds to `n:m`.
`c(x, ...)` will return an `"abIndex"` vector, when `x` is one.
### Usage
```
abIseq1(from = 1, to = 1)
abIseq (from = 1, to = 1, by = ((to - from)/(length.out - 1)),
length.out = NULL, along.with = NULL)
## S3 method for class 'abIndex'
c(...)
```
### Arguments
| | |
| --- | --- |
| `from, to` | the starting and (maximal) end value of the sequence. |
| `by` | number: increment of the sequence. |
| `length.out` | desired length of the sequence. A non-negative number, which for `seq` and `seq.int` will be rounded up if fractional. |
| `along.with` | take the length from the length of this argument. |
| `...` | in general an arbitrary number of **R** objects; here, when the first is an `"[abIndex](abindex-class)"` vector, these arguments will be concatenated to a new `"abIndex"` object. |
### Value
An abstract index vector, i.e., object of class `"[abIndex](abindex-class)"`.
### See Also
the class `[abIndex](abindex-class)` documentation; `[rep2abI](rep2abi)()` for another constructor; `[rle](../../base/html/rle)` (base).
### Examples
```
stopifnot(identical(-3:20,
as(abIseq1(-3,20), "vector")))
try( ## (arithmetic) not yet implemented
abIseq(1, 50, by = 3)
)
```
r None
`Diagonal` Create Diagonal Matrix Object
-----------------------------------------
### Description
Create a diagonal matrix object, i.e., an object inheriting from `[diagonalMatrix](diagonalmatrix-class)` (or a “standard” `[CsparseMatrix](csparsematrix-class)` diagonal matrix in cases where that is preferred).
### Usage
```
Diagonal(n, x = NULL)
.symDiagonal(n, x = rep.int(1,n), uplo = "U", kind)
.trDiagonal(n, x = 1, uplo = "U", unitri=TRUE, kind)
.sparseDiagonal(n, x = 1, uplo = "U",
shape = if(missing(cols)) "t" else "g",
unitri, kind, cols = if(n) 0:(n - 1L) else integer(0))
```
### Arguments
| | |
| --- | --- |
| `n` | integer specifying the dimension of the (square) matrix. If missing, `length(x)` is used. |
| `x` | numeric or logical; if missing, a *unit* diagonal *n x n* matrix is created. |
| `uplo` | for `.symDiagonal` (`.trDiagonal`), the resulting sparse `[symmetricMatrix](symmetricmatrix-class)` (or `[triangularMatrix](triangularmatrix-class)`) will have slot `uplo` set from this argument, either `"U"` or `"L"`. Only rarely will it make sense to change this from the default. |
| `shape` | string of 1 character, one of `c("t","s","g")`, to choose a triangular, symmetric or general result matrix. |
| `unitri` | optional logical indicating if a triangular result should be “unit-triangular”, i.e., with `diag = "U"` slot, if possible. The default, `[missing](../../base/html/missing)`, is the same as `[TRUE](../../base/html/logical)`. |
| `kind` | string of 1 character, one of `c("d","l","n")`, to choose the storage mode of the result, from classes `[dsparseMatrix](dsparsematrix-class)`, `[lsparseMatrix](lsparsematrix-classes)`, or `[nsparseMatrix](nsparsematrix-classes)`, respectively. |
| `cols` | integer vector with values from `0:(n-1)`, denoting the *columns* to subselect conceptually, i.e., get the equivalent of `Diagonal(n,*)[, cols + 1]`. |
### Value
`Diagonal()` returns an object of class `[ddiMatrix](ddimatrix-class)` or `[ldiMatrix](ldimatrix-class)` (with “superclass” `[diagonalMatrix](diagonalmatrix-class)`).
`.symDiagonal()` returns an object of class `[dsCMatrix](dscmatrix-class)` or `[lsCMatrix](lsparsematrix-classes)`, i.e., a *sparse* *symmetric* matrix. Analogously, `.trDiagonal` gives a sparse `[triangularMatrix](triangularmatrix-class)`. This can be more efficient than `Diagonal(n)` when the result is combined with further symmetric (sparse) matrices, e.g., in `[kronecker](../../base/html/kronecker)`, however *not* for matrix multiplications where `Diagonal()` is clearly preferred.
`.sparseDiagonal()`, the workhorse of `.symDiagonal` and `.trDiagonal` returns a `[CsparseMatrix](csparsematrix-class)` (the resulting class depending on `shape` and `kind`) representation of `Diagonal(n)`, or, when `cols` are specified, of `Diagonal(n)[, cols+1]`.
### Author(s)
Martin Maechler
### See Also
the generic function `[diag](../../base/html/diag)` for *extraction* of the diagonal from a matrix works for all “Matrices”.
`[bandSparse](bandsparse)` constructs a *banded* sparse matrix from its non-zero sub-/super - diagonals. `<band>(A)` returns a band matrix containing some sub-/super - diagonals of `A`.
`[Matrix](matrix)` for general matrix construction; further, class `[diagonalMatrix](diagonalmatrix-class)`.
### Examples
```
Diagonal(3)
Diagonal(x = 10^(3:1))
Diagonal(x = (1:4) >= 2)#-> "ldiMatrix"
## Use Diagonal() + kronecker() for "repeated-block" matrices:
M1 <- Matrix(0+0:5, 2,3)
(M <- kronecker(Diagonal(3), M1))
(S <- crossprod(Matrix(rbinom(60, size=1, prob=0.1), 10,6)))
(SI <- S + 10*.symDiagonal(6)) # sparse symmetric still
stopifnot(is(SI, "dsCMatrix"))
(I4 <- .sparseDiagonal(4, shape="t"))# now (2012-10) unitriangular
stopifnot(I4@diag == "U", all(I4 == diag(4)))
```
r None
`Xtrct-methods` Methods for "[": Extraction or Subsetting in Package 'Matrix'
------------------------------------------------------------------------------
### Description
Methods for `"["`, i.e., extraction or subsetting mostly of matrices, in package Matrix.
### Methods
There are more than these:
x = "Matrix", i = "missing", j = "missing", drop= "ANY"
...
x = "Matrix", i = "numeric", j = "missing", drop= "missing"
...
x = "Matrix", i = "missing", j = "numeric", drop= "missing"
...
x = "dsparseMatrix", i = "missing", j = "numeric", drop= "logical"
...
x = "dsparseMatrix", i = "numeric", j = "missing", drop= "logical"
...
x = "dsparseMatrix", i = "numeric", j = "numeric", drop= "logical"
...
### See Also
`[[<--methods](subassign-methods)` for sub*assign*ment to `"Matrix"` objects. `[Extract](../../base/html/extract)` about the standard extraction.
### Examples
```
str(m <- Matrix(round(rnorm(7*4),2), nrow = 7))
stopifnot(identical(m, m[]))
m[2, 3] # simple number
m[2, 3:4] # simple numeric of length 2
m[2, 3:4, drop=FALSE] # sub matrix of class 'dgeMatrix'
## rows or columns only:
m[1,] # first row, as simple numeric vector
m[,1:2] # sub matrix of first two columns
showMethods("[", inherited = FALSE)
```
r None
`denseMatrix-class` Virtual Class "denseMatrix" of All Dense Matrices
----------------------------------------------------------------------
### Description
This is the virtual class of all dense (S4) matrices. It is the direct superclass of `[ddenseMatrix](ddensematrix-class)`, `[ldenseMatrix](ldensematrix-class)`
### Extends
class `"Matrix"` directly.
### Slots
exactly those of its superclass `"[Matrix](matrix-class)"`.
### Methods
Use `[showMethods](../../methods/html/showmethods)(class = "denseMatrix", where =
"package:Matrix")` for an overview of methods.
Extraction (`"["`) methods, see `[[-methods](xtrct-methods)`.
### See Also
`[colSums](colsums)`, `[kronecker](../../base/html/kronecker)`, and other such methods with own help pages.
Its superclass `[Matrix](matrix-class)`, and main subclasses, `[ddenseMatrix](ddensematrix-class)` and `[sparseMatrix](sparsematrix-class)`.
### Examples
```
showClass("denseMatrix")
```
r None
`graph2T` Conversions "graph" <-> (sparse) Matrix
--------------------------------------------------
### Description
The Matrix package has supported conversion from and to `"[graph](../../graph/html/graph-class)"` objects from (Bioconductor) package graph since summer 2005, via the usual `[as](../../methods/html/as)(., "<class>")` coercion,
```
as(from, Class)
```
Since 2013, this functionality is further exposed as the `graph2T()` and `T2graph()` functions (with further arguments than just `from`), which convert graphs to and from the triplet form of sparse matrices (of class `"[TsparseMatrix](tsparsematrix-class)"`) .
### Usage
```
graph2T(from, use.weights = )
T2graph(from, need.uniq = is_not_uniqT(from), edgemode = NULL)
```
### Arguments
| | |
| --- | --- |
| `from` | for `graph2T()`, an **R** object of class `"graph"`; for `T2graph()`, a sparse matrix inheriting from `"[TsparseMatrix](tsparsematrix-class)"`. |
| `use.weights` | logical indicating if weights should be used, i.e., equivalently the result will be numeric, i.e. of class `[dgTMatrix](dgtmatrix-class)`; otherwise the result will be `[ngTMatrix](nsparsematrix-classes)` or `[nsTMatrix](nsparsematrix-classes)`, the latter if the graph is undirected. The default looks if there are weights in the graph, and if any differ from `1`, weights are used. |
| `need.uniq` | a logical indicating if `from` may need to be internally “uniqified”; do not set this and hence rather use the default, unless you know what you are doing! |
| `edgemode` | one of `NULL`, `"directed"`, or `"undirected"`. The default `NULL` looks if the matrix is symmetric and assumes `"undirected"` in that case. |
### Value
For `graph2T()`, a sparse matrix inheriting from `"[TsparseMatrix](tsparsematrix-class)"`.
For `T2graph()` an **R** object of class `"graph"`.
### See Also
Note that the CRAN package igraph also provides conversions from and to sparse matrices (of package Matrix) via its `[graph.adjacency](../../igraph/html/graph.adjacency)()` and `[get.adjacency](../../igraph/html/get.adjacency)()`.
### Examples
```
if(isTRUE(try(require(graph)))) { ## super careful .. for "checking reasons"
n4 <- LETTERS[1:4]; dns <- list(n4,n4)
show(a1 <- sparseMatrix(i= c(1:4), j=c(2:4,1), x = 2, dimnames=dns))
show(g1 <- as(a1, "graph")) # directed
unlist(edgeWeights(g1)) # all '2'
show(a2 <- sparseMatrix(i= c(1:4,4), j=c(2:4,1:2), x = TRUE, dimnames=dns))
show(g2 <- as(a2, "graph")) # directed
# now if you want it undirected:
show(g3 <- T2graph(as(a2,"TsparseMatrix"), edgemode="undirected"))
show(m3 <- as(g3,"Matrix"))
show( graph2T(g3) ) # a "pattern Matrix" (nsTMatrix)
a. <- sparseMatrix(i= 4:1, j=1:4, dimnames=list(n4,n4), giveC=FALSE) # no 'x'
show(a.) # "ngTMatrix"
show(g. <- as(a., "graph"))
}
```
r None
`MatrixFactorization-class` Class "MatrixFactorization" of Matrix Factorizations
---------------------------------------------------------------------------------
### Description
The class `"MatrixFactorization"` is the virtual (super) class of (potentially) all matrix factorizations of matrices from package Matrix.
The class `"CholeskyFactorization"` is the virtual class of all Cholesky decompositions from Matrix (and trivial sub class of `"MatrixFactorization"`).
### Objects from the Class
A virtual Class: No objects may be created from it.
### Slots
`Dim`:
Object of class `"integer"` - the dimensions of the original matrix - must be an integer vector with exactly two non-negative values.
### Methods
dim
`(x)` simply returns `x@Dim`, see above.
expand
`signature(x = "MatrixFactorization")`: this has not been implemented yet for all matrix factorizations. It should return a list whose components are matrices which when multiplied return the original `[Matrix](matrix-class)` object.
show
`signature(object = "MatrixFactorization")`: simple printing, see `[show](../../methods/html/show)`.
solve
`signature(a = "MatrixFactorization", b= .)`: solve *A x = b* for *x*; see `<solve-methods>`.
### See Also
classes inheriting from `"MatrixFactorization"`, such as `[LU](lu-class)`, `[Cholesky](cholesky-class)`, `[CHMfactor](chmfactor-class)`, and `[sparseQR](sparseqr-class)`.
### Examples
```
showClass("MatrixFactorization")
getClass("CholeskyFactorization")
```
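A small sketch (assuming Matrix is attached) of the `dim` and `expand` methods on a concrete factorization, here a dense LU decomposition:

```r
library(Matrix)
A <- Matrix(c(2, 1, 0, 3, 1, 0, 0, 0, 4), 3, 3)  # no special structure
f <- lu(A)                     # a dense LU factorization
is(f, "MatrixFactorization")   # TRUE
dim(f)                         # the Dim slot of the original matrix
str(expand(f))                 # list of factor matrices (L, U, permutation)
```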
r None
`Matrix` Construct a Classed Matrix
------------------------------------
### Description
Construct a Matrix of a class that inherits from `Matrix`.
### Usage
```
Matrix(data=NA, nrow=1, ncol=1, byrow=FALSE, dimnames=NULL,
sparse = NULL, doDiag = TRUE, forceCheck = FALSE)
```
### Arguments
| | |
| --- | --- |
| `data` | an optional numeric data vector or matrix. |
| `nrow` | when `data` is not a matrix, the desired number of rows |
| `ncol` | when `data` is not a matrix, the desired number of columns |
| `byrow` | logical. If `FALSE` (the default) the matrix is filled by columns, otherwise the matrix is filled by rows. |
| `dimnames` | a `[dimnames](../../base/html/dimnames)` attribute for the matrix: a `list` of two character components. They are set if not `[NULL](../../base/html/null)` (as per default). |
| `sparse` | logical or `NULL`, specifying if the result should be sparse or not. By default, it is made sparse when more than half of the entries are 0. |
| `doDiag` | logical indicating if a `[diagonalMatrix](diagonalmatrix-class)` object should be returned when the resulting matrix is diagonal (*mathematically*). As class `[diagonalMatrix](diagonalmatrix-class)` `[extends](../../methods/html/is)` `[sparseMatrix](sparsematrix-class)`, this is a natural default for all values of `sparse`. Otherwise, if `doDiag` is false, a dense or sparse (depending on `sparse`) *symmetric* matrix will be returned. |
| `forceCheck` | logical indicating if the checks for structure should even happen when `data` is already a `"Matrix"` object. |
### Details
If either of `nrow` or `ncol` is not given, an attempt is made to infer it from the length of `data` and the other parameter. Further, `Matrix()` makes efforts to keep `[logical](../../base/html/logical)` matrices logical, i.e., inheriting from class `[lMatrix](dmatrix-class)`, and to determine specially structured matrices such as symmetric, triangular or diagonal ones. Note that a *symmetric* matrix also needs symmetric `[dimnames](../../base/html/dimnames)`, e.g., by specifying `dimnames = list(NULL,NULL)`, see the examples.
Most of the time, the function works via a traditional (*full*) `[matrix](../../base/html/matrix)`. However, `Matrix(0, nrow,ncol)` directly constructs an “empty” [sparseMatrix](sparsematrix-class), as does `Matrix(FALSE, *)`.
Although it is sometimes possible to mix unclassed matrices (created with `matrix`) with ones of class `"Matrix"`, it is much safer to always use carefully constructed ones of class `"Matrix"`.
### Value
Returns a matrix of a class that inherits from `"Matrix"`. Only if `data` is not a `[matrix](../../base/html/matrix)` and does not already inherit from class `[Matrix](matrix-class)` are the arguments `nrow`, `ncol` and `byrow` made use of.
### See Also
The classes `[Matrix](matrix-class)`, `[symmetricMatrix](symmetricmatrix-class)`, `[triangularMatrix](triangularmatrix-class)`, and `[diagonalMatrix](diagonalmatrix-class)`; further, `[matrix](../../base/html/matrix)`.
Special matrices can be constructed, e.g., via `[sparseMatrix](sparsematrix)` (sparse), `<bdiag>` (block-diagonal), `[bandSparse](bandsparse)` (banded sparse), or `[Diagonal](diagonal)`.
### Examples
```
Matrix(0, 3, 2) # 3 by 2 matrix of zeros -> sparse
Matrix(0, 3, 2, sparse=FALSE)# -> 'dense'
## 4 cases - 3 different results :
Matrix(0, 2, 2) # diagonal !
Matrix(0, 2, 2, sparse=FALSE)# (ditto)
Matrix(0, 2, 2, doDiag=FALSE)# -> sparse symm. "dsCMatrix"
Matrix(0, 2, 2, sparse=FALSE, doDiag=FALSE)# -> dense symm. "dsyMatrix"
Matrix(1:6, 3, 2) # a 3 by 2 matrix (+ integer warning)
Matrix(1:6 + 1, nrow=3)
## logical ones:
Matrix(diag(4) > 0) # -> "ldiMatrix" with diag = "U"
Matrix(diag(4) > 0, sparse=TRUE) # (ditto)
Matrix(diag(4) >= 0) # -> "lsyMatrix" (of all 'TRUE')
## triangular
l3 <- upper.tri(matrix(,3,3))
(M <- Matrix(l3)) # -> "ltCMatrix"
Matrix(! l3) # -> "ltrMatrix"
as(l3, "CsparseMatrix")# "lgCMatrix"
Matrix(1:9, nrow=3,
dimnames = list(c("a", "b", "c"), c("A", "B", "C")))
(I3 <- Matrix(diag(3)))# identity, i.e., unit "diagonalMatrix"
str(I3) # note 'diag = "U"' and the empty 'x' slot
(A <- cbind(a=c(2,1), b=1:2))# symmetric *apart* from dimnames
Matrix(A) # hence 'dgeMatrix'
(As <- Matrix(A, dimnames = list(NULL,NULL)))# -> symmetric
forceSymmetric(A) # also symmetric, w/ symm. dimnames
stopifnot(is(As, "symmetricMatrix"),
is(Matrix(0, 3,3), "sparseMatrix"),
is(Matrix(FALSE, 1,1), "sparseMatrix"))
```
r None
`ldenseMatrix-class` Virtual Class "ldenseMatrix" of Dense Logical Matrices
----------------------------------------------------------------------------
### Description
`ldenseMatrix` is the virtual class of all dense **l**ogical (S4) matrices. It extends both `[denseMatrix](densematrix-class)` and `[lMatrix](dmatrix-class)` directly.
### Slots
`x`:
logical vector containing the entries of the matrix.
`Dim`, `Dimnames`:
see `[Matrix](matrix-class)`.
### Extends
Class `"lMatrix"`, directly. Class `"denseMatrix"`, directly. Class `"Matrix"`, by class `"lMatrix"`. Class `"Matrix"`, by class `"denseMatrix"`.
### Methods
coerce
`signature(from = "matrix", to = "ldenseMatrix")`: ...
coerce
`signature(from = "ldenseMatrix", to = "matrix")`: ...
as.vector
`signature(x = "ldenseMatrix", mode = "missing")`: ...
which
`signature(x = "ndenseMatrix")`, semantically equivalent to base function `[which](../../base/html/which)(x, arr.ind)`; for details, see the `[lMatrix](dmatrix-class)` class documentation.
### See Also
Class `[lgeMatrix](lgematrix-class)` and the other subclasses.
### Examples
```
showClass("ldenseMatrix")
as(diag(3) > 0, "ldenseMatrix")
```
r None
`generalMatrix-class` Class "generalMatrix" of General Matrices
----------------------------------------------------------------
### Description
Virtual class of “general” matrices; i.e., matrices that do not have a known property such as symmetric, triangular, or diagonal.
### Objects from the Class
A virtual Class: No objects may be created from it.
### Slots
`factors`
,
`Dim`
,
`Dimnames`:
all slots inherited from `[compMatrix](compmatrix-class)`; see its description.
### Extends
Class `"compMatrix"`, directly. Class `"Matrix"`, by class `"compMatrix"`.
### See Also
Classes `[compMatrix](compmatrix-class)`, and the non-general virtual classes: `[symmetricMatrix](symmetricmatrix-class)`, `[triangularMatrix](triangularmatrix-class)`, `[diagonalMatrix](diagonalmatrix-class)`.
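As a sketch of the distinction (assuming Matrix is attached): `Matrix()` classifies its result, so only matrices without detected structure end up as `generalMatrix` objects.

```r
library(Matrix)
M <- Matrix(1:6 + 0, 2, 3)    # no special structure -> "dgeMatrix"
is(M, "generalMatrix")        # TRUE
S <- Matrix(c(2, 1, 1, 2), 2, 2, dimnames = list(NULL, NULL))
is(S, "generalMatrix")        # FALSE: detected as symmetric
```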
r None
`dMatrix-class` (Virtual) Class "dMatrix" of "double" Matrices
---------------------------------------------------------------
### Description
The `dMatrix` class is a virtual class contained by all actual classes of numeric matrices in the Matrix package. Similarly, all the actual classes of logical matrices inherit from the `lMatrix` class.
### Slots
Common to *all* matrix objects in the package:
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
`Dimnames`:
list of length two; each component containing NULL or a `[character](../../base/html/character)` vector length equal the corresponding `Dim` element.
### Methods
There are (relatively simple) group methods (see, e.g., `[Arith](../../methods/html/s4groupgeneric)`)
Arith
`signature(e1 = "dMatrix", e2 = "dMatrix")`: ...
Arith
`signature(e1 = "dMatrix", e2 = "numeric")`: ...
Arith
`signature(e1 = "numeric", e2 = "dMatrix")`: ...
Math
`signature(x = "dMatrix")`: ...
Math2
`signature(x = "dMatrix", digits = "numeric")`: this group contains `[round](../../base/html/round)()` and `[signif](../../base/html/round)()`.
Compare
`signature(e1 = "numeric", e2 = "dMatrix")`: ...
Compare
`signature(e1 = "dMatrix", e2 = "numeric")`: ...
Compare
`signature(e1 = "dMatrix", e2 = "dMatrix")`: ...
Summary
`signature(x = "dMatrix")`: The `"Summary"` group contains the seven functions `[max](../../base/html/extremes)()`, `[min](../../base/html/extremes)()`, `[range](../../base/html/range)()`, `[prod](../../base/html/prod)()`, `[sum](../../base/html/sum)()`, `[any](../../base/html/any)()`, and `[all](../../base/html/all)()`.
The following methods are also defined for all double matrices:
coerce
`signature(from = "dMatrix", to = "matrix")`: ...
expm
`signature(x = "dMatrix")`: computes the *“Matrix Exponential”*, see `<expm>`.
zapsmall
`signature(x = "dMatrix")`: ...
The following methods are defined for all logical matrices:
which
`signature(x = "lsparseMatrix")` and many other subclasses of `"lMatrix"`: as the base function `[which](../../base/html/which)(x, arr.ind)` returns the indices of the `[TRUE](../../base/html/logical)` entries in `x`; if `arr.ind` is true, as a 2-column matrix of row and column indices. Since Matrix version 1.2-9, if `useNames` is true, as by default, with `[dimnames](../../base/html/dimnames)`, the same as `base::which`.
### See Also
The nonzero-pattern matrix class `[nMatrix](nmatrix-class)`, which can be used to store non-`[NA](../../base/html/na)` `[logical](../../base/html/logical)` matrices even more compactly.
The numeric matrix classes `[dgeMatrix](dgematrix-class)`, `[dgCMatrix](dgcmatrix-class)`, and `[Matrix](matrix-class)`.
`<drop0>(x, tol=1e-10)` is sometimes preferable to (and more efficient than) `zapsmall(x, digits=10)`.
### Examples
```
showClass("dMatrix")
set.seed(101)
round(Matrix(rnorm(28), 4,7), 2)
M <- Matrix(rlnorm(56, sd=10), 4,14)
(M. <- zapsmall(M))
table(as.logical(M. == 0))
```
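Following the `drop0()` note above, a small comparison sketch (assuming Matrix is attached): `drop0()` removes a near-zero entry from the sparse structure itself, whereas `zapsmall()` only rounds values.

```r
library(Matrix)
x <- Matrix(c(1, 1e-12, 3, 2), 2, 2, sparse = TRUE)  # one tiny entry
length(x@x)                     # 4 stored entries
x. <- drop0(x, tol = 1e-10)     # drops the tiny entry *structurally*
length(x.@x)                    # 3 stored entries
zapsmall(x, digits = 10)        # rounds it to 0, but may keep it stored
```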
| programming_docs |
r None
`symmpart` Symmetric Part and Skew(symmetric) Part of a Matrix
---------------------------------------------------------------
### Description
`symmpart(x)` computes the symmetric part `(x + t(x))/2` and `skewpart(x)` the skew symmetric part `(x - t(x))/2` of a square matrix `x`, more efficiently for specific Matrix classes.
Note that `x == symmpart(x) + skewpart(x)` for all square matrices – apart from extraneous `[NA](../../base/html/na)` values in the RHS.
### Usage
```
symmpart(x)
skewpart(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a *square* matrix; either “traditional” of class `"matrix"`, or typically, inheriting from the `[Matrix](matrix-class)` class. |
### Details
These are generic functions with several methods for different matrix classes, use e.g., `[showMethods](../../methods/html/showmethods)(symmpart)` to see them.
If the row and column names differ, the result will use the column names unless they are (partly) `NULL` where the row names are non-`NULL` (see also the examples).
### Value
`symmpart()` returns a symmetric matrix, inheriting from `[symmetricMatrix](symmetricmatrix-class)` iff `x` inherited from `Matrix`.
`skewpart()` returns a skew-symmetric matrix, typically of the same class as `x` (or the closest “general” one, see `[generalMatrix](generalmatrix-class)`).
### See Also
`[isSymmetric](../../base/html/issymmetric)`.
### Examples
```
m <- Matrix(1:4, 2,2)
symmpart(m)
skewpart(m)
stopifnot(all(m == symmpart(m) + skewpart(m)))
dn <- dimnames(m) <- list(row = c("r1", "r2"), col = c("var.1", "var.2"))
stopifnot(all(m == symmpart(m) + skewpart(m)))
colnames(m) <- NULL
stopifnot(all(m == symmpart(m) + skewpart(m)))
dimnames(m) <- unname(dn)
stopifnot(all(m == symmpart(m) + skewpart(m)))
## investigate the current methods:
showMethods(skewpart, include = TRUE)
```
r None
`triangularMatrix-class` Virtual Class of Triangular Matrices in Package Matrix
--------------------------------------------------------------------------------
### Description
The virtual class of triangular matrices, `"triangularMatrix"`, in package Matrix contains *square* (`[nrow](../../base/html/nrow) == [ncol](../../base/html/nrow)`) numeric and logical, dense and sparse matrices, e.g., see the examples. A main use of the virtual class is in methods (and C functions) that can deal with all triangular matrices.
### Slots
`uplo`:
String (of class `"character"`). Must be either `"U"`, for upper triangular, or `"L"`, for lower triangular.
`diag`:
String (of class `"character"`). Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"` for non-unit. The diagonal elements are not accessed internally when `diag` is `"U"`. For `[denseMatrix](densematrix-class)` classes, they need to be allocated though, i.e., the length of the `x` slot does not depend on `diag`.
`Dim`, `Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), inherited from the `[Matrix](matrix-class)`, see there.
### Extends
Class `"Matrix"`, directly.
### Methods
There's a C function `triangularMatrix_validity()` called by the internal validity checking functions.
Currently, `[Schur](schur)`, `[isSymmetric](../../base/html/issymmetric)` and `as()` (i.e. `[coerce](../../methods/html/setas)`) have methods with `triangularMatrix` in their signature.
### See Also
`[isTriangular](istriangular)()` for testing any matrix for triangularity; classes `[symmetricMatrix](symmetricmatrix-class)`, and, e.g., `[dtrMatrix](dtrmatrix-class)` for numeric *dense* matrices, or `[ltCMatrix](lsparsematrix-classes)` for a logical *sparse* matrix subclass of `"triangularMatrix"`.
### Examples
```
showClass("triangularMatrix")
## The names of direct subclasses:
scl <- getClass("triangularMatrix")@subclasses
directly <- sapply(lapply(scl, slot, "by"), length) == 0
names(scl)[directly]
(m <- matrix(c(5,1,0,3), 2))
as(m, "triangularMatrix")
```
r None
`wrld_1deg` World 1-degree grid contiguity matrix
--------------------------------------------------
### Description
This matrix represents the distance-based contiguities of 15260 one-degree grid cells of land areas. The representation is as a row standardised spatial weights matrix transformed to a symmetric matrix (see Ord (1975), p. 125).
### Usage
```
data(wrld_1deg)
```
### Format
A *15260 × 15260* symmetric sparse matrix of class `[dsCMatrix](dscmatrix-class)` with 55973 non-zero entries.
### Details
The data were created in **R** using the coordinates of a ‘SpatialPixels’ object containing approximately one-degree grid cells for land areas only (world excluding Antarctica), using package [spdep](https://CRAN.R-project.org/package=spdep)'s `[dnearneigh](../../spdep/html/dnearneigh)` with a cutoff distance of `sqrt(2)`, and row-standardised and transformed to symmetry using `[nb2listw](../../spdep/html/nb2listw)` and `[similar.listw](../../spdep/html/similar.listw)`. This spatial weights object was converted to a `[dsTMatrix](dscmatrix-class)` using `[as\_dsTMatrix\_listw](../../spdep/html/as_dstmatrix_listw)` and then coerced (column-compressed).
### Source
The shoreline data was read into **R** using `[Rgshhs](../../maptools/html/rgshhs)` from the GSHHS coarse shoreline database distributed with the [maptools](https://CRAN.R-project.org/package=maptools) package, omitting Antarctica. A matching approximately one-degree grid was generated using `[Sobj\_SpatialGrid](../../maptools/html/sobj_spatialgrid)`, and the grids on land were found using the appropriate `[over](../../sp/html/over)` method for the ‘SpatialPolygons’ and ‘SpatialGrid’ objects, yielding a ‘SpatialPixels’ one containing only the grid cells with centres on land.
### References
Ord, J. K. (1975) Estimation methods for models of spatial interaction; *Journal of the American Statistical Association* **70**, 120–126.
### Examples
```
data(wrld_1deg)
(n <- ncol(wrld_1deg))
IM <- .symDiagonal(n)
doExtras <- interactive() || nzchar(Sys.getenv("R_MATRIX_CHECK_EXTRA")) ||
identical("true", unname(Sys.getenv("R_PKG_CHECKING_doExtras")))
nn <- if(doExtras) 20 else 3
set.seed(1)
rho <- runif(nn, 0, 1)
system.time(MJ <- sapply(rho,
function(x) determinant(IM - x * wrld_1deg,
logarithm = TRUE)$modulus))
nWC <- -wrld_1deg
C1 <- Cholesky(nWC, Imult = 2)
## Note that det(<CHMfactor>) = det(L) = sqrt(det(A))
## ====> log det(A) = log( det(L)^2 ) = 2 * log det(L) :
system.time(MJ1 <- n * log(rho) +
sapply(rho, function(x) c(2* determinant(update(C1, nWC, 1/x))$modulus))
)
stopifnot(all.equal(MJ, MJ1))
C2 <- Cholesky(nWC, super = TRUE, Imult = 2)
system.time(MJ2 <- n * log(rho) +
sapply(rho, function(x) c(2* determinant(update(C2, nWC, 1/x))$modulus))
)
system.time(MJ3 <- n * log(rho) + Matrix:::ldetL2up(C1, nWC, 1/rho))
system.time(MJ4 <- n * log(rho) + Matrix:::ldetL2up(C2, nWC, 1/rho))
stopifnot(all.equal(MJ, MJ2),
all.equal(MJ, MJ3),
all.equal(MJ, MJ4))
```
r None
`pMatrix-class` Permutation matrices
-------------------------------------
### Description
The `"pMatrix"` class is the class of permutation matrices, stored as 1-based integer permutation vectors.
Matrix (vector) multiplication with permutation matrices is equivalent to row or column permutation, and is implemented that way in the Matrix package, see the ‘Details’ below.
### Details
Matrix multiplication with permutation matrices is equivalent to row or column permutation. Here are the four different cases for an arbitrary matrix *M* and a permutation matrix *P* (where we assume matching dimensions):
| | | | | |
| --- | --- | --- | --- | --- |
| *MP* | = | `M %*% P` | = | `M[, i(p)]` |
| *PM* | = | `P %*% M` | = | `M[ p , ]` |
| *P'M* | = | `crossprod(P,M)` (*~=*`t(P) %*% M`) | = | `M[i(p), ]` |
| *MP'* | = | `tcrossprod(M,P)` (*~=*`M %*% t(P)`) | = | `M[ , p ]` |
| |
where `p` is the “permutation vector” corresponding to the permutation matrix `P` (see first note), and `i(p)` is short for `[invPerm](invperm)(p)`.
Also one could argue that these are really only two cases if you take into account that inversion (`[solve](solve-methods)`) and transposition (`[t](../../base/html/t)`) are the same for permutation matrices *P*.
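The four identities in the table above can be checked directly; here is a minimal sketch (assuming the Matrix package is attached, with an arbitrary small example):

```
library(Matrix)
p <- c(2, 3, 1)             # a permutation of 1:3
P <- as(p, "pMatrix")       # the corresponding permutation matrix
M <- matrix(1:9, 3, 3)
stopifnot(all(P %*% M           == M[p, ]),           # PM
          all(M %*% P           == M[, invPerm(p)]),  # MP
          all(crossprod(P, M)   == M[invPerm(p), ]),  # P'M
          all(tcrossprod(M, P)  == M[, p]))           # MP'
```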
### Objects from the Class
Objects can be created by calls of the form `new("pMatrix", ...)` or by coercion from an integer permutation vector, see below.
### Slots
`perm`:
An integer, 1-based permutation vector, i.e. an integer vector of length `Dim[1]` whose elements form a permutation of `1:Dim[1]`.
`Dim`:
Object of class `"integer"`. The dimensions of the matrix which must be a two-element vector of equal, non-negative integers.
`Dimnames`:
list of length two; each component containing NULL or a `[character](../../base/html/character)` vector length equal the corresponding `Dim` element.
### Extends
Class `"[indMatrix](indmatrix-class)"`, directly.
### Methods
%\*%
`signature(x = "matrix", y = "pMatrix")` and other signatures (use `showMethods("%*%", class="pMatrix")`): ...
coerce
`signature(from = "integer", to = "pMatrix")`: This is enables typical `"pMatrix"` construction, given a permutation vector of `1:n`, see the first example.
coerce
`signature(from = "numeric", to = "pMatrix")`: a user convenience, to allow `as(perm, "pMatrix")` for numeric `perm` with integer values.
coerce
`signature(from = "pMatrix", to = "matrix")`: coercion to a traditional FALSE/TRUE `[matrix](../../base/html/matrix)` of `[mode](../../base/html/mode)` `logical`. (in earlier version of Matrix, it resulted in a 0/1-integer matrix; `logical` makes slightly more sense, corresponding better to the “natural” sparseMatrix counterpart, `"ngTMatrix"`.)
coerce
`signature(from = "pMatrix", to = "ngTMatrix")`: coercion to sparse logical matrix of class `[ngTMatrix](nsparsematrix-classes)`.
determinant
`signature(x = "pMatrix", logarithm="logical")`: Since permutation matrices are orthogonal, the determinant must be +1 or -1. In fact, it is exactly the *sign of the permutation*.
solve
`signature(a = "pMatrix", b = "missing")`: return the inverse permutation matrix; note that `solve(P)` is identical to `t(P)` for permutation matrices. See `<solve-methods>` for other methods.
t
`signature(x = "pMatrix")`: return the transpose of the permutation matrix (which is also the inverse of the permutation matrix).
### Note
For every permutation matrix `P`, there is a corresponding permutation vector `p` (of indices, 1:n), and these are related by
```
P <- as(p, "pMatrix")
p <- P@perm
```
see also the ‘Examples’.
“Row-indexing” a permutation matrix typically returns an `"indMatrix"`. See `"[indMatrix](indmatrix-class)"` for all other subsetting/indexing and subassignment (`A[..] <- v`) operations.
### See Also
`[invPerm](invperm)(p)` computes the inverse permutation of an integer (index) vector `p`.
### Examples
```
(pm1 <- as(as.integer(c(2,3,1)), "pMatrix"))
t(pm1) # is the same as
solve(pm1)
pm1 %*% t(pm1) # check that the transpose is the inverse
stopifnot(all(diag(3) == as(pm1 %*% t(pm1), "matrix")),
is.logical(as(pm1, "matrix")))
set.seed(11)
## random permutation matrix :
(p10 <- as(sample(10),"pMatrix"))
## Permute rows / columns of a numeric matrix :
(mm <- round(array(rnorm(3 * 3), c(3, 3)), 2))
mm %*% pm1
pm1 %*% mm
try(as(as.integer(c(3,3,1)), "pMatrix"))# Error: not a permutation
as(pm1, "ngTMatrix")
p10[1:7, 1:4] # gives an "ngTMatrix" (most economic!)
## row-indexing of a <pMatrix> keeps it as an <indMatrix>:
p10[1:3, ]
```
r None
`sparseMatrix` General Sparse Matrix Construction from Nonzero Entries
-----------------------------------------------------------------------
### Description
User friendly construction of a compressed, column-oriented, sparse matrix, inheriting from `[class](../../base/html/class)` `[CsparseMatrix](csparsematrix-class)` (or `[TsparseMatrix](tsparsematrix-class)` if `giveCsparse` is false), from locations (and values) of its non-zero entries.
This is the recommended user interface rather than direct `[new](../../methods/html/new)("***Matrix", ....)` calls.
### Usage
```
sparseMatrix(i = ep, j = ep, p, x, dims, dimnames,
symmetric = FALSE, triangular = FALSE, index1 = TRUE,
repr = "C", giveCsparse = (repr == "C"),
check = TRUE, use.last.ij = FALSE)
```
### Arguments
| | |
| --- | --- |
| `i,j` | integer vectors of the same length specifying the locations (row and column indices) of the non-zero (or non-`TRUE`) entries of the matrix. Note that for *repeated* pairs *(i\_k,j\_k)*, when `x` is not missing, the corresponding *x\_k* are *added*, in consistency with the definition of the `"[TsparseMatrix](tsparsematrix-class)"` class, unless `use.last.ij` is true, in which case only the *last* of the corresponding *(i\_k, j\_k, x\_k)* triplet is used. |
| `p` | numeric (integer valued) vector of pointers, one for each column (or row), to the initial (zero-based) index of elements in the column (or row). Exactly one of `i`, `j` or `p` must be missing. |
| `x` | optional values of the matrix entries. If specified, must be of the same length as `i` / `j`, or of length one where it will be recycled to full length. If missing, the resulting matrix will be a 0/1 patter**n** matrix, i.e., extending class `[nsparseMatrix](nsparsematrix-classes)`. |
| `dims` | optional, non-negative, integer, dimensions vector of length 2. Defaults to `c(max(i), max(j))`. |
| `dimnames` | optional list of `[dimnames](../../base/html/dimnames)`; if not specified, none, i.e., `[NULL](../../base/html/null)` ones, are used. |
| `symmetric` | logical indicating if the resulting matrix should be symmetric. In that case, only the lower or upper triangle needs to be specified via *(i/j/p)*. |
| `triangular` | logical indicating if the resulting matrix should be triangular. In that case, the lower or upper triangle needs to be specified via *(i/j/p)*. |
| `index1` | logical scalar. If `TRUE`, the default, the index vectors `i` and/or `j` are 1-based, as is the convention in **R**. That is, counting of rows and columns starts at 1. If `FALSE` the index vectors are 0-based so counting of rows and columns starts at 0; this corresponds to the internal representation. |
| `repr` | `[character](../../base/html/character)` string, one of `"C"`, `"T"`, or `"R"`, specifying the sparse *repr*esentation to be used for the result, i.e., one from the super classes `[CsparseMatrix](csparsematrix-class)`, `[TsparseMatrix](tsparsematrix-class)`, or `[RsparseMatrix](rsparsematrix-class)`. |
| `giveCsparse` | (**deprecated**, replaced with `repr`): logical indicating if the result should be a `[CsparseMatrix](csparsematrix-class)` or a `[TsparseMatrix](tsparsematrix-class)`, where the default was `TRUE`, and now is determined from `repr`; very often Csparse matrices are more efficient subsequently, but not always. |
| `check` | logical indicating if a validity check is performed; do not set to `FALSE` unless you know what you're doing! |
| `use.last.ij` | logical indicating if in the case of repeated, i.e., duplicated pairs *(i\_k, j\_k)* only the last one should be used. The default, `FALSE`, corresponds to the `"[TsparseMatrix](tsparsematrix-class)"` definition. |
### Details
Exactly one of the arguments `i`, `j` and `p` must be missing.
In typical usage, `p` is missing, `i` and `j` are vectors of positive integers and `x` is a numeric vector. These three vectors, which must have the same length, form the triplet representation of the sparse matrix.
If `i` or `j` is missing then `p` must be a non-decreasing integer vector whose first element is zero. It provides the compressed, or “pointer”, representation of the row or column indices, whichever is missing. The expanded form of `p`, `rep(seq_along(dp), dp)` where `dp <- diff(p)`, is used as the (1-based) row or column indices.
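The expansion of the pointer vector described above can be sketched in a few lines (the pointer values here are hypothetical):

```
## pointers for a matrix with 3 columns: 2 entries in column 1,
## none in column 2, and 3 in column 3
p  <- c(0L, 2L, 2L, 5L)
dp <- diff(p)                  # nonzeros per column: 2 0 3
j  <- rep(seq_along(dp), dp)   # expanded (1-based) column indices
stopifnot(identical(j, c(1L, 1L, 3L, 3L, 3L)))
```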
You cannot set both `symmetric` and `triangular` to true; rather use `[Diagonal](diagonal)()` (or its alternatives, see there).
The values of `i`, `j`, `p` and `index1` are used to create 1-based index vectors `i` and `j` from which a `[TsparseMatrix](tsparsematrix-class)` is constructed, with numerical values given by `x`, if non-missing. Note that in that case, when some pairs *(i\_k,j\_k)* are repeated (aka “duplicated”), the corresponding *x\_k* are *added*, in consistency with the definition of the `"[TsparseMatrix](tsparsematrix-class)"` class, unless `use.last.ij` is set to true. By default, when `repr = "C"`, the `[CsparseMatrix](csparsematrix-class)` derived from this triplet form is returned, where `repr = "R"` now allows to directly get an `[RsparseMatrix](rsparsematrix-class)` and `repr = "T"` leaves the result as `[TsparseMatrix](tsparsematrix-class)`.
The reason for returning a `[CsparseMatrix](csparsematrix-class)` object instead of the triplet format by default is that the compressed column form is easier to work with when performing matrix operations. In particular, if there are no zeros in `x` then a `[CsparseMatrix](csparsematrix-class)` is a unique representation of the sparse matrix.
### Value
A sparse matrix, by default (from `repr = "C"`) in compressed, column-oriented form, as an **R** object inheriting from both `[CsparseMatrix](csparsematrix-class)` and `[generalMatrix](generalmatrix-class)`.
### Note
You *do* need to use `index1 = FALSE` (or add `+ 1` to `i` and `j`) if you want to use the 0-based `i` (and `j`) slots from existing sparse matrices.
### See Also
`[Matrix](matrix)(*, sparse=TRUE)` for the constructor of such matrices from a *dense* matrix. That is easier in small sample, but much less efficient (or impossible) for large matrices, where something like `sparseMatrix()` is needed. Further `<bdiag>` and `[Diagonal](diagonal)` for (block-)diagonal and `[bandSparse](bandsparse)` for banded sparse matrix constructors.
Random sparse matrices via `<rsparsematrix>()`.
The standard **R** `[xtabs](../../stats/html/xtabs)(*, sparse=TRUE)`, for sparse tables and `<sparse.model.matrix>()` for building sparse model matrices.
Consider `[CsparseMatrix](csparsematrix-class)` and similar class definition help files.
### Examples
```
## simple example
i <- c(1,3:8); j <- c(2,9,6:10); x <- 7 * (1:7)
(A <- sparseMatrix(i, j, x = x)) ## 8 x 10 "dgCMatrix"
summary(A)
str(A) # note that *internally* 0-based row indices are used
(sA <- sparseMatrix(i, j, x = x, symmetric = TRUE)) ## 10 x 10 "dsCMatrix"
(tA <- sparseMatrix(i, j, x = x, triangular= TRUE)) ## 10 x 10 "dtCMatrix"
stopifnot( all(sA == tA + t(tA)) ,
identical(sA, as(tA + t(tA), "symmetricMatrix")))
## dims can be larger than the maximum row or column indices
(AA <- sparseMatrix(c(1,3:8), c(2,9,6:10), x = 7 * (1:7), dims = c(10,20)))
summary(AA)
## i, j and x can be in an arbitrary order, as long as they are consistent
set.seed(1); (perm <- sample(1:7))
(A1 <- sparseMatrix(i[perm], j[perm], x = x[perm]))
stopifnot(identical(A, A1))
## The slots are 0-index based, so
try( sparseMatrix(i=A@i, p=A@p, x= seq_along(A@x)) )
## fails and you should say so: 1-indexing is FALSE:
sparseMatrix(i=A@i, p=A@p, x= seq_along(A@x), index1 = FALSE)
## the (i,j) pairs can be repeated, in which case the x's are summed
(args <- data.frame(i = c(i, 1), j = c(j, 2), x = c(x, 2)))
(Aa <- do.call(sparseMatrix, args))
## explicitly ask for elimination of such duplicates, so
## that the last one is used:
(A. <- do.call(sparseMatrix, c(args, list(use.last.ij = TRUE))))
stopifnot(Aa[1,2] == 9, # 2+7 == 9
A.[1,2] == 2) # 2 was *after* 7
## for a pattern matrix, of course there is no "summing":
(nA <- do.call(sparseMatrix, args[c("i","j")]))
dn <- list(LETTERS[1:3], letters[1:5])
## pointer vectors can be used, and the (i,x) slots are sorted if necessary:
m <- sparseMatrix(i = c(3,1, 3:2, 2:1), p= c(0:2, 4,4,6), x = 1:6, dimnames = dn)
m
str(m)
stopifnot(identical(dimnames(m), dn))
sparseMatrix(x = 2.72, i=1:3, j=2:4) # recycling x
sparseMatrix(x = TRUE, i=1:3, j=2:4) # recycling x, |--> "lgCMatrix"
## no 'x' --> patter*n* matrix:
(n <- sparseMatrix(i=1:6, j=rev(2:7)))# -> ngCMatrix
## an empty sparse matrix:
(e <- sparseMatrix(dims = c(4,6), i={}, j={}))
## a symmetric one:
(sy <- sparseMatrix(i= c(2,4,3:5), j= c(4,7:5,5), x = 1:5,
dims = c(7,7), symmetric=TRUE))
stopifnot(isSymmetric(sy),
identical(sy, ## switch i <-> j {and transpose }
t( sparseMatrix(j= c(2,4,3:5), i= c(4,7:5,5), x = 1:5,
dims = c(7,7), symmetric=TRUE))))
## rsparsematrix() calls sparseMatrix() :
M1 <- rsparsematrix(1000, 20, nnz = 200)
summary(M1)
## pointers example in converting from other sparse matrix representations.
if(require(SparseM) && packageVersion("SparseM") >= 0.87 &&
nzchar(dfil <- system.file("extdata", "rua_32_ax.rua", package = "SparseM"))) {
X <- model.matrix(read.matrix.hb(dfil))
XX <- sparseMatrix(j = X@ja, p = X@ia - 1L, x = X@ra, dims = X@dimension)
validObject(XX)
## Alternatively, and even more user friendly :
X. <- as(X, "Matrix") # or also
X2 <- as(X, "sparseMatrix")
stopifnot(identical(XX, X.), identical(X., X2))
}
```
| programming_docs |
r None
`TsparseMatrix-class` Class "TsparseMatrix" of Sparse Matrices in Triplet Form
-------------------------------------------------------------------------------
### Description
The `"TsparseMatrix"` class is the virtual class of all sparse matrices coded in triplet form. Since it is a virtual class, no objects may be created from it. See `showClass("TsparseMatrix")` for its subclasses.
### Slots
`Dim`, `Dimnames`:
from the `"[Matrix](matrix-class)"` class,
`i`:
Object of class `"integer"` - the row indices of non-zero entries *in 0-base*, i.e., must be in `0:(nrow(.)-1)`.
`j`:
Object of class `"integer"` - the column indices of non-zero entries. Must be the same length as slot `i` and *0-based* as well, i.e., in `0:(ncol(.)-1)`. For numeric Tsparse matrices, `(i,j)` pairs can occur more than once, see `[dgTMatrix](dgtmatrix-class)`.
### Extends
Class `"sparseMatrix"`, directly. Class `"Matrix"`, by class `"sparseMatrix"`.
### Methods
Extraction (`"["`) methods, see `[[-methods](xtrct-methods)`.
### Note
Most operations with sparse matrices are performed using the compressed, column-oriented or `[CsparseMatrix](csparsematrix-class)` representation. The triplet representation is convenient for creating a sparse matrix or for reading and writing such matrices. Once it is created, however, the matrix is generally coerced to a `[CsparseMatrix](csparsematrix-class)` for further operations.
Note that all `new(.)`, `[spMatrix](spmatrix)` and `[sparseMatrix](sparsematrix)(*, repr="T")` constructors for `"TsparseMatrix"` classes implicitly add (i.e., “sum up”) *x\_k*'s that belong to identical *(i\_k, j\_k)* pairs, see, the example below, or also `"[dgTMatrix](dgtmatrix-class)"`.
For convenience, methods for some operations such as `%*%` and `crossprod` are defined for `[TsparseMatrix](tsparsematrix-class)` objects. These methods simply coerce the `[TsparseMatrix](tsparsematrix-class)` object to a `[CsparseMatrix](csparsematrix-class)` object then perform the operation.
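The implicit summing of duplicated *(i\_k, j\_k)* pairs mentioned in the Note can be seen in a minimal sketch (assuming the Matrix package is attached; values are arbitrary):

```
library(Matrix)
## two entries at position (1,1) in triplet form:
T <- spMatrix(2, 2, i = c(1, 1, 2), j = c(1, 1, 2), x = c(10, 5, 3))
C <- as(T, "CsparseMatrix")    # duplicates are summed on coercion
stopifnot(C[1, 1] == 15, C[2, 2] == 3)
```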
### See Also
its superclass, `[sparseMatrix](sparsematrix-class)`, and the `[dgTMatrix](dgtmatrix-class)` class, for the links to other classes.
### Examples
```
showClass("TsparseMatrix")
## or just the subclasses' names
names(getClass("TsparseMatrix")@subclasses)
T3 <- spMatrix(3,4, i=c(1,3:1), j=c(2,4:2), x=1:4)
T3 # only 3 non-zero entries, 5 = 1+4 !
```
r None
`nsyMatrix-class` Symmetric Dense Nonzero-Pattern Matrices
-----------------------------------------------------------
### Description
The `"nsyMatrix"` class is the class of symmetric, dense nonzero-pattern matrices in non-packed storage and `"nspMatrix"` is the class of of these in packed storage. Only the upper triangle or the lower triangle is stored.
### Objects from the Class
Objects can be created by calls of the form `new("nsyMatrix", ...)`.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, and "L", for lower triangular.
`x`:
Object of class `"logical"`. The logical values that constitute the matrix, stored in column-major order.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)` class.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
`"nsyMatrix"` extends class `"ngeMatrix"`, directly, whereas
`"nspMatrix"` extends class `"ndenseMatrix"`, directly.
Both extend class `"symmetricMatrix"`, directly, and class `"Matrix"` and others, *in*directly, use `[showClass](../../methods/html/rclassutils)("nsyMatrix")`, e.g., for details.
### Methods
Currently, mainly `[t](../../base/html/t)()` and coercion methods (for `[as](../../methods/html/as)(.)`; use, e.g., `[showMethods](../../methods/html/showmethods)(class="dsyMatrix")` for details.
### See Also
`[ngeMatrix](ngematrix-class)`, `[Matrix](matrix-class)`, `[t](../../base/html/t)`
### Examples
```
(s0 <- new("nsyMatrix"))
(M2 <- Matrix(c(TRUE, NA,FALSE,FALSE), 2,2)) # logical dense (ltr)
(sM <- M2 & t(M2)) # "lge"
class(sM <- as(sM, "nMatrix")) # -> "nge"
(sM <- as(sM, "nsyMatrix")) # -> "nsy"
str ( sM <- as(sM, "nspMatrix")) # -> "nsp": packed symmetric
```
r None
`nsparseMatrix-classes` Sparse "pattern" Matrices
--------------------------------------------------
### Description
The `nsparseMatrix` class is a virtual class of sparse *“pattern”* matrices, i.e., binary matrices conceptually with `TRUE`/`FALSE` entries. Only the positions of the elements that are `TRUE` are stored.
These can be stored in the “triplet” form (`[TsparseMatrix](tsparsematrix-class)`, subclasses `ngTMatrix`, `nsTMatrix`, and `ntTMatrix` which really contain pairs, not triplets) or in compressed column-oriented form (class `[CsparseMatrix](csparsematrix-class)`, subclasses `ngCMatrix`, `nsCMatrix`, and `ntCMatrix`) or–*rarely*–in compressed row-oriented form (class `[RsparseMatrix](rsparsematrix-class)`, subclasses `ngRMatrix`, `nsRMatrix`, and `ntRMatrix`). The second letter in the name of these non-virtual classes indicates `g`eneral, `s`ymmetric, or `t`riangular.
### Objects from the Class
Objects can be created by calls of the form `new("ngCMatrix",
...)` and so on. More frequently objects are created by coercion of a numeric sparse matrix to the pattern form for use in the symbolic analysis phase of an algorithm involving sparse matrices. Such algorithms often involve two phases: a symbolic phase wherein the positions of the non-zeros in the result are determined and a numeric phase wherein the actual results are calculated. During the symbolic phase only the positions of the non-zero elements in any operands are of interest, hence numeric sparse matrices can be treated as sparse pattern matrices.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, and "L", for lower triangular. Present in the triangular and symmetric classes but not in the general class.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"` for non-unit. The implicit diagonal elements are not explicitly stored when `diag` is `"U"`. Present in the triangular classes only.
`p`:
Object of class `"integer"` of pointers, one for each column (row), to the initial (zero-based) index of elements in the column. Present in compressed column-oriented and compressed row-oriented forms only.
`i`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the row numbers for each TRUE element in the matrix. All other elements are FALSE. Present in triplet and compressed column-oriented forms only.
`j`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the column numbers for each TRUE element in the matrix. All other elements are FALSE. Present in triplet and compressed column-oriented forms only.
`Dim`:
Object of class `"integer"` - the dimensions of the matrix.
### Methods
coerce
`signature(from = "dgCMatrix", to =
"ngCMatrix")`, and many similar ones; typically you should coerce to `"nsparseMatrix"` (or `"nMatrix"`). Note that coercion to a sparse pattern matrix records all the potential non-zero entries, i.e., explicit (“non-structural”) zeroes are coerced to `TRUE`, not `FALSE`, see the example.
t
`signature(x = "ngCMatrix")`: returns the transpose of `x`
which
`signature(x = "lsparseMatrix")`, semantically equivalent to base function `[which](../../base/html/which)(x, arr.ind)`; for details, see the `[lMatrix](dmatrix-class)` class documentation.
### See Also
the class `[dgCMatrix](dgcmatrix-class)`
### Examples
```
(m <- Matrix(c(0,0,2:0), 3,5, dimnames=list(LETTERS[1:3],NULL)))
## ``extract the nonzero-pattern of (m) into an nMatrix'':
nm <- as(m, "nsparseMatrix") ## -> will be a "ngCMatrix"
str(nm) # no 'x' slot
nnm <- !nm # no longer sparse
(nnm <- as(nnm, "sparseMatrix"))# "lgCMatrix"
## consistency check:
stopifnot(xor(as( nm, "matrix"),
as(nnm, "matrix")))
## low-level way of adding "non-structural zeros" :
nnm@x[2:4] <- c(FALSE,NA,NA)
nnm
as(nnm, "nMatrix") # NAs *and* non-structural 0 |---> 'TRUE'
data(KNex)
nmm <- as(KNex $ mm, "ngCMatrix")
str(xlx <- crossprod(nmm))# "nsCMatrix"
stopifnot(isSymmetric(xlx))
image(xlx, main=paste("crossprod(nmm) : Sparse", class(xlx)))
```
r None
`cBind` 'cbind()' and 'rbind()' recursively built on cbind2/rbind2
-------------------------------------------------------------------
### Description
The base functions `[cbind](../../base/html/cbind)` and `[rbind](../../base/html/cbind)` are defined for an arbitrary number of arguments and hence have the first formal argument `...`. Now, when S4 objects are found among the arguments, base `cbind()` and `rbind()` internally “dispatch” *recursively*, calling `[cbind2](../../methods/html/cbind2)` or `[rbind2](../../methods/html/cbind2)` respectively, where these have methods defined and so should dispatch appropriately.
`[cbind2](../../methods/html/cbind2)()` and `[rbind2](../../methods/html/cbind2)()` are from the methods package, i.e., standard **R**, and have been provided for binding together *two* matrices, where in Matrix, we have defined methods for these and the `'Matrix'` matrices.
### Usage
```
## cbind(..., deparse.level = 1)
## rbind(..., deparse.level = 1)
## and e.g.,
## S4 method for signature 'denseMatrix,sparseMatrix'
cbind2(x,y, sparse = NA, ...)
## S4 method for signature 'sparseMatrix,denseMatrix'
cbind2(x,y, sparse = NA, ...)
## S4 method for signature 'denseMatrix,sparseMatrix'
rbind2(x,y, sparse = NA, ...)
## S4 method for signature 'sparseMatrix,denseMatrix'
rbind2(x,y, sparse = NA, ...)
```
### Arguments
| | |
| --- | --- |
| `..., x, y` | matrix-like **R** objects to be bound together, see `[cbind](../../base/html/cbind)` and `[rbind](../../base/html/cbind)`. |
| `sparse` | optional `[logical](../../base/html/logical)` indicating if the result should be sparse, i.e., formally inheriting from `"[sparseMatrix](sparsematrix-class)"`. The default, `[NA](../../base/html/na)`, decides from the “sparsity” of `x` and `y`, see e.g., the **R** code in `selectMethod(cbind2, c("sparseMatrix","denseMatrix"))`. |
| `deparse.level` | integer determining under which circumstances column and row names are built from the actual arguments' ‘expression’, see `[cbind](../../base/html/cbind)`. |
### Value
typically a ‘matrix-like’ object of a similar `[class](../../base/html/class)` as the first argument in `...`.
Note that sometimes by default, the result is a `[sparseMatrix](sparsematrix-class)` if one of the arguments is (even in the case where this is not efficient). In other cases, the result is chosen to be sparse when there are more zero entries than non-zero ones (as the default `sparse` in `[Matrix](matrix)()`).
### Historical Remark
Before **R** version 3.2.0 (April 2015), we have needed a substitute for *S4-enabled* versions of `cbind` and `rbind`, and provided `cBind` and `rBind` with identical syntax and semantics in order to bind together multiple matrices (`"matrix"` or `"Matrix"`) and vectors. With **R** version 3.2.0 and newer, `cBind` and `rBind` are *deprecated* and produce a deprecation warning (via `[.Deprecated](../../base/html/deprecated)`), and your code should start using `cbind()` and `rbind()` instead.
### Author(s)
Martin Maechler
### See Also
`[cbind2](../../methods/html/cbind2)`, `[cbind](../../base/html/cbind)`, Documentation in base **R**'s methods package.
Our class definition help pages mentioning `cbind2()` and `rbind2()` methods: `"[denseMatrix](densematrix-class)"`, `"[diagonalMatrix](diagonalmatrix-class)"`, `"[indMatrix](indmatrix-class)"`.
### Examples
```
(a <- matrix(c(2:1,1:2), 2,2))
(M1 <- cbind(0, rbind(a, 7))) # a traditional matrix
D <- Diagonal(2)
(M2 <- cbind(4, a, D, -1, D, 0)) # a sparse Matrix
stopifnot(validObject(M2), inherits(M2, "sparseMatrix"),
dim(M2) == c(2,9))
```
r None
`matrix-products` Matrix (Cross) Products (of Transpose)
---------------------------------------------------------
### Description
The basic matrix product, `%*%` is implemented for all our `[Matrix](matrix-class)` and also for `[sparseVector](sparsevector-class)` classes, fully analogously to **R**'s base `matrix` and vector objects.
The functions `[crossprod](matrix-products)` and `[tcrossprod](matrix-products)` are matrix products or “cross products”, ideally implemented efficiently without computing `[t](../../base/html/t)(.)`'s unnecessarily. They also return `[symmetricMatrix](symmetricmatrix-class)` classed matrices when easily detectable, e.g., in `crossprod(m)`, the one argument case.
`tcrossprod()` takes the cross-product of the transpose of a matrix. `tcrossprod(x)` is formally equivalent to, but faster than, the call `x %*% t(x)`, and so is `tcrossprod(x, y)` instead of `x %*% t(y)`.
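A minimal sketch of this equivalence (assuming the Matrix package is attached, with an arbitrary sparse example):

```
library(Matrix)
x <- Matrix(c(1, 0, 0, 2, 3, 0), 2, 3, sparse = TRUE)
stopifnot(all(tcrossprod(x) == x %*% t(x)),
          ## the one-argument case yields a symmetric classed matrix:
          is(tcrossprod(x), "symmetricMatrix"))
```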
*Boolean* matrix products are computed via either `[%&%](boolean-matprod)` or `boolArith = TRUE`.
### Usage
```
## S4 method for signature 'CsparseMatrix,diagonalMatrix'
x %*% y
## S4 method for signature 'dgeMatrix,missing'
crossprod(x, y = NULL, boolArith = NA, ...)
## S4 method for signature 'CsparseMatrix,diagonalMatrix'
crossprod(x, y = NULL, boolArith = NA, ...)
## .... and for many more signatures
## S4 method for signature 'CsparseMatrix,ddenseMatrix'
tcrossprod(x, y = NULL, boolArith = NA, ...)
## S4 method for signature 'TsparseMatrix,missing'
tcrossprod(x, y = NULL, boolArith = NA, ...)
## .... and for many more signatures
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix-like object |
| `y` | a matrix-like object, or for `[t]crossprod()` `NULL` (by default); the latter case is formally equivalent to `y = x`. |
| `boolArith` | `[logical](../../base/html/logical)`, i.e., `NA`, `TRUE`, or `FALSE`. If true the result is (coerced to) a pattern matrix, i.e., `"[nMatrix](nmatrix-class)"`, unless there are `NA` entries and the result will be a `"[lMatrix](dmatrix-class)"`. If false the result is (coerced to) numeric. When `NA`, currently the default, the result is a pattern matrix when `x` and `y` are `"[nsparseMatrix](nsparsematrix-classes)"` and numeric otherwise. |
| `...` | potentially more arguments passed to and from methods. |
### Details
For some classes in the `Matrix` package, such as `[dgCMatrix](dgcmatrix-class)`, it is much faster to calculate the cross-product of the transpose directly instead of calculating the transpose first and then its cross-product.
For regular (“non-cross”) matrix products via `%*%`, `boolArith = TRUE` cannot be specified. Instead, we provide the `[%&%](boolean-matprod)` operator for *boolean* matrix products.
### Value
A `[Matrix](matrix-class)` object, in the one argument case of an appropriate *symmetric* matrix class, i.e., inheriting from `[symmetricMatrix](symmetricmatrix-class)`.
### Methods
%\*%
`signature(x = "dgeMatrix", y = "dgeMatrix")`: Matrix multiplication; ditto for several other signature combinations, see `showMethods("%*%", class = "dgeMatrix")`.
%\*%
`signature(x = "dtrMatrix", y = "matrix")` and other signatures (use `showMethods("%*%", class="dtrMatrix")`): matrix multiplication. Multiplication of (matching) triangular matrices now should remain triangular (in the sense of class [triangularMatrix](triangularmatrix-class)).
crossprod
`signature(x = "dgeMatrix", y = "dgeMatrix")`: ditto for several other signatures, use `showMethods("crossprod", class = "dgeMatrix")`, matrix crossproduct, an efficient version of `t(x) %*% y`.
crossprod
`signature(x = "CsparseMatrix", y = "missing")` returns `t(x) %*% x` as a `dsCMatrix` object.
crossprod
`signature(x = "TsparseMatrix", y = "missing")` returns `t(x) %*% x` as a `dsCMatrix` object.
crossprod,tcrossprod
`signature(x = "dtrMatrix", y = "matrix")` and other signatures, see `"%*%"` above.
### Note
`boolArith = TRUE`, `FALSE` or `NA` has been newly introduced for Matrix 1.2.0 (March 2015). Its implementation may still be incomplete or missing for some method signatures. Please report such omissions if detected!
Currently, `boolArith = TRUE` is implemented via `[CsparseMatrix](csparsematrix-class)` coercions which may be quite inefficient for dense matrices. Contributions for efficiency improvements are welcome.
### See Also
`[tcrossprod](../../base/html/crossprod)` in **R**'s base, `[crossprod](matrix-products)` and `[%\*%](matrix-products)`.
### Examples
```
## A random sparse "incidence" matrix :
m <- matrix(0, 400, 500)
set.seed(12)
m[runif(314, 0, length(m))] <- 1
mm <- as(m, "dgCMatrix")
object.size(m) / object.size(mm) # smaller by a factor of > 200
## tcrossprod() is very fast:
system.time(tCmm <- tcrossprod(mm))# 0 (PIII, 933 MHz)
system.time(cm <- crossprod(t(m))) # 0.16
system.time(cm. <- tcrossprod(m)) # 0.02
stopifnot(cm == as(tCmm, "matrix"))
## show sparse sub matrix
tCmm[1:16, 1:30]
```
r None
`dgRMatrix-class` Sparse Compressed, Row-oriented Numeric Matrices
-------------------------------------------------------------------
### Description
The `dgRMatrix` class is a class of sparse numeric matrices in the compressed, sparse, row-oriented format. In this implementation the non-zero elements in the rows are sorted into increasing column order.
**Note:** The column-oriented sparse classes, e.g., `[dgCMatrix](dgcmatrix-class)`, are preferred and better supported in the Matrix package.
### Objects from the Class
Objects can be created by calls of the form `new("dgRMatrix", ...)`.
### Slots
`j`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the column numbers for each non-zero element in the matrix.
`p`:
Object of class `"integer"` of pointers, one for each row, to the initial (zero-based) index of elements in the row.
`x`:
Object of class `"numeric"` - the non-zero elements of the matrix.
`Dim`:
Object of class `"integer"` - the dimensions of the matrix.
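To see the row-oriented layout concretely (a minimal sketch; the coercion goes through the virtual `RsparseMatrix` class, which yields a `dgRMatrix` for a double matrix):

```r
library(Matrix)
m <- matrix(c(0, 0, 2,
              3, 0, 0), nrow = 2, byrow = TRUE)
R <- as(m, "RsparseMatrix")
stopifnot(is(R, "dgRMatrix"))
R@p   # row pointers, one per row plus the final count: 0 1 2
R@j   # 0-based column numbers of the non-zeros, row by row: 2 0
R@x   # the corresponding non-zero values: 2 3
stopifnot(identical(as(R, "matrix"), m))
```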
### Methods
coerce
`signature(from = "matrix", to = "dgRMatrix")`
coerce
`signature(from = "dgRMatrix", to = "matrix")`
coerce
`signature(from = "dgRMatrix", to = "dgTMatrix")`
diag
`signature(x = "dgRMatrix")`: returns the diagonal of `x`
dim
`signature(x = "dgRMatrix")`: returns the dimensions of `x`
image
`signature(x = "dgRMatrix")`: plots an image of `x` using the `[levelplot](../../lattice/html/levelplot)` function
### See Also
the `[RsparseMatrix](rsparsematrix-class)` class, the virtual class of all sparse compressed **r**ow-oriented matrices, with its methods. The `[dgCMatrix](dgcmatrix-class)` class (**c**olumn compressed sparse) is really preferred.
r None
`unused-classes` Virtual Classes Not Yet Really Implemented and Used
---------------------------------------------------------------------
### Description
`iMatrix` is the virtual class of all **i**nteger (S4) matrices. It extends the `[Matrix](matrix-class)` class directly.
`zMatrix` is the virtual class of all `[complex](../../base/html/complex)` (S4) matrices. It extends the `[Matrix](matrix-class)` class directly.
### Examples
```
showClass("iMatrix")
showClass("zMatrix")
```
r None
`ldiMatrix-class` Class "ldiMatrix" of Diagonal Logical Matrices
-----------------------------------------------------------------
### Description
The class `"ldiMatrix"` of logical diagonal matrices.
### Objects from the Class
Objects can be created by calls of the form `new("ldiMatrix", ...)` but typically rather via `[Diagonal](diagonal)`.
### Slots
`x`:
`"logical"` vector.
`diag`:
`"character"` string, either "U" or "N", see `[ddiMatrix](ddimatrix-class)`.
`Dim`,`Dimnames`:
matrix dimension and `[dimnames](../../base/html/dimnames)`, see the `[Matrix](matrix-class)` class description.
### Extends
Class `"[diagonalMatrix](diagonalmatrix-class)"` and class `"[lMatrix](dmatrix-class)"`, directly.
Class `"[sparseMatrix](sparsematrix-class)"`, by class `"diagonalMatrix"`.
### See Also
Classes `[ddiMatrix](ddimatrix-class)` and `[diagonalMatrix](diagonalmatrix-class)`; function `[Diagonal](diagonal)`.
### Examples
```
(lM <- Diagonal(x = c(TRUE,FALSE,FALSE)))
str(lM)#> gory details (slots)
crossprod(lM) # numeric
(nM <- as(lM, "nMatrix"))# -> sparse (not formally ``diagonal'')
crossprod(nM) # logical sparse
```
| programming_docs |
r None
`ndenseMatrix-class` Virtual Class "ndenseMatrix" of Dense Logical Matrices
----------------------------------------------------------------------------
### Description
`ndenseMatrix` is the virtual class of all dense **l**ogical (S4) matrices. It extends both `[denseMatrix](densematrix-class)` and `[lMatrix](dmatrix-class)` directly.
### Slots
`x`:
logical vector containing the entries of the matrix.
`Dim`, `Dimnames`:
see `[Matrix](matrix-class)`.
### Extends
Class `"nMatrix"`, directly. Class `"denseMatrix"`, directly. Class `"Matrix"`, by class `"nMatrix"`. Class `"Matrix"`, by class `"denseMatrix"`.
### Methods
%\*%
`signature(x = "nsparseMatrix", y = "ndenseMatrix")`: ...
%\*%
`signature(x = "ndenseMatrix", y = "nsparseMatrix")`: ...
coerce
`signature(from = "matrix", to = "ndenseMatrix")`: ...
coerce
`signature(from = "ndenseMatrix", to = "matrix")`: ...
crossprod
`signature(x = "nsparseMatrix", y = "ndenseMatrix")`: ...
crossprod
`signature(x = "ndenseMatrix", y = "nsparseMatrix")`: ...
as.vector
`signature(x = "ndenseMatrix", mode = "missing")`: ...
diag
`signature(x = "ndenseMatrix")`: extracts the diagonal as for all matrices, see the generic `[diag](../../base/html/diag)()`.
which
`signature(x = "ndenseMatrix")`, semantically equivalent to base function `[which](../../base/html/which)(x, arr.ind)`; for details, see the `[lMatrix](dmatrix-class)` class documentation.
### See Also
Class `[ngeMatrix](ngematrix-class)` and the other subclasses.
### Examples
```
showClass("ndenseMatrix")
as(diag(3) > 0, "ndenseMatrix")# -> "nge"
```
r None
`facmul` Multiplication by Decomposition Factors
-------------------------------------------------
### Description
Performs multiplication by factors for certain decompositions (and allows explicit formation of those factors).
### Usage
```
facmul(x, factor, y, transpose, left, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix decomposition. No missing values or IEEE special values are allowed. |
| `factor` | an indicator for selecting a particular factor for multiplication. |
| `y` | a matrix or vector to be multiplied by the factor or its transpose. No missing values or IEEE special values are allowed. |
| `transpose` | a logical value. When `FALSE` (the default) the factor is applied. When `TRUE` the transpose of the factor is applied. |
| `left` | a logical value. When `TRUE` (the default) the factor is applied from the left. When `FALSE` the factor is applied from the right. |
| `...` | the method for `"qr.Matrix"` has additional arguments. |
### Value
the product of the selected factor (or its transpose) and `y`
### NOTE
Factors for decompositions such as `lu` and `qr` can be stored in a compact form. The function `facmul` allows multiplication without explicit formation of the factors, saving both storage and operations.
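Since `facmul()` methods are not generally available, the same idea can be seen with base **R**'s QR tools, which multiply by *Q* or *Q'* without ever forming *Q* (a base-R sketch of the concept, not the `facmul` interface itself):

```r
set.seed(7)
x <- matrix(rnorm(9), 3, 3)
y <- rnorm(3)
qx <- qr(x)                       # compactly stored QR factorization
Q  <- qr.Q(qx)                    # explicit Q, formed here only for comparison
stopifnot(all.equal(qr.qy (qx, y), drop(Q %*% y)),           # Q  %*% y
          all.equal(qr.qty(qx, y), drop(crossprod(Q, y))))   # t(Q) %*% y
```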
### References
Golub, G., and Van Loan, C. F. (1989). *Matrix Computations,* 2nd edition, Johns Hopkins, Baltimore.
### Examples
```
library(Matrix)
x <- Matrix(rnorm(9), 3, 3)
## Not run:
qrx <- qr(x) # QR factorization of x
y <- rnorm(3)
facmul( qr(x), factor = "Q", y) # form Q y
## End(Not run)
```
r None
`rleDiff-class` Class "rleDiff" of rle(diff(.)) Stored Vectors
---------------------------------------------------------------
### Description
Class `"rleDiff"` is for compactly storing long vectors which mainly consist of *linear* stretches. For such a vector `x`, `[diff](../../base/html/diff)(x)` consists of *constant* stretches and is hence well compressable via `[rle](../../base/html/rle)()`.
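The compression idea is plain base **R**: for a vector consisting of linear stretches, `diff()` is piecewise constant, so `rle()` needs only one run per stretch (a base-R sketch):

```r
x <- c(seq(2, 100), seq(99, -2))  # two linear stretches, 201 values in all
(r <- rle(diff(x)))               # only two runs: 98 steps of +1, 102 of -1
## the original vector is recovered from its first element and the runs:
stopifnot(identical(x, cumsum(c(x[1], inverse.rle(r)))))
```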
### Objects from the Class
Objects can be created by calls of the form `new("rleDiff", ...)`.
Currently experimental, see below.
### Slots
`first`:
A single number (of class `"numLike"`, a class union of `"numeric"` and `"logical"`).
`rle`:
Object of class `"rle"`, basically a `[list](../../base/html/list)` with components `"lengths"` and `"values"`, see `[rle](../../base/html/rle)()`. As this is used to encode potentially huge index vectors, `lengths` may be of type `[double](../../base/html/double)` here.
### Methods
There is a simple `[show](../../methods/html/show)` method only.
### Note
This is currently an *experimental* auxiliary class for the class `[abIndex](abindex-class)`, see there.
### See Also
`[rle](../../base/html/rle)`, `[abIndex](abindex-class)`.
### Examples
```
showClass("rleDiff")
ab <- c(abIseq(2, 100), abIseq(20, -2))
ab@rleD # is "rleDiff"
```
r None
`Matrix-class` Virtual Class "Matrix" Class of Matrices
--------------------------------------------------------
### Description
The `Matrix` class is a class contained by all actual classes in the Matrix package. It is a “virtual” class.
### Slots
Common to *all* matrix objects in the package:
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
`Dimnames`:
list of length two; each component containing `NULL` or a `[character](../../base/html/character)` vector of length equal to the corresponding `Dim` element.
### Methods
determinant
`signature(x = "Matrix", logarithm = "missing")`: and
determinant
`signature(x = "Matrix", logarithm = "logical")`: compute the (*log*) determinant of `x`. The method chosen depends on the actual Matrix class of `x`. Note that `[det](../../base/html/det)` also works for all our matrices, calling the appropriate `determinant()` method. The `Matrix::det` is an exact copy of `base::det`, but in the correct namespace, and hence calling the S4-aware version of `determinant()`.
diff
`signature(x = "Matrix")`: As `[diff](../../base/html/diff)()` for traditional matrices, i.e., applying `diff()` to each column.
dim
`signature(x = "Matrix")`: extract matrix dimensions `[dim](../../base/html/dim)`.
dim<-
`signature(x = "Matrix", value = "ANY")`: where `value` is integer of length 2. Allows to *reshape* Matrix objects, but only when `prod(value) == prod(dim(x))`.
dimnames
`signature(x = "Matrix")`: extract `[dimnames](../../base/html/dimnames)`.
dimnames<-
`signature(x = "Matrix", value = "list")`: set the `dimnames` to a `[list](../../base/html/list)` of length 2, see `[dimnames<-](../../base/html/dimnames)`.
length
`signature(x = "Matrix")`: simply defined as `prod(dim(x))` (and hence of mode `"double"`).
show
`signature(object = "Matrix")`: `[show](../../methods/html/show)` method for `[print](../../base/html/print)`ing. For printing *sparse* matrices, see `[printSpMatrix](printspmatrix)`.
image
`signature(object = "Matrix")`: draws an `[image](../../graphics/html/image)` of the matrix entries, using `[levelplot](../../lattice/html/levelplot)()` from package lattice.
head
`signature(object = "Matrix")`: return only the *“head”*, i.e., the first few rows.
tail
`signature(object = "Matrix")`: return only the *“tail”*, i.e., the last few rows of the respective matrix.
as.matrix, as.array
`signature(x = "Matrix")`: the same as `as(x, "matrix")`; see also the note below.
as.vector
`signature(x = "Matrix", mode = "missing")`: `as.vector(m)` should be identical to `as.vector(as(m, "matrix"))`, implemented more efficiently for some subclasses.
as(x, "vector"), as(x, "numeric")
etc, similarly.
coerce
`signature(from = "ANY", to = "Matrix")`: This relies on a correct `[as.matrix](../../base/html/matrix)()` method for `from`.
There are many more methods that (conceptually should) work for all `"Matrix"` objects, e.g., `[colSums](colsums)`, `[rowMeans](colsums)`. Even base functions may work automagically (if they first call `[as.matrix](../../base/html/matrix)()` on their principal argument), e.g., `[apply](../../base/html/apply)`, `[eigen](../../base/html/eigen)`, `[svd](../../base/html/svd)` or `[kappa](../../base/html/kappa)` all do work via coercion to a “traditional” (dense) `[matrix](../../base/html/matrix)`.
### Note
Loading the `Matrix` namespace “overloads” `[as.matrix](../../base/html/matrix)` and `[as.array](../../base/html/array)` in the base namespace by the equivalent of `function(x) as(x, "matrix")`. Consequently, `as.matrix(m)` or `as.array(m)` will properly work when `m` inherits from the `"Matrix"` class — *also* for functions in package base and other packages. E.g., `[apply](../../base/html/apply)` or `[outer](../../base/html/outer)` can therefore be applied to `"Matrix"` matrices.
### Author(s)
Douglas Bates [[email protected]](mailto:[email protected]) and Martin Maechler
### See Also
the classes `[dgeMatrix](dgematrix-class)`, `[dgCMatrix](dgcmatrix-class)`, and function `[Matrix](matrix)` for construction (and examples).
Methods, e.g., for `[kronecker](kronecker-methods)`.
### Examples
```
slotNames("Matrix")
cl <- getClass("Matrix")
names(cl@subclasses) # more than 40 ..
showClass("Matrix")#> output with slots and all subclasses
(M <- Matrix(c(0,1,0,0), 6, 4))
dim(M)
diag(M)
cm <- M[1:4,] + 10*Diagonal(4)
diff(M)
## can reshape it even :
dim(M) <- c(2, 12)
M
stopifnot(identical(M, Matrix(c(0,1,0,0), 2,12)),
all.equal(det(cm),
determinant(as(cm,"matrix"), log=FALSE)$modulus,
check.attributes=FALSE))
```
r None
`Hilbert` Generate a Hilbert matrix
------------------------------------
### Description
Generate the `n` by `n` symmetric Hilbert matrix. Because these matrices are ill-conditioned for moderate to large `n`, they are often used for testing numerical linear algebra code.
### Usage
```
Hilbert(n)
```
### Arguments
| | |
| --- | --- |
| `n` | a non-negative integer. |
### Value
the `n` by `n` symmetric Hilbert matrix as a `"dpoMatrix"` object.
### See Also
the class `[dpoMatrix](dpomatrix-class)`
### Examples
```
Hilbert(6)
```
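The ill-conditioning is easy to observe through the reciprocal condition number (a sketch, assuming the Matrix package is attached):

```r
library(Matrix)
sapply(c(2, 6, 10), function(n) rcond(Hilbert(n)))
## the reciprocal condition number drops by many orders of
## magnitude already between n = 2 and n = 10
```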
r None
`dsCMatrix-class` Numeric Symmetric Sparse (column compressed) Matrices
------------------------------------------------------------------------
### Description
The `dsCMatrix` class is a class of symmetric, sparse numeric matrices in the compressed, **c**olumn-oriented format. In this implementation the non-zero elements in the columns are sorted into increasing row order.
The `dsTMatrix` class is the class of symmetric, sparse numeric matrices in **t**riplet format.
### Objects from the Class
Objects can be created by calls of the form `new("dsCMatrix", ...)` or `new("dsTMatrix", ...)`, or automatically via e.g., `as(*, "symmetricMatrix")`, or (for `dsCMatrix`) also from `[Matrix](matrix)(.)`.
Creation “from scratch” most efficiently happens via `[sparseMatrix](sparsematrix)(*, symmetric=TRUE)`.
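For example, specifying only one triangle of the non-zero entries (a minimal sketch; the triplet values below are made up):

```r
library(Matrix)
## upper triangle of a 3 x 3 symmetric matrix, given as (i, j, x) triplets
S <- sparseMatrix(i = c(1, 1, 2, 3), j = c(1, 3, 2, 3),
                  x = c(4, 1, 5, 6), symmetric = TRUE)
stopifnot(is(S, "dsCMatrix"), isSymmetric(S))
S
```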
### Slots
`uplo`:
A character object indicating if the upper triangle (`"U"`) or the lower triangle (`"L"`) is stored.
`i`:
Object of class `"integer"` of length nnZ (*half* number of non-zero elements). These are the row numbers for each non-zero element in the lower triangle of the matrix.
`p`:
(only in class `"dsCMatrix"`:) an `[integer](../../base/html/integer)` vector for providing pointers, one for each column, see the detailed description in `[CsparseMatrix](csparsematrix-class)`.
`j`:
(only in class `"dsTMatrix"`:) Object of class `"integer"` of length nnZ (as `i`). These are the column numbers for each non-zero element in the lower triangle of the matrix.
`x`:
Object of class `"numeric"` of length nnZ – the non-zero elements of the matrix (to be duplicated for full matrix).
`factors`:
Object of class `"list"` - a list of factorizations of the matrix.
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
### Extends
Both classes extend classes `[symmetricMatrix](symmetricmatrix-class)` and `[dsparseMatrix](dsparsematrix-class)` directly; `dsCMatrix` further directly extends `[CsparseMatrix](csparsematrix-class)`, whereas `dsTMatrix` extends `[TsparseMatrix](tsparsematrix-class)`.
### Methods
solve
`signature(a = "dsCMatrix", b = "....")`: `x <- solve(a,b)` solves *A x = b* for *x*; see `<solve-methods>`.
chol
`signature(x = "dsCMatrix", pivot = "logical")`: Returns (and stores) the Cholesky decomposition of `x`, see `<chol>`.
Cholesky
`signature(A = "dsCMatrix",...)`: Computes more flexibly Cholesky decompositions, see `[Cholesky](cholesky)`.
determinant
`signature(x = "dsCMatrix", logarithm = "missing")`: Evaluate the determinant of `x` on the logarithm scale. This creates and stores the Cholesky factorization.
determinant
`signature(x = "dsCMatrix", logarithm = "logical")`: Evaluate the determinant of `x` on the logarithm scale or not, according to the `logarithm` argument. This creates and stores the Cholesky factorization.
t
`signature(x = "dsCMatrix")`: Transpose. As for all symmetric matrices, a matrix for which the upper triangle is stored produces a matrix for which the lower triangle is stored and vice versa, i.e., the `uplo` slot is swapped, and the row and column indices are interchanged.
t
`signature(x = "dsTMatrix")`: Transpose. The `uplo` slot is swapped from `"U"` to `"L"` or vice versa, as for a `"dsCMatrix"`, see above.
coerce
`signature(from = "dsCMatrix", to = "dgTMatrix")`
coerce
`signature(from = "dsCMatrix", to = "dgeMatrix")`
coerce
`signature(from = "dsCMatrix", to = "matrix")`
coerce
`signature(from = "dsTMatrix", to = "dgeMatrix")`
coerce
`signature(from = "dsTMatrix", to = "dsCMatrix")`
coerce
`signature(from = "dsTMatrix", to = "dsyMatrix")`
coerce
`signature(from = "dsTMatrix", to = "matrix")`
### See Also
Classes `[dgCMatrix](dgcmatrix-class)`, `[dgTMatrix](dgtmatrix-class)`, `[dgeMatrix](dgematrix-class)` and those mentioned above.
### Examples
```
mm <- Matrix(toeplitz(c(10, 0, 1, 0, 3)), sparse = TRUE)
mm # automatically dsCMatrix
str(mm)
## how would we go from a manually constructed Tsparse* :
mT <- as(mm, "dgTMatrix")
## Either
(symM <- as(mT, "symmetricMatrix"))# dsT
(symC <- as(symM, "CsparseMatrix"))# dsC
## or
sC <- Matrix(mT, sparse=TRUE, forceCheck=TRUE)
sym2 <- as(symC, "TsparseMatrix")
## --> the same as 'symM', a "dsTMatrix"
```
r None
`colSums` Form Row and Column Sums and Means
---------------------------------------------
### Description
Form row and column sums and means for objects; for `[sparseMatrix](sparsematrix-class)` arguments the result may optionally be sparse (a `[sparseVector](sparsevector-class)`), too. Row or column names are kept, as for base matrices and base `[colSums](colsums)` methods, when the result is a `[numeric](../../base/html/numeric)` vector.
### Usage
```
colSums (x, na.rm = FALSE, dims = 1, ...)
rowSums (x, na.rm = FALSE, dims = 1, ...)
colMeans(x, na.rm = FALSE, dims = 1, ...)
rowMeans(x, na.rm = FALSE, dims = 1, ...)
## S4 method for signature 'CsparseMatrix'
colSums(x, na.rm = FALSE,
dims = 1, sparseResult = FALSE)
## S4 method for signature 'CsparseMatrix'
rowSums(x, na.rm = FALSE,
dims = 1, sparseResult = FALSE)
## S4 method for signature 'CsparseMatrix'
colMeans(x, na.rm = FALSE,
dims = 1, sparseResult = FALSE)
## S4 method for signature 'CsparseMatrix'
rowMeans(x, na.rm = FALSE,
dims = 1, sparseResult = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | a Matrix, i.e., inheriting from `[Matrix](matrix-class)`. |
| `na.rm` | logical. Should missing values (including `NaN`) be omitted from the calculations? |
| `dims` | completely ignored by the `Matrix` methods. |
| `...` | potentially further arguments, for method `<->` generic compatibility. |
| `sparseResult` | logical indicating if the result should be sparse, i.e., inheriting from class `[sparseVector](sparsevector-class)`. Only applicable when `x` is inheriting from a `[sparseMatrix](sparsematrix-class)` class. |
### Value
returns a numeric vector if `sparseResult` is `FALSE` as per default. Otherwise, returns a `[sparseVector](sparsevector-class)`.
`[dimnames](../../base/html/dimnames)(x)` are only kept (as `[names](../../base/html/names)(v)`) when the resulting `v` is `[numeric](../../base/html/numeric)`, since `[sparseVector](sparsevector)`s do not have names.
### See Also
`[colSums](../../base/html/colsums)` and the `[sparseVector](sparsevector-class)` classes.
### Examples
```
(M <- bdiag(Diagonal(2), matrix(1:3, 3,4), diag(3:2))) # 7 x 8
colSums(M)
d <- Diagonal(10, c(0,0,10,0,2,rep(0,5)))
MM <- kronecker(d, M)
dim(MM) # 70 80
length(MM@x) # 160, but many are '0' ; drop those:
MM <- drop0(MM)
length(MM@x) # 32
cm <- colSums(MM)
(scm <- colSums(MM, sparseResult = TRUE))
stopifnot(is(scm, "sparseVector"),
identical(cm, as.numeric(scm)))
rowSums (MM, sparseResult = TRUE) # 14 of 70 are not zero
colMeans(MM, sparseResult = TRUE) # 16 of 80 are not zero
## Since we have no 'NA's, these two are equivalent :
stopifnot(identical(rowMeans(MM, sparseResult = TRUE),
rowMeans(MM, sparseResult = TRUE, na.rm = TRUE)),
rowMeans(Diagonal(16)) == 1/16,
colSums(Diagonal(7)) == 1)
## dimnames(x) --> names( <value> ) :
dimnames(M) <- list(paste0("r", 1:7), paste0("V",1:8))
M
colSums(M)
rowMeans(M)
## Assertions :
stopifnot(all.equal(colSums(M),
setNames(c(1,1,6,6,6,6,3,2), colnames(M))),
all.equal(rowMeans(M), structure(c(1,1,4,8,12,3,2) / 8,
.Names = paste0("r", 1:7))))
```
r None
`kronecker-methods` Methods for Function 'kronecker()' in Package 'Matrix'
---------------------------------------------------------------------------
### Description
Computes Kronecker products for objects inheriting from `"[Matrix](matrix-class)"`.
In order to preserve sparseness, we treat `0 * NA` as `0`, not as `[NA](../../base/html/na)` as is usual in **R** (and as used by the base function `[kronecker](../../base/html/kronecker)`).
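A small check of this convention (a sketch, assuming the Matrix package is attached): with a sparse first factor, the structurally zero blocks stay zero even though the second factor contains `NA`s, whereas base `kronecker()` propagates `NA` into those blocks.

```r
library(Matrix)
X <- Diagonal(2)                 # structural zeros off the diagonal
Y <- matrix(c(NA, 1, 1, NA), 2, 2)
K <- kronecker(X, Y)
K                                # the off-diagonal 2 x 2 blocks are 0, not NA
stopifnot(sum(is.na(K)) == 2 * sum(is.na(Y)),   # NA only in the 1 * Y blocks
          ## base R, in contrast, has NA also in the 0 * NA blocks:
          sum(is.na(kronecker(diag(2), Y))) == 4 * sum(is.na(Y)))
```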
### Methods
kronecker
`signature(X = "Matrix", Y = "ANY")` .......
kronecker
`signature(X = "ANY", Y = "Matrix")` .......
kronecker
`signature(X = "diagonalMatrix", Y = "ANY")` .......
kronecker
`signature(X = "sparseMatrix", Y = "ANY")` .......
kronecker
`signature(X = "TsparseMatrix", Y = "TsparseMatrix")` .......
kronecker
`signature(X = "dgTMatrix", Y = "dgTMatrix")` .......
kronecker
`signature(X = "dtTMatrix", Y = "dtTMatrix")` .......
kronecker
`signature(X = "indMatrix", Y = "indMatrix")` .......
### Examples
```
(t1 <- spMatrix(5,4, x= c(3,2,-7,11), i= 1:4, j=4:1)) # 5 x 4
(t2 <- kronecker(Diagonal(3, 2:4), t1)) # 15 x 12
## should also work with special-cased logical matrices
l3 <- upper.tri(matrix(,3,3))
M <- Matrix(l3)
(N <- as(M, "nsparseMatrix")) # "ntCMatrix" (upper triangular)
N2 <- as(N, "generalMatrix") # (lost "t"riangularity)
MM <- kronecker(M,M)
NN <- kronecker(N,N) # "dtTMatrix" i.e. did keep
NN2 <- kronecker(N2,N2)
stopifnot(identical(NN,MM),
is(NN2, "sparseMatrix"), all(NN2 == NN),
is(NN, "triangularMatrix"))
```
r None
`sparseLU-class` Sparse LU decomposition of a square sparse matrix
-------------------------------------------------------------------
### Description
Objects of this class contain the components of the LU decomposition of a sparse square matrix.
### Objects from the Class
Objects can be created by calls of the form `new("sparseLU", ...)` but are more commonly created by function `<lu>()` applied to a sparse matrix, such as a matrix of class `[dgCMatrix](dgcmatrix-class)`.
### Slots
`L`:
Object of class `"[dtCMatrix](dtcmatrix-class)"`, the lower triangular factor from the left.
`U`:
Object of class `"[dtCMatrix](dtcmatrix-class)"`, the upper triangular factor from the right.
`p`:
Object of class `"integer"`, permutation applied from the left.
`q`:
Object of class `"integer"`, permutation applied from the right.
`Dim`:
the dimension of the original matrix; inherited from class `[MatrixFactorization](matrixfactorization-class)`.
### Extends
Class `"[LU](lu-class)"`, directly. Class `"[MatrixFactorization](matrixfactorization-class)"`, by class `"LU"`.
### Methods
expand
`signature(x = "sparseLU")` Returns a list with components `P`, `L`, `U`, and `Q`, where *P* and *Q* represent fill-reducing permutations, and *L*, and *U* the lower and upper triangular matrices of the decomposition. The original matrix corresponds to the product *P'LUQ*.
### Note
The decomposition is of the form
*A = P'LUQ,*
or equivalently *PAQ' = LU*, where all matrices are sparse and of size *n by n*. The matrices *P* and *Q*, and their transposes *P'* and *Q'* are permutation matrices, *L* is lower triangular and *U* is upper triangular.
### See Also
`<lu>`, `[solve](../../base/html/solve)`, `[dgCMatrix](dgcmatrix-class)`
### Examples
```
## Extending the one in examples(lu), calling the matrix A,
## and confirming the factorization identities :
A <- as(readMM(system.file("external/pores_1.mtx",
package = "Matrix")),
"CsparseMatrix")
## with dimnames(.) - to see that they propagate to L, U :
dimnames(A) <- dnA <- list(paste0("r", seq_len(nrow(A))),
paste0("C", seq_len(ncol(A))))
str(luA <- lu(A)) # p is a 0-based permutation of the rows
# q is a 0-based permutation of the columns
xA <- expand(luA)
## which is simply doing
stopifnot(identical(xA$ L, luA@L),
identical(xA$ U, luA@U),
identical(xA$ P, as(luA@p +1L, "pMatrix")),
identical(xA$ Q, as(luA@q +1L, "pMatrix")))
P.LUQ <- with(xA, t(P) %*% L %*% U %*% Q)
stopifnot(all.equal(A, P.LUQ, tolerance = 1e-12),
identical(dimnames(P.LUQ), dnA))
## permute rows and columns of original matrix
pA <- A[luA@p + 1L, luA@q + 1L]
stopifnot(identical(pA, with(xA, P %*% A %*% t(Q))))
pLU <- drop0(luA@L %*% luA@U) # L %*% U -- dropping extra zeros
stopifnot(all.equal(pA, pLU, tolerance = 1e-12))
```
| programming_docs |
r None
`bdiag` Construct a Block Diagonal Matrix
------------------------------------------
### Description
Build a block diagonal matrix given several building block matrices.
### Usage
```
bdiag(...)
.bdiag(lst)
```
### Arguments
| | |
| --- | --- |
| `...` | individual matrices or a `[list](../../base/html/list)` of matrices. |
| `lst` | non-empty `[list](../../base/html/list)` of matrices. |
### Details
For a non-trivial argument list, `bdiag()` calls `.bdiag()`. The latter may be useful to programmers.
### Value
A *sparse* matrix obtained by combining the arguments into a block diagonal matrix.
The value of `bdiag()` inherits from class `[CsparseMatrix](csparsematrix-class)`, whereas `.bdiag()` returns a `[TsparseMatrix](tsparsematrix-class)`.
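The class difference is easy to verify (a minimal sketch):

```r
library(Matrix)
blocks <- list(diag(2), matrix(1:4, 2, 2))
B1 <- bdiag(blocks)     # column-compressed result
B2 <- .bdiag(blocks)    # triplet result
stopifnot(is(B1, "CsparseMatrix"),
          is(B2, "TsparseMatrix"),
          all(B1 == B2))
```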
### Note
This function has been written and is efficient for the case of relatively few block matrices which are typically sparse themselves.
It is currently *inefficient* for the case of many small dense block matrices. For the case of *many* dense *k \* k* matrices, the `bdiag_m()` function in the ‘Examples’ is an order of magnitude faster.
### Author(s)
Martin Maechler, built on a version posted by Berton Gunter to R-help; earlier versions have been posted by other authors, notably Scott Chasalow to S-news. Doug Bates's faster implementation builds on `[TsparseMatrix](tsparsematrix-class)` objects.
### See Also
`[Diagonal](diagonal)` for constructing matrices of class `[diagonalMatrix](diagonalmatrix-class)`, or `[kronecker](../../base/html/kronecker)` which also works for `"Matrix"` inheriting matrices.
`[bandSparse](bandsparse)` constructs a *banded* sparse matrix from its non-zero sub-/super - diagonals.
Note that other CRAN **R** packages have own versions of `bdiag()` which return traditional matrices.
### Examples
```
bdiag(matrix(1:4, 2), diag(3))
## combine "Matrix" class and traditional matrices:
bdiag(Diagonal(2), matrix(1:3, 3,4), diag(3:2))
mlist <- list(1, 2:3, diag(x=5:3), 27, cbind(1,3:6), 100:101)
bdiag(mlist)
stopifnot(identical(bdiag(mlist),
bdiag(lapply(mlist, as.matrix))))
ml <- c(as(matrix((1:24)%% 11 == 0, 6,4),"nMatrix"),
rep(list(Diagonal(2, x=TRUE)), 3))
mln <- c(ml, Diagonal(x = 1:3))
stopifnot(is(bdiag(ml), "lsparseMatrix"),
is(bdiag(mln),"dsparseMatrix") )
## random (diagonal-)block-triangular matrices:
rblockTri <- function(nb, max.ni, lambda = 3) {
.bdiag(replicate(nb, {
n <- sample.int(max.ni, 1)
tril(Matrix(rpois(n*n, lambda=lambda), n,n)) }))
}
(T4 <- rblockTri(4, 10, lambda = 1))
image(T1 <- rblockTri(12, 20))
##' Fast version of Matrix :: .bdiag() -- for the case of *many* (k x k) matrices:
##' @param lmat list(<mat1>, <mat2>, ....., <mat_N>) where each mat_j is a k x k 'matrix'
##' @return a sparse (N*k x N*k) matrix of class \code{"\linkS4class{dgCMatrix}"}.
bdiag_m <- function(lmat) {
## Copyright (C) 2016 Martin Maechler, ETH Zurich
if(!length(lmat)) return(new("dgCMatrix"))
stopifnot(is.list(lmat), is.matrix(lmat[[1]]),
(k <- (d <- dim(lmat[[1]]))[1]) == d[2], # k x k
all(vapply(lmat, dim, integer(2)) == k)) # all of them
N <- length(lmat)
if(N * k > .Machine$integer.max)
stop("resulting matrix too large; would be M x M, with M=", N*k)
M <- as.integer(N * k)
## result: an M x M matrix
new("dgCMatrix", Dim = c(M,M),
## 'i :' maybe there's a faster way (w/o matrix indexing), but elegant?
i = as.vector(matrix(0L:(M-1L), nrow=k)[, rep(seq_len(N), each=k)]),
p = k * 0L:M,
x = as.double(unlist(lmat, recursive=FALSE, use.names=FALSE)))
}
l12 <- replicate(12, matrix(rpois(16, lambda = 6.4), 4,4), simplify=FALSE)
dim(T12 <- bdiag_m(l12))# 48 x 48
T12[1:20, 1:20]
```
r None
`sparseVector-class` Sparse Vector Classes
-------------------------------------------
### Description
Sparse Vector Classes: The virtual mother class `"sparseVector"` has the five actual daughter classes `"dsparseVector"`, `"isparseVector"`, `"lsparseVector"`, `"nsparseVector"`, and `"zsparseVector"`, where we've mainly implemented methods for the `d*`, `l*` and `n*` ones.
### Slots
`length`:
class `"numeric"` - the `[length](../../base/html/length)` of the sparse vector. Note that `"numeric"` can be considerably larger than the maximal `"integer"`, `[.Machine](../../base/html/zmachine)$integer.max`, on purpose.
`i`:
class `"numeric"` - the (1-based) indices of the non-zero entries. Must *not* be `NA` and strictly sorted increasingly.
Note that `"integer"` is “part of” `"numeric"`, and can (and often will) be used for non-huge sparseVectors.
`x`:
(for all but `"nsparseVector"`): the non-zero entries. This is of class `"numeric"` for class `"dsparseVector"`, `"logical"` for class `"lsparseVector"`, etc.
Note that `"nsparseVector"`s have no `x` slot. Further, mainly for ease of method definitions, we've defined the class union (see `[setClassUnion](../../methods/html/setclassunion)`) of all sparse vector classes which *have* an `x` slot, as class `"xsparseVector"`.
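The slot layout can be inspected directly (a minimal sketch, assuming the Matrix package is attached; the values are made up):

```r
library(Matrix)
(v <- sparseVector(x = c(2.5, -1), i = c(3, 10), length = 20))
v@length  # 20; the slot is "numeric", so it may exceed .Machine$integer.max
v@i       # 1-based indices of the non-zero entries: 3 and 10
v@x       # the non-zero entries themselves: 2.5 and -1
stopifnot(is(v, "dsparseVector"), sum(v != 0) == 2)
```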
### Methods
length
`signature(x = "sparseVector")`: simply extracts the `length` slot.
show
`signature(object = "sparseVector")`: The `[show](../../methods/html/show)` method for sparse vectors prints *“structural”* zeroes as `"."` using the non-exported `prSpVector` function which allows further customization such as replacing `"."` by `" "` (blank).
Note that `[options](../../base/html/options)(max.print)` will influence how many entries of large sparse vectors are printed at all.
as.vector
`signature(x = "sparseVector", mode = "character")` coerces sparse vectors to “regular”, i.e., atomic vectors. This is the same as `as(x, "vector")`.
as
..: see `coerce` below
coerce
`signature(from = "sparseVector", to = "sparseMatrix")`, and
coerce
`signature(from = "sparseMatrix", to = "sparseVector")`, etc: coercions to and from sparse matrices (`[sparseMatrix](sparsematrix-class)`) are provided and work analogously to standard **R**, i.e., a vector is coerced to a 1-column matrix.
dim<-
`signature(x = "sparseVector", value = "integer")` coerces a sparse vector to a sparse Matrix, i.e., an object inheriting from `[sparseMatrix](sparsematrix-class)`, of the appropriate dimension.
head
`signature(x = "sparseVector")`: as with **R**'s (package **utils**) `[head](../../utils/html/head)`, `head(x,n)` (for *n >= 1*) is equivalent to `x[1:n]`, but here can be much more efficient, see the example.
tail
`signature(x = "sparseVector")`: analogous to `[head](../../utils/html/head)`, see above.
toeplitz
`signature(x = "sparseVector")`: as `[toeplitz](../../stats/html/toeplitz)(x)`, produce the *n \times n* Toeplitz matrix from `x`, where `n = length(x)`.
rep
`signature(x = "sparseVector")` repeat `x`, with the same argument list `(x, times, length.out, each,
...)` as the default method for rep().
which
`signature(x = "nsparseVector")` and
which
`signature(x = "lsparseVector")` return the indices of the non-zero entries (which is trivial for sparse vectors).
Ops
`signature(e1 = "sparseVector", e2 = "*")`: define arithmetic, comparison, and logical operations, (see `[Ops](../../methods/html/s4groupgeneric)`).
Summary
`signature(x = "sparseVector")`: define all the `[Summary](../../methods/html/s4groupgeneric)` methods.
[
`signature(x = "atomicVector", i = ...)`: not only can you subset (aka *“index into”*) sparseVectors `x[i]` using sparseVectors `i`, but we also support efficient subsetting of traditional vectors `x` by logical sparse vectors (i.e., `i` of class `"nsparseVector"` or `"lsparseVector"`).
is.na, is.finite, is.infinite
`(x = "sparseVector")`, and
is.na, is.finite, is.infinite
`(x = "nsparseVector")`: return `[logical](../../base/html/logical)` or `"nsparseVector"` of the same length as `x`, indicating if/where `x` is `[NA](../../base/html/na)` (or `NaN`), finite or infinite, entirely analogously to the corresponding base **R** functions.
`c.sparseVector()` is an S3 method for all `"sparseVector"`s, but automatic dispatch only happens for the first argument, so it is useful also as regular **R** function, see the examples.
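The logical-sparse indexing of traditional vectors mentioned above can be sketched as follows (a minimal illustration, not part of the original examples):

```
library(Matrix)
x <- 10 * (1:5)  # a traditional atomic vector
i <- as(c(FALSE, TRUE, FALSE, TRUE, FALSE), "sparseVector")  # "lsparseVector"
x[i]  # same result as x[as.vector(i)], but exploiting the sparsity of 'i'
stopifnot(identical(x[i], x[as.vector(i)]))
```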
### See Also
`[sparseVector](sparsevector)()` for friendly construction of sparse vectors (apart from `as(*, "sparseVector")`).
### Examples
```
getClass("sparseVector")
getClass("dsparseVector")
getClass("xsparseVector")# those with an 'x' slot
sx <- c(0,0,3, 3.2, 0,0,0,-3:1,0,0,2,0,0,5,0,0)
(ss <- as(sx, "sparseVector"))
ix <- as.integer(round(sx))
(is <- as(ix, "sparseVector")) ## an "isparseVector" (!)
(ns <- sparseVector(i= c(7, 3, 2), length = 10)) # "nsparseVector"
## rep() works too:
(ri <- rep(is, length.out= 25))
## Using `dim<-` as in base R :
r <- ss
dim(r) <- c(4,5) # becomes a sparse Matrix:
r
## or coercion (as as.matrix() in base R):
as(ss, "Matrix")
stopifnot(all(ss == print(as(ss, "CsparseMatrix"))))
## currently has "non-structural" FALSE -- printing as ":"
(lis <- is & FALSE)
(nn <- is[is == 0]) # all "structural" FALSE
## NA-case
sN <- sx; sN[4] <- NA
(svN <- as(sN, "sparseVector"))
v <- as(c(0,0,3, 3.2, rep(0,9),-3,0,-1, rep(0,20),5,0),
"sparseVector")
v <- rep(rep(v, 50), 5000)
set.seed(1); v[sample(v@i, 1e6)] <- 0
str(v)
system.time(for(i in 1:4) hv <- head(v, 1e6))
## user system elapsed
## 0.033 0.000 0.032
system.time(for(i in 1:4) h2 <- v[1:1e6])
## user system elapsed
## 1.317 0.000 1.319
stopifnot(identical(hv, h2),
identical(is | FALSE, is != 0),
validObject(svN), validObject(lis), as.logical(is.na(svN[4])),
identical(is^2 > 0, is & TRUE),
all(!lis), !any(lis), length(nn@i) == 0, !any(nn), all(!nn),
sum(lis) == 0, !prod(lis), range(lis) == c(0,0))
## create and use the t(.) method:
t(x20 <- sparseVector(c(9,3:1), i=c(1:2,4,7), length=20))
(T20 <- toeplitz(x20))
stopifnot(is(T20, "symmetricMatrix"), is(T20, "sparseMatrix"),
identical(unname(as.matrix(T20)),
toeplitz(as.vector(x20))))
## c() method for "sparseVector" - also available as regular function
(c1 <- c(x20, 0,0,0, -10*x20))
(c2 <- c(ns, is, FALSE))
(c3 <- c(ns, !ns, TRUE, NA, FALSE))
(c4 <- c(ns, rev(ns)))
## here, c() would produce a list {not dispatching to c.sparseVector()}
(c5 <- c.sparseVector(0,0, x20))
## checking (consistency)
.v <- as.vector
.s <- function(v) as(v, "sparseVector")
stopifnot(
all.equal(c1, .s(c(.v(x20), 0,0,0, -10*.v(x20))), tol=0),
all.equal(c2, .s(c(.v(ns), .v(is), FALSE)), tol=0),
all.equal(c3, .s(c(.v(ns), !.v(ns), TRUE, NA, FALSE)), tol=0),
all.equal(c4, .s(c(.v(ns), rev(.v(ns)))), tol=0),
all.equal(c5, .s(c(0,0, .v(x20))), tol=0)
)
```
r None
`bandSparse` Construct Sparse Banded Matrix from (Sup-/Super-) Diagonals
-------------------------------------------------------------------------
### Description
Construct a sparse banded matrix by specifying its non-zero sup- and super-diagonals.
### Usage
```
bandSparse(n, m = n, k, diagonals, symmetric = FALSE,
repr = "C", giveCsparse = (repr == "C"))
```
### Arguments
| | |
| --- | --- |
| `n,m` | the matrix dimension *(n,m) = (nrow, ncol)*. |
| `k` | integer vector of “diagonal numbers”, with identical meaning as in `<band>(*, k)`, i.e., relative to the main diagonal, which is `k=0`. |
| `diagonals` | optional list of sub-/super- diagonals; if missing, the result will be a patter**n** matrix, i.e., inheriting from class `[nMatrix](nmatrix-class)`. `diagonals` can also be an *n' x d* matrix, where `d <- length(k)` and *n' >= min(n,m)*. In that case, the sub-/super- diagonals are taken from the columns of `diagonals`, where (typically) only the first few rows will be used for off-diagonals. |
| `symmetric` | logical; if true the result will be symmetric (inheriting from class `[symmetricMatrix](symmetricmatrix-class)`) and only the upper or lower triangle must be specified (via `k` and `diagonals`). |
| `repr` | `[character](../../base/html/character)` string, one of `"C"`, `"T"`, or `"R"`, specifying the sparse *repr*esentation to be used for the result, i.e., one from the super classes `[CsparseMatrix](csparsematrix-class)`, `[TsparseMatrix](tsparsematrix-class)`, or `[RsparseMatrix](rsparsematrix-class)`. |
| `giveCsparse` | (**deprecated**, replaced with `repr`): logical indicating if the result should be a `[CsparseMatrix](csparsematrix-class)` or a `[TsparseMatrix](tsparsematrix-class)`, where the default was `TRUE`, and now is determined from `repr`; very often Csparse matrices are more efficient subsequently, but not always. |
### Value
a sparse matrix of dimension *n x m* with diagonal “bands” as specified; its `[class](../../base/html/class)` is `[CsparseMatrix](csparsematrix-class)` by default, or as chosen via `repr`.
### See Also
`<band>`, for *extraction* of matrix bands; `<bdiag>`, `[diag](../../base/html/diag)`, `[sparseMatrix](sparsematrix)`, `[Matrix](matrix)`.
### Examples
```
diags <- list(1:30, 10*(1:20), 100*(1:20))
s1 <- bandSparse(13, k = -c(0:2, 6), diag = c(diags, diags[2]), symm=TRUE)
s1
s2 <- bandSparse(13, k = c(0:2, 6), diag = c(diags, diags[2]), symm=TRUE)
stopifnot(identical(s1, t(s2)), is(s1,"dsCMatrix"))
## a pattern Matrix of *full* (sub-)diagonals:
bk <- c(0:4, 7,9)
(s3 <- bandSparse(30, k = bk, symm = TRUE))
## If you want a pattern matrix, but with "sparse"-diagonals,
## you currently need to go via logical sparse:
lLis <- lapply(list(rpois(20, 2), rpois(20,1), rpois(20,3))[c(1:3,2:3,3:2)],
as.logical)
(s4 <- bandSparse(20, k = bk, symm = TRUE, diag = lLis))
(s4. <- as(drop0(s4), "nsparseMatrix"))
n <- 1e4
bk <- c(0:5, 7,11)
bMat <- matrix(1:8, n, 8, byrow=TRUE)
bLis <- as.data.frame(bMat)
B <- bandSparse(n, k = bk, diag = bLis)
Bs <- bandSparse(n, k = bk, diag = bLis, symmetric=TRUE)
B [1:15, 1:30]
Bs[1:15, 1:30]
## can use a list *or* a matrix for specifying the diagonals:
stopifnot(identical(B, bandSparse(n, k = bk, diag = bMat)),
identical(Bs, bandSparse(n, k = bk, diag = bMat, symmetric=TRUE))
, inherits(B, "dtCMatrix") # triangular!
)
```
r None
`number-class` Class "number" of Possibly Complex Numbers
----------------------------------------------------------
### Description
The class `"number"` is a virtual class, currently used for vectors of eigenvalues which can be `"numeric"` or `"complex"`.
It is a simple class union (`[setClassUnion](../../methods/html/setclassunion)`) of `"numeric"` and `"complex"`.
### Objects from the Class
Since it is a virtual Class, no objects may be created from it.
### Examples
```
showClass("number")
stopifnot( is(1i, "number"), is(pi, "number"), is(1:3, "number") )
```
r None
`rep2abI` Replicate Vectors into 'abIndex' Result
--------------------------------------------------
### Description
`rep2abI(x, times)` conceptually computes `[rep.int](../../base/html/rep)(x, times)` but with an `[abIndex](abindex-class)` class result.
### Usage
```
rep2abI(x, times)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector |
| `times` | integer (valued) scalar: the number of repetitions |
### Value
a vector of `[class](../../base/html/class)` `[abIndex](abindex-class)`
### See Also
`[rep.int](../../base/html/rep)()`, the base function; `[abIseq](abiseq)`, `[abIndex](abindex-class)`.
### Examples
```
(ab <- rep2abI(2:7, 4))
stopifnot(identical(as(ab, "numeric"),
rep(2:7, 4)))
```
r None
`externalFormats` Read and write external matrix formats
---------------------------------------------------------
### Description
Read matrices stored in the Harwell-Boeing or MatrixMarket formats or write `[sparseMatrix](sparsematrix-class)` objects to one of these formats.
### Usage
```
readHB(file)
readMM(file)
writeMM(obj, file, ...)
```
### Arguments
| | |
| --- | --- |
| `obj` | a real sparse matrix |
| `file` | for `writeMM` - the name of the file to be written. For `readHB` and `readMM` the name of the file to read, as a character scalar. The names of files storing matrices in the Harwell-Boeing format usually end in `".rua"` or `".rsa"`. Those storing matrices in the MatrixMarket format usually end in `".mtx"`. Alternatively, `readHB` and `readMM` accept connection objects. |
| `...` | optional additional arguments. Currently none are used in any methods. |
### Value
The `readHB` and `readMM` functions return an object that inherits from the `"[Matrix](matrix-class)"` class. Methods for the `writeMM` generic function usually return `[NULL](../../base/html/null)` and, as a side effect, the matrix `obj` is written to `file` in the MatrixMarket format (writeMM).
### Note
The Harwell-Boeing format is older and less flexible than the MatrixMarket format. The function `writeHB` was deprecated and has now been removed. Please use `writeMM` instead.
A very simple way to export small sparse matrices `S`, is to use `summary(S)` which returns a `[data.frame](../../base/html/data.frame)` with columns `i`, `j`, and possibly `x`, see `summary` in `[sparseMatrix-class](sparsematrix-class)`, and an example below.
### References
<https://math.nist.gov/MatrixMarket/>
<https://sparse.tamu.edu/>
### Examples
```
str(pores <- readMM(system.file("external/pores_1.mtx",
package = "Matrix")))
str(utm <- readHB(system.file("external/utm300.rua",
package = "Matrix")))
str(lundA <- readMM(system.file("external/lund_a.mtx",
package = "Matrix")))
str(lundA <- readHB(system.file("external/lund_a.rsa",
package = "Matrix")))
str(jgl009 <- ## https://math.nist.gov/MatrixMarket/data/Harwell-Boeing/counterx/counterx.html
readMM(system.file("external/jgl009.mtx", package = "Matrix")))
## Not run:
## NOTE: The following examples take quite some time
## ---- even on a fast internet connection:
if(FALSE) # the URL has been corrected, but we need an un-tar step!
str(sm <-
readHB(gzcon(url("https://www.cise.ufl.edu/research/sparse/RB/Boeing/msc00726.tar.gz"))))
## End(Not run)
data(KNex)
## Store as MatrixMarket (".mtx") file, here inside temporary dir./folder:
(MMfile <- file.path(tempdir(), "mmMM.mtx"))
writeMM(KNex$mm, file=MMfile)
file.info(MMfile)[,c("size", "ctime")] # (some confirmation of the file's existence)
## very simple export - in triplet format - to text file:
data(CAex)
s.CA <- summary(CAex)
s.CA # shows (i, j, x) [columns of a data frame]
message("writing to ", outf <- tempfile())
write.table(s.CA, file = outf, row.names=FALSE)
## and read it back -- showing off sparseMatrix():
str(dd <- read.table(outf, header=TRUE))
## has columns (i, j, x) -> we can use via do.call() as arguments to sparseMatrix():
mm <- do.call(sparseMatrix, dd)
stopifnot(all.equal(mm, CAex, tolerance=1e-15))
```
r None
`KNex` Koenker-Ng Example Sparse Model Matrix and Response Vector
------------------------------------------------------------------
### Description
A model matrix `mm` and corresponding response vector `y` used in an example by Koenker and Ng. The matrix `mm` is a sparse matrix with 1850 rows and 712 columns but only 8758 non-zero entries. It is a `"dgCMatrix"` object. The vector `y` is just `[numeric](../../base/html/numeric)` of length 1850.
### Usage
```
data(KNex)
```
### References
Roger Koenker and Pin Ng (2003). SparseM: A sparse matrix package for R; *J. of Statistical Software*, **8** (6), doi: [10.18637/jss.v008.i06](https://doi.org/10.18637/jss.v008.i06)
### Examples
```
data(KNex)
class(KNex$mm)
dim(KNex$mm)
image(KNex$mm)
str(KNex)
system.time( # a fraction of a second
sparse.sol <- with(KNex, solve(crossprod(mm), crossprod(mm, y))))
head(round(sparse.sol,3))
## Compare with QR-based solution ("more accurate, but slightly slower"):
system.time(
sp.sol2 <- with(KNex, qr.coef(qr(mm), y) ))
all.equal(sparse.sol, sp.sol2, tolerance = 1e-13) # TRUE
```
r None
`dgeMatrix-class` Class "dgeMatrix" of Dense Numeric (S4 Class) Matrices
-------------------------------------------------------------------------
### Description
A general numeric dense matrix in the S4 Matrix representation. `dgeMatrix` is the *“standard”* class for dense numeric matrices in the Matrix package.
### Objects from the Class
Objects can be created by calls of the form `new("dgeMatrix", ...)` or, more commonly, by coercion from the `Matrix` class (see [Matrix](matrix-class)) or by `[Matrix](matrix)(..)`.
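For instance (a minimal sketch; the printed class assumes a dense numeric input):

```
library(Matrix)
(M <- Matrix(1:6 + 0, nrow = 2, ncol = 3))  # a 2 x 3 "dgeMatrix"
class(M)
M@x    # the numeric values, in column-major order
M@Dim  # 2 3
```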
### Slots
`x`:
Object of class `"numeric"` - the numeric values contained in the matrix, in column-major order.
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
`Dimnames`:
a list of length two - inherited from class `[Matrix](matrix-class)`.
`factors`:
Object of class `"list"` - a list of factorizations of the matrix.
### Methods
There are group methods (see, e.g., `[Arith](../../methods/html/s4groupgeneric)`)
Arith
`signature(e1 = "dgeMatrix", e2 = "dgeMatrix")`: ...
Arith
`signature(e1 = "dgeMatrix", e2 = "numeric")`: ...
Arith
`signature(e1 = "numeric", e2 = "dgeMatrix")`: ...
Math
`signature(x = "dgeMatrix")`: ...
Math2
`signature(x = "dgeMatrix", digits = "numeric")`: ...
matrix products `[%\*%](matrix-products)`, `[crossprod](matrix-products)()` and `tcrossprod()`, several `[solve](solve-methods)` methods, and other matrix methods are available:
Schur
`signature(x = "dgeMatrix", vectors = "logical")`: ...
Schur
`signature(x = "dgeMatrix", vectors = "missing")`: ...
chol
`signature(x = "dgeMatrix")`: see `<chol>`.
coerce
`signature(from = "dgeMatrix", to = "lgeMatrix")`: ...
coerce
`signature(from = "dgeMatrix", to = "matrix")`: ...
coerce
`signature(from = "matrix", to = "dgeMatrix")`: ...
colMeans
`signature(x = "dgeMatrix")`: columnwise means (averages)
colSums
`signature(x = "dgeMatrix")`: columnwise sums
diag
`signature(x = "dgeMatrix")`: ...
dim
`signature(x = "dgeMatrix")`: ...
dimnames
`signature(x = "dgeMatrix")`: ...
eigen
`signature(x = "dgeMatrix", only.values= "logical")`: ...
eigen
`signature(x = "dgeMatrix", only.values= "missing")`: ...
norm
`signature(x = "dgeMatrix", type = "character")`: ...
norm
`signature(x = "dgeMatrix", type = "missing")`: ...
rcond
`signature(x = "dgeMatrix", norm = "character")` or `norm = "missing"`: the reciprocal condition number, `<rcond>()`.
rowMeans
`signature(x = "dgeMatrix")`: rowwise means (averages)
rowSums
`signature(x = "dgeMatrix")`: rowwise sums
t
`signature(x = "dgeMatrix")`: matrix transpose
### See Also
Classes `[Matrix](matrix-class)`, `[dtrMatrix](dtrmatrix-class)`, and `[dsyMatrix](dsymatrix-class)`.
r None
`dtrMatrix-class` Triangular, dense, numeric matrices
------------------------------------------------------
### Description
The `"dtrMatrix"` class is the class of triangular, dense, numeric matrices in nonpacked storage. The `"dtpMatrix"` class is the same except in packed storage.
### Objects from the Class
Objects can be created by calls of the form `new("dtrMatrix", ...)`.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"`; see `[triangularMatrix](triangularmatrix-class)`.
`x`:
Object of class `"numeric"`. The numeric values that constitute the matrix, stored in column-major order.
`Dim`:
Object of class `"integer"`. The dimensions of the matrix which must be a two-element vector of non-negative integers.
### Extends
Class `"ddenseMatrix"`, directly. Class `"triangularMatrix"`, directly. Class `"Matrix"` and others, by class `"ddenseMatrix"`.
### Methods
Among others (such as matrix products, e.g. `?[crossprod-methods](matrix-products)`),
coerce
`signature(from = "dgeMatrix", to = "dtrMatrix")`
coerce
`signature(from = "dtrMatrix", to = "matrix")`
coerce
`signature(from = "dtrMatrix", to = "ltrMatrix")`
coerce
`signature(from = "matrix", to = "dtrMatrix")`
norm
`signature(x = "dtrMatrix", type = "character")`
rcond
`signature(x = "dtrMatrix", norm = "character")`
solve
`signature(a = "dtrMatrix", b = "....")`
efficiently uses a “forwardsolve” or `backsolve` for a lower or upper triangular matrix, respectively; see also `<solve-methods>`.
+, -, \*, ..., ==, >=, ...
all the `[Ops](../../methods/html/s4groupgeneric)` group methods are available. When applied to two triangular matrices, these return a triangular matrix when easily possible.
### See Also
Classes `[ddenseMatrix](ddensematrix-class)`, `[dtpMatrix](dtpmatrix-class)`, `[triangularMatrix](triangularmatrix-class)`
### Examples
```
(m <- rbind(2:3, 0:-1))
(M <- as(m, "dgeMatrix"))
(T <- as(M, "dtrMatrix")) ## upper triangular is default
(T2 <- as(t(M), "dtrMatrix"))
stopifnot(T@uplo == "U", T2@uplo == "L", identical(T2, t(T)))
```
r None
`lsparseMatrix-classes` Sparse logical matrices
------------------------------------------------
### Description
The `lsparseMatrix` class is a virtual class of sparse matrices with `TRUE`/`FALSE` or `NA` entries. Only the positions of the elements that are `TRUE` are stored.
These can be stored in the “triplet” form (class `[TsparseMatrix](tsparsematrix-class)`, subclasses `lgTMatrix`, `lsTMatrix`, and `ltTMatrix`) or in compressed column-oriented form (class `[CsparseMatrix](csparsematrix-class)`, subclasses `lgCMatrix`, `lsCMatrix`, and `ltCMatrix`) or–*rarely*–in compressed row-oriented form (class `[RsparseMatrix](rsparsematrix-class)`, subclasses `lgRMatrix`, `lsRMatrix`, and `ltRMatrix`). The second letter in the name of these non-virtual classes indicates `g`eneral, `s`ymmetric, or `t`riangular.
### Details
Note that triplet stored (`[TsparseMatrix](tsparsematrix-class)`) matrices such as `lgTMatrix` may contain duplicated pairs of indices *(i,j)* as for the corresponding numeric class `[dgTMatrix](dgtmatrix-class)` where for such pairs, the corresponding `x` slot entries are added. For logical matrices, the `x` entries corresponding to duplicated index pairs *(i,j)* are “added” as well if the addition is defined as logical *or*, i.e., “`TRUE + TRUE |-> TRUE`” and “`TRUE + FALSE |-> TRUE`”. Note the use of `[uniqTsparse](uniqtsparse)()` for getting an internally unique representation without duplicated *(i,j)* entries.
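A small low-level sketch of this behavior (the direct `new()` construction is for illustration only; user code would normally not build triplet matrices this way):

```
library(Matrix)
## two (1,1) entries: under logical "or" they collapse to a single TRUE
T <- new("lgTMatrix", i = c(0L, 0L, 1L), j = c(0L, 0L, 1L),
         x = c(TRUE, TRUE, TRUE), Dim = c(2L, 2L))
uniqTsparse(T)  # internally unique form: one (1,1) entry, still TRUE
```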
### Objects from the Class
Objects can be created by calls of the form `new("lgCMatrix",
...)` and so on. More frequently objects are created by coercion of a numeric sparse matrix to the logical form, e.g. in an expression `x != 0`.
The logical form is also used in the symbolic analysis phase of an algorithm involving sparse matrices. Such algorithms often involve two phases: a symbolic phase wherein the positions of the non-zeros in the result are determined and a numeric phase wherein the actual results are calculated. During the symbolic phase only the positions of the non-zero elements in any operands are of interest, hence any numeric sparse matrices can be treated as logical sparse matrices.
### Slots
`x`:
Object of class `"logical"`, i.e., either `TRUE`, `[NA](../../base/html/na)`, or `FALSE`.
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular. Present in the triangular and symmetric classes but not in the general class.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"` for non-unit. The implicit diagonal elements are not explicitly stored when `diag` is `"U"`. Present in the triangular classes only.
`p`:
Object of class `"integer"` of pointers, one for each column (row), to the initial (zero-based) index of elements in the column. Present in compressed column-oriented and compressed row-oriented forms only.
`i`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the row numbers for each TRUE element in the matrix. All other elements are FALSE. Present in triplet and compressed column-oriented forms only.
`j`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the column numbers for each TRUE element in the matrix. All other elements are FALSE. Present in triplet and compressed row-oriented forms only.
`Dim`:
Object of class `"integer"` - the dimensions of the matrix.
### Methods
coerce
`signature(from = "dgCMatrix", to = "lgCMatrix")`
t
`signature(x = "lgCMatrix")`: returns the transpose of `x`
which
`signature(x = "lsparseMatrix")`, semantically equivalent to base function `[which](../../base/html/which)(x, arr.ind)`; for details, see the `[lMatrix](dmatrix-class)` class documentation.
### See Also
the class `[dgCMatrix](dgcmatrix-class)` and `[dgTMatrix](dgtmatrix-class)`
### Examples
```
(m <- Matrix(c(0,0,2:0), 3,5, dimnames=list(LETTERS[1:3],NULL)))
(lm <- (m > 1)) # lgC
!lm # no longer sparse
stopifnot(is(lm,"lsparseMatrix"),
identical(!lm, m <= 1))
data(KNex)
str(mmG.1 <- (KNex $ mm) > 0.1)# "lgC..."
table(mmG.1@x)# however with many ``non-structural zeros''
## from logical to nz_pattern -- okay when there are no NA's :
nmG.1 <- as(mmG.1, "nMatrix") # <<< has "TRUE" also where mmG.1 had FALSE
## from logical to "double"
dmG.1 <- as(mmG.1, "dMatrix") # has '0' and back:
lmG.1 <- as(dmG.1, "lMatrix") # has no extra FALSE, i.e. drop0() included
stopifnot(identical(nmG.1, as((KNex $ mm) != 0,"nMatrix")),
validObject(lmG.1), all(lmG.1@x),
# same "logical" but lmG.1 has no 'FALSE' in x slot:
all(lmG.1 == mmG.1))
class(xnx <- crossprod(nmG.1))# "nsC.."
class(xlx <- crossprod(mmG.1))# "dsC.." : numeric
is0 <- (xlx == 0)
mean(as.vector(is0))# 99.3% zeros: quite sparse, but
table(xlx@x == 0)# more than half of the entries are (non-structural!) 0
stopifnot(isSymmetric(xlx), isSymmetric(xnx),
## compare xnx and xlx : have the *same* non-structural 0s :
sapply(slotNames(xnx),
function(n) identical(slot(xnx, n), slot(xlx, n))))
```
r None
`Cholesky` Cholesky Decomposition of a Sparse Matrix
-----------------------------------------------------
### Description
Computes the Cholesky (aka “Choleski”) decomposition of a sparse, symmetric, positive-definite matrix. However, typically `<chol>()` should rather be used unless you are interested in the different kinds of sparse Cholesky decompositions.
### Usage
```
Cholesky(A, perm = TRUE, LDL = !super, super = FALSE, Imult = 0, ...)
```
### Arguments
| | |
| --- | --- |
| `A` | sparse symmetric matrix. No missing values or IEEE special values are allowed. |
| `perm` | logical scalar indicating if a fill-reducing permutation should be computed and applied to the rows and columns of `A`. Default is `TRUE`. |
| `LDL` | logical scalar indicating if the decomposition should be computed as LDL' where `L` is a unit lower triangular matrix. The alternative is LL' where `L` is lower triangular with arbitrary diagonal elements. Default is `TRUE`. Setting it to `[NA](../../base/html/na)` leaves the choice to a CHOLMOD-internal heuristic. |
| `super` | logical scalar indicating if a supernodal decomposition should be created. The alternative is a simplicial decomposition. Default is `FALSE`. Setting it to `[NA](../../base/html/na)` leaves the choice to a CHOLMOD-internal heuristic. |
| `Imult` | numeric scalar which defaults to zero. The matrix that is decomposed is *A+m\*I* where *m* is the value of `Imult` and `I` is the identity matrix of order `ncol(A)`. |
| `...` | further arguments passed to or from other methods. |
### Details
This is a generic function with special methods for different types of matrices. Use `[showMethods](../../methods/html/showmethods)("Cholesky")` to list all the methods for the `[Cholesky](cholesky)` generic.
The method for class `[dsCMatrix](dscmatrix-class)` of sparse matrices — the only one available currently — is based on functions from the CHOLMOD library.
Again: If you just want the Cholesky decomposition of a matrix in a straightforward way, you should probably rather use `<chol>(.)`.
Note that if `perm=TRUE` (default), the decomposition is
*A = P' L~ D L~' P = P' L L' P,*
where *L* can be extracted by `as(*, "Matrix")`, *P* by `as(*, "pMatrix")` and both by `<expand>(*)`, see the class `[CHMfactor](chmfactor-class)` documentation.
Note that consequently, you cannot easily get the “traditional” cholesky factor *R*, from this decomposition, as
*R'R = A = P'LL'P = P' R~' R~ P = (R~ P)' (R~ P),*
but *R~ P* is *not* triangular even though *R~* is.
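This relation can be checked numerically on the small positive-definite matrix used in the examples below (a sketch, assuming the `<expand>()` result has components `P` and `L`):

```
library(Matrix)
A <- Matrix(toeplitz(c(10, 0, 1, 0, 3)), sparse = TRUE)  # small s.p.d. matrix
ch <- Cholesky(A, LDL = FALSE)  # plain LL' factorization
e <- expand(ch)  # list with 'P' (a pMatrix) and 'L'
## verify  A = P' L L' P :
stopifnot(all.equal(as(t(e$P) %*% tcrossprod(e$L) %*% e$P, "matrix"),
                    as(A, "matrix")))
```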
### Value
an object inheriting from either `"[CHMsuper](chmfactor-class)"`, or `"[CHMsimpl](chmfactor-class)"`, depending on the `super` argument; both classes extend `"[CHMfactor](chmfactor-class)"` which extends `"[MatrixFactorization](matrixfactorization-class)"`.
In other words, the result of `Cholesky()` is *not* a matrix, and if you want one, you should probably rather use `<chol>()`, see Details.
### References
Yanqing Chen, Timothy A. Davis, William W. Hager, and Sivasankaran Rajamanickam (2008) Algorithm 887: CHOLMOD, Supernodal Sparse Cholesky Factorization and Update/Downdate. *ACM Trans. Math. Softw.* **35**, 3, Article 22, 14 pages. doi: [10.1145/1391989.1391995](https://doi.org/10.1145/1391989.1391995)
Timothy A. Davis (2006) *Direct Methods for Sparse Linear Systems*, SIAM Series “Fundamentals of Algorithms”.
### See Also
Class definitions `[CHMfactor](chmfactor-class)` and `[dsCMatrix](dscmatrix-class)` and function `<expand>`. Note the extra `[solve](solve-methods)(*, system = . )` options in `[CHMfactor](chmfactor-class)`.
Note that `<chol>()` returns matrices (inheriting from `"[Matrix](matrix-class)"`) whereas `Cholesky()` returns a `"[CHMfactor](chmfactor-class)"` object, and hence a typical user will rather use `chol(A)`.
### Examples
```
data(KNex)
mtm <- with(KNex, crossprod(mm))
str(mtm@factors) # empty list()
(C1 <- Cholesky(mtm)) # uses show(<MatrixFactorization>)
str(mtm@factors) # 'sPDCholesky' (simpl)
(Cm <- Cholesky(mtm, super = TRUE))
c(C1 = isLDL(C1), Cm = isLDL(Cm))
str(mtm@factors) # 'sPDCholesky' *and* 'SPdCholesky'
str(cm1 <- as(C1, "sparseMatrix"))
str(cmat <- as(Cm, "sparseMatrix"))# hmm: super is *less* sparse here
cm1[1:20, 1:20]
b <- matrix(c(rep(0, 711), 1), nc = 1)
## solve(Cm, b) by default solves Ax = b, where A = Cm'Cm (= mtm)!
## hence, the identical() check *should* work, but fails on some GOTOblas:
x <- solve(Cm, b)
stopifnot(identical(x, solve(Cm, b, system = "A")),
all.equal(x, solve(mtm, b)))
Cn <- Cholesky(mtm, perm = FALSE)# no permutation -- much worse:
sizes <- c(simple = object.size(C1),
super = object.size(Cm),
noPerm = object.size(Cn))
## simple is 100, super= 137, noPerm= 812 :
noquote(cbind(format(100 * sizes / sizes[1], digits=4)))
## Visualize the sparseness:
dq <- function(ch) paste('"',ch,'"', sep="") ## dQuote(<UTF-8>) gives bad plots
image(mtm, main=paste("crossprod(mm) : Sparse", dq(class(mtm))))
image(cm1, main= paste("as(Cholesky(crossprod(mm)),\"sparseMatrix\"):",
dq(class(cm1))))
## Smaller example, with same matrix as in help(chol) :
(mm <- Matrix(toeplitz(c(10, 0, 1, 0, 3)), sparse = TRUE)) # 5 x 5
(opts <- expand.grid(perm = c(TRUE,FALSE), LDL = c(TRUE,FALSE), super = c(FALSE,TRUE)))
rr <- lapply(seq_len(nrow(opts)), function(i)
do.call(Cholesky, c(list(A = mm), opts[i,])))
nn <- do.call(expand.grid, c(attr(opts, "out.attr")$dimnames,
stringsAsFactors=FALSE,KEEP.OUT.ATTRS=FALSE))
names(rr) <- apply(nn, 1, function(r)
paste(sub("(=.).*","\\1", r), collapse=","))
str(rr, max=1)
str(re <- lapply(rr, expand), max=2) ## each has a 'P' and a 'L' matrix
R0 <- chol(mm, pivot=FALSE)
R1 <- chol(mm, pivot=TRUE )
stopifnot(all.equal(t(R1), re[[1]]$L),
all.equal(t(R0), re[[2]]$L),
identical(as(1:5, "pMatrix"), re[[2]]$P), # no pivoting
TRUE)
# Version of the underlying SuiteSparse library by Tim Davis :
.SuiteSparse_version()
```
r None
`nnzero` The Number of Non-Zero Values of a Matrix
---------------------------------------------------
### Description
Returns the number of non-zero values of a numeric-like **R** object, and in particular an object `x` inheriting from class `[Matrix](matrix-class)`.
### Usage
```
nnzero(x, na.counted = NA)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object, typically inheriting from class `[Matrix](matrix-class)` or `[numeric](../../base/html/numeric)`. |
| `na.counted` | a `[logical](../../base/html/logical)` describing how `[NA](../../base/html/na)`s should be counted. There are three possible settings: `TRUE`: `NA`s *are* counted as non-zero (since “they are not zero”); `NA` (the default): the result will be `NA` if there are `NA`'s in `x` (since “NA's are not known, i.e., *may be* zero”); `FALSE`: `NA`s are *omitted* from `x` before the non-zero entries are counted. For sparse matrices, you may often want to use `na.counted = TRUE`. |
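A quick sketch of the three settings on a plain numeric vector:

```
library(Matrix)
x <- c(0, 1, NA, 2)
nnzero(x, na.counted = TRUE)   # NAs counted as non-zero : 3
nnzero(x, na.counted = NA)     # default : NA
nnzero(x, na.counted = FALSE)  # NAs omitted : 2
```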
### Value
the number of non zero entries in `x` (typically `[integer](../../base/html/integer)`).
Note that for a *symmetric* sparse matrix `S` (i.e., inheriting from class `[symmetricMatrix](symmetricmatrix-class)`), `nnzero(S)` is typically *twice* the `length(S@x)`.
### Methods
`signature(x = "ANY")`
the default method for non-`[Matrix](matrix-class)` class objects, simply counts the non-zero values in `x`, counting `NA`'s depending on the `na.counted` argument, see above.
`signature(x = "denseMatrix")`
conceptually the same as for traditional `[matrix](../../base/html/matrix)` objects; care has to be taken for `"[symmetricMatrix](symmetricmatrix-class)"` objects.
`signature(x = "diagonalMatrix")`, and `signature(x = "indMatrix")`
fast simple methods for these special `"sparseMatrix"` classes.
`signature(x = "sparseMatrix")`
typically, the most interesting method, also carefully taking `"[symmetricMatrix](symmetricmatrix-class)"` objects into account.
### See Also
The `[Matrix](matrix-class)` class also has a `[length](../../base/html/length)` method; typically, `length(M)` is much larger than `nnzero(M)` for a sparse matrix M, and the latter is a better indication of the *size* of `M`.
`<drop0>`, `[zapsmall](../../base/html/zapsmall)`.
### Examples
```
m <- Matrix(0+1:28, nrow = 4)
m[-3,c(2,4:5,7)] <- m[ 3, 1:4] <- m[1:3, 6] <- 0
(mT <- as(m, "dgTMatrix"))
nnzero(mT)
(S <- crossprod(mT))
nnzero(S)
str(S) # slots are smaller than nnzero()
stopifnot(nnzero(S) == sum(as.matrix(S) != 0))# failed earlier
data(KNex)
M <- KNex$mm
class(M)
dim(M)
length(M); stopifnot(length(M) == prod(dim(M)))
nnzero(M) # more relevant than length
## the above are also visible from
str(M)
```
`lsyMatrix-class` Symmetric Dense Logical Matrices
---------------------------------------------------
### Description
The `"lsyMatrix"` class is the class of symmetric, dense logical matrices in non-packed storage and `"lspMatrix"` is the class of these in packed storage. In the packed form, only the upper triangle or the lower triangle is stored.
### Objects from the Class
Objects can be created by calls of the form `new("lsyMatrix", ...)`.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular.
`x`:
Object of class `"logical"`. The logical values that constitute the matrix, stored in column-major order.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)` class.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
Both extend classes `"[ldenseMatrix](ldensematrix-class)"` and `"[symmetricMatrix](symmetricmatrix-class)"`, directly; further, class `"Matrix"` and others, *in*directly. Use `[showClass](../../methods/html/rclassutils)("lsyMatrix")`, e.g., for details.
### Methods
Currently, mainly `[t](../../base/html/t)()` and coercion methods (for `[as](../../methods/html/as)(.)`); use, e.g., `[showMethods](../../methods/html/showmethods)(class="lsyMatrix")` for details.
### See Also
`[lgeMatrix](lgematrix-class)`, `[Matrix](matrix-class)`, `[t](../../base/html/t)`
### Examples
```
(M2 <- Matrix(c(TRUE, NA,FALSE,FALSE), 2,2)) # logical dense (ltr)
str(M2)
(sM <- M2 | t(M2)) # "lge"
as(sM, "lsyMatrix")
str(sM <- as(sM, "lspMatrix")) # packed symmetric
```
`LU-class` LU (dense) Matrix Decompositions
--------------------------------------------
### Description
The `"LU"` class is the *virtual* class of LU decompositions of real matrices. `"denseLU"` is the class of LU decompositions of dense real matrices.
### Details
The decomposition is of the form
*A = P L U*
where typically all matrices are of size *n by n*, and the matrix *P* is a permutation matrix, *L* is lower triangular and *U* is upper triangular (both of class `[dtrMatrix](dtrmatrix-class)`).
Note that the *dense* decomposition is also implemented for a *m by n* matrix *A*, when *m != n*.
If *m < n* (“wide case”), *U* is *m by n*, and hence not triangular.
If *m > n* (“long case”), *L* is *m by n*, and hence not triangular.
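As a brief sketch (not part of the original page) of the non-square “wide” case, where *U* is not triangular:

```
## sketch: LU of a 2 x 4 ("wide") dense matrix
set.seed(7)
A <- Matrix(rnorm(8), 2, 4)
eA <- expand(lu(A))
dim(eA$L) # 2 2 : L is square, unit lower triangular
dim(eA$U) # 2 4 : U is wide, hence not triangular
```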
### Objects from the Class
Objects can be created by calls of the form `new("denseLU", ...)`. More commonly the objects are created explicitly from calls of the form `<lu>(mm)` where `mm` is an object that inherits from the `"dgeMatrix"` class or as a side-effect of other functions applied to `"dgeMatrix"` objects.
### Extends
`"LU"` directly extends the virtual class `"[MatrixFactorization](matrixfactorization-class)"`.
`"denseLU"` directly extends `"LU"`.
### Slots
`x`:
object of class `"numeric"`. The `"L"` (unit lower triangular) and `"U"` (upper triangular) factors of the original matrix. These are stored in a packed format described in the Lapack manual, and can be retrieved by the `expand()` method, see below.
`perm`:
Object of class `"integer"` - a vector of length `min(Dim)` that describes the permutation applied to the rows of the original matrix. The contents of this vector are described in the Lapack manual.
`Dim`:
the dimension of the original matrix; inherited from class `[MatrixFactorization](matrixfactorization-class)` .
### Methods
expand
`signature(x = "denseLU")`: Produce the `"L"` and `"U"` (and `"P"`) factors as a named list of matrices, see also the example below.
solve
`signature(a = "denseLU", b = "missing")`: Compute the inverse of A, *A^(-1)*, `solve(A)` using the LU decomposition, see also `<solve-methods>`.
### See Also
class `[sparseLU](sparselu-class)` for LU decompositions of *sparse* matrices; further, class `[dgeMatrix](dgematrix-class)` and functions `<lu>`, `<expand>`.
### Examples
```
set.seed(1)
mm <- Matrix(round(rnorm(9),2), nrow = 3)
mm
str(lum <- lu(mm))
elu <- expand(lum)
elu # three components: "L", "U", and "P", the permutation
elu$L %*% elu$U
(m2 <- with(elu, P %*% L %*% U)) # the same as 'mm'
stopifnot(all.equal(as(mm, "matrix"),
as(m2, "matrix")))
```
`spMatrix` Sparse Matrix Constructor From Triplet
--------------------------------------------------
### Description
User friendly construction of a sparse matrix (inheriting from class `[TsparseMatrix](tsparsematrix-class)`) from the triplet representation.
This is much less flexible than `[sparseMatrix](sparsematrix)()` and hence somewhat *deprecated*.
### Usage
```
spMatrix(nrow, ncol, i = integer(), j = integer(), x = numeric())
```
### Arguments
| | |
| --- | --- |
| `nrow, ncol` | integers specifying the desired number of rows and columns. |
| `i,j` | integer vectors of the same length specifying the locations of the non-zero (or non-`TRUE`) entries of the matrix. |
| `x` | atomic vector of the same length as `i` and `j`, specifying the values of the non-zero entries. |
### Value
A sparse matrix in triplet form, as an **R** object inheriting from both `[TsparseMatrix](tsparsematrix-class)` and `[generalMatrix](generalmatrix-class)`.
The matrix *M* will have `M[i[k], j[k]] == x[k]`, for *k = 1,2,…, n*, where `n = length(i)` and `M[ i', j' ] == 0` for all other pairs *(i',j')*.
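As a small sketch (not from the original page), the defining property above can be checked directly:

```
## sketch: M[i[k], j[k]] == x[k], zero elsewhere
M <- spMatrix(3, 4, i = c(1, 3), j = c(2, 4), x = c(10, 20))
M[1, 2]   # 10
M[3, 4]   # 20
nnzero(M) # 2 : all other entries are zero
```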
### See Also
`[Matrix](matrix)(*, sparse=TRUE)` for the more usual constructor of such matrices. Then, `[sparseMatrix](sparsematrix)` is more general and flexible than `spMatrix()` and by default returns a `[CsparseMatrix](csparsematrix-class)` which is often slightly more desirable. Further, `<bdiag>` and `[Diagonal](diagonal)` for (block-)diagonal matrix constructors.
Consider `[TsparseMatrix](tsparsematrix-class)` and similar class definition help files.
### Examples
```
## simple example
A <- spMatrix(10,20, i = c(1,3:8),
j = c(2,9,6:10),
x = 7 * (1:7))
A # a "dgTMatrix"
summary(A)
str(A) # note that *internally* 0-based indices (i,j) are used
L <- spMatrix(9, 30, i = rep(1:9, 3), 1:27,
(1:27) %% 4 != 1)
L # an "lgTMatrix"
## A simplified predecessor of Matrix' rsparsematrix() function :
rSpMatrix <- function(nrow, ncol, nnz,
rand.x = function(n) round(rnorm(nnz), 2))
{
## Purpose: random sparse matrix
## --------------------------------------------------------------
## Arguments: (nrow,ncol): dimension
## nnz : number of non-zero entries
## rand.x: random number generator for 'x' slot
## --------------------------------------------------------------
## Author: Martin Maechler, Date: 14.-16. May 2007
stopifnot((nnz <- as.integer(nnz)) >= 0,
nrow >= 0, ncol >= 0, nnz <= nrow * ncol)
spMatrix(nrow, ncol,
i = sample(nrow, nnz, replace = TRUE),
j = sample(ncol, nnz, replace = TRUE),
x = rand.x(nnz))
}
M1 <- rSpMatrix(100000, 20, nnz = 200)
summary(M1)
```
`USCounties` USCounties Contiguity Matrix
------------------------------------------
### Description
This matrix represents the contiguities of 3111 US counties using the Queen criterion of at least a single shared boundary point. The representation is as a row standardised spatial weights matrix transformed to a symmetric matrix (see Ord (1975), p. 125).
### Usage
```
data(USCounties)
```
### Format
A *3111 x 3111* symmetric sparse matrix of class `[dsCMatrix](dscmatrix-class)` with 9101 non-zero entries.
### Details
The data were read into **R** using `[read.gal](../../spdep/html/read.gal)`, and row-standardised and transformed to symmetry using `[nb2listw](../../spdep/html/nb2listw)` and `[similar.listw](../../spdep/html/similar.listw)`. This spatial weights object was converted to class `[dsCMatrix](dscmatrix-class)` using `[as\_dsTMatrix\_listw](../../spdep/html/as_dstmatrix_listw)` and coercion.
### Source
The data were retrieved from `http://sal.uiuc.edu/weights/zips/usc.zip`, files “usc.txt” and “usc\\_q.GAL”, with permission for use and distribution from Luc Anselin (in early 2008).
### References
Ord, J. K. (1975) Estimation methods for models of spatial interaction; *Journal of the American Statistical Association* **70**, 120–126.
### Examples
```
data(USCounties)
(n <- ncol(USCounties))
IM <- .symDiagonal(n)
nn <- 50
set.seed(1)
rho <- runif(nn, 0, 1)
system.time(MJ <- sapply(rho, function(x)
determinant(IM - x * USCounties, logarithm = TRUE)$modulus))
## can be done faster, by update()ing the Cholesky factor:
nWC <- -USCounties
C1 <- Cholesky(nWC, Imult = 2)
system.time(MJ1 <- n * log(rho) +
sapply(rho, function(x)
2 * c(determinant(update(C1, nWC, 1/x))$modulus)))
all.equal(MJ, MJ1)
C2 <- Cholesky(nWC, super = TRUE, Imult = 2)
system.time(MJ2 <- n * log(rho) +
sapply(rho, function(x)
2 * c(determinant(update(C2, nWC, 1/x))$modulus)))
all.equal(MJ, MJ2)
system.time(MJ3 <- n * log(rho) + Matrix:::ldetL2up(C1, nWC, 1/rho))
stopifnot(all.equal(MJ, MJ3))
system.time(MJ4 <- n * log(rho) + Matrix:::ldetL2up(C2, nWC, 1/rho))
stopifnot(all.equal(MJ, MJ4))
```
`dtCMatrix-class` Triangular, (compressed) sparse column matrices
------------------------------------------------------------------
### Description
The `"dtCMatrix"` class is a class of triangular, sparse matrices in the compressed, column-oriented format. In this implementation the non-zero elements in the columns are sorted into increasing row order.
The `"dtTMatrix"` class is a class of triangular, sparse matrices in triplet format.
### Objects from the Class
Objects can be created by calls of the form `new("dtCMatrix", ...)` or calls of the form `new("dtTMatrix", ...)`, but more typically automatically via `[Matrix](matrix)()` or coercion such as `as(x, "triangularMatrix")`, or `as(x, "dtCMatrix")`.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"`; see `[triangularMatrix](triangularmatrix-class)`.
`p`:
(only present in `"dtCMatrix"`:) an `[integer](../../base/html/integer)` vector for providing pointers, one for each column, see the detailed description in `[CsparseMatrix](csparsematrix-class)`.
`i`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the row numbers for each non-zero element in the matrix.
`j`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the column numbers for each non-zero element in the matrix. (Only present in the `dtTMatrix` class.)
`x`:
Object of class `"numeric"` - the non-zero elements of the matrix.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), inherited from the `[Matrix](matrix-class)`, see there.
### Extends
Class `"dgCMatrix"`, directly. Class `"triangularMatrix"`, directly. Class `"dMatrix"`, `"sparseMatrix"`, and more by class `"dgCMatrix"` etc, see the examples.
### Methods
coerce
`signature(from = "dtCMatrix", to = "dgTMatrix")`
coerce
`signature(from = "dtCMatrix", to = "dgeMatrix")`
coerce
`signature(from = "dtTMatrix", to = "dgeMatrix")`
coerce
`signature(from = "dtTMatrix", to = "dtrMatrix")`
coerce
`signature(from = "dtTMatrix", to = "matrix")`
solve
`signature(a = "dtCMatrix", b = "....")`: sparse triangular solve (aka “backsolve” or “forwardsolve”), see `<solve-methods>`.
t
`signature(x = "dtCMatrix")`: returns the transpose of `x`
t
`signature(x = "dtTMatrix")`: returns the transpose of `x`
### See Also
Classes `[dgCMatrix](dgcmatrix-class)`, `[dgTMatrix](dgtmatrix-class)`, `[dgeMatrix](dgematrix-class)`, and `[dtrMatrix](dtrmatrix-class)`.
### Examples
```
showClass("dtCMatrix")
showClass("dtTMatrix")
t1 <- new("dtTMatrix", x= c(3,7), i= 0:1, j=3:2, Dim= as.integer(c(4,4)))
t1
## from 0-diagonal to unit-diagonal {low-level step}:
tu <- t1 ; tu@diag <- "U"
tu
(cu <- as(tu, "dtCMatrix"))
str(cu)# only two entries in @i and @x
stopifnot(cu@i == 1:0,
all(2 * symmpart(cu) == Diagonal(4) + forceSymmetric(cu)))
t1[1,2:3] <- -1:-2
diag(t1) <- 10*c(1:2,3:2)
t1 # still triangular
(it1 <- solve(t1))
t1. <- solve(it1)
all(abs(t1 - t1.) < 10 * .Machine$double.eps)
## 2nd example
U5 <- new("dtCMatrix", i= c(1L, 0:3), p=c(0L,0L,0:2, 5L), Dim = c(5L, 5L),
x = rep(1, 5), diag = "U")
U5
(iu <- solve(U5)) # contains one '0'
validObject(iu2 <- solve(U5, Diagonal(5)))# failed in earlier versions
I5 <- iu %*% U5 # should equal the identity matrix
i5 <- iu2 %*% U5
m53 <- matrix(1:15, 5,3, dimnames=list(NULL,letters[1:3]))
asDiag <- function(M) as(drop0(M), "diagonalMatrix")
stopifnot(
all.equal(Diagonal(5), asDiag(I5), tolerance=1e-14) ,
all.equal(Diagonal(5), asDiag(i5), tolerance=1e-14) ,
identical(list(NULL, dimnames(m53)[[2]]), dimnames(solve(U5, m53)))
)
```
`isTriangular` isTriangular() and isDiagonal() Checking if Matrix is Triangular or Diagonal
--------------------------------------------------------------------------------------------
### Description
`isTriangular(M)` returns a `[logical](../../base/html/logical)` indicating if `M` is a triangular matrix. Analogously, `isDiagonal(M)` is true iff `M` is a diagonal matrix.
Contrary to `[isSymmetric](../../base/html/issymmetric)()`, these two functions are generically from package Matrix, and hence also define methods for traditional (`[class](../../base/html/class)` `"matrix"`) matrices.
By our definition, triangular, diagonal and symmetric matrices are all *square*, i.e. have the same number of rows and columns.
### Usage
```
isDiagonal(object)
isTriangular(object, upper = NA, ...)
```
### Arguments
| | |
| --- | --- |
| `object` | any **R** object, typically a matrix (traditional or Matrix package). |
| `upper` | logical, one of `NA` (default), `FALSE`, or `TRUE` where the last two cases require a lower or upper triangular `object` to result in `TRUE`. |
| `...` | potentially further arguments for other methods. |
### Value
a (“scalar”) logical, `TRUE` or `FALSE`, never `[NA](../../base/html/na)`. For `isTriangular()`, if the result is `TRUE`, it may contain an attribute (see `[attributes](../../base/html/attributes)`) `"kind"`, either `"L"` or `"U"`, indicating if it is a **l**ower or **u**pper triangular matrix.
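For instance (a small sketch, not part of the original page), the `"kind"` attribute can be inspected like this:

```
## sketch: inspecting the "kind" attribute of a TRUE result
r <- isTriangular(matrix(c(1, 0, 2, 3), 2, 2)) # an upper triangular matrix
r               # TRUE, possibly with attr(., "kind")
attr(r, "kind") # "U" (upper) here
```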
### See Also
`[isSymmetric](../../base/html/issymmetric)`; formal class (and subclasses) `"[triangularMatrix](triangularmatrix-class)"` and `"[diagonalMatrix](diagonalmatrix-class)"`.
### Examples
```
isTriangular(Diagonal(4))
## is TRUE: a diagonal matrix is also (both upper and lower) triangular
(M <- Matrix(c(1,2,0,1), 2,2))
isTriangular(M) # TRUE (*and* of formal class "dtrMatrix")
isTriangular(as(M, "dgeMatrix")) # still triangular, even if not "formally"
isTriangular(crossprod(M)) # FALSE
isDiagonal(matrix(c(2,0,0,1), 2,2)) # TRUE
```
`image-methods` Methods for image() in Package 'Matrix'
--------------------------------------------------------
### Description
Methods for function `[image](../../graphics/html/image)` in package Matrix. An image of a matrix simply color codes all matrix entries and draws the *n x m* matrix using an *n x m* grid of (colored) rectangles.
The Matrix package `image` methods are based on `[levelplot](../../lattice/html/levelplot)()` from package lattice; hence these methods return an “object” of class `"trellis"`, producing a graphic when (auto-) `[print](../../base/html/print)()`ed.
### Usage
```
## S4 method for signature 'dgTMatrix'
image(x,
xlim = c(1, di[2]),
ylim = c(di[1], 1), aspect = "iso",
sub = sprintf("Dimensions: %d x %d", di[1], di[2]),
xlab = "Column", ylab = "Row", cuts = 15,
useRaster = FALSE,
useAbs = NULL, colorkey = !useAbs,
col.regions = NULL,
lwd = NULL, border.col = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a Matrix object, i.e., fulfilling `[is](../../methods/html/is)(x, "Matrix")`. |
| `xlim, ylim` | x- and y-axis limits; may be used to “zoom into” matrix. Note that *x,y* “feel reversed”: `ylim` is for the rows (= 1st index) and `xlim` for the columns (= 2nd index). For convenience, when the limits are integer valued, they are both extended by `0.5`; also, `ylim` is always used decreasingly. |
| `aspect` | aspect ratio specified as number (y/x) or string; see `[levelplot](../../lattice/html/levelplot)`. |
| `sub, xlab, ylab` | axis annotation with sensible defaults; see `[plot.default](../../graphics/html/plot.default)`. |
| `cuts` | number of levels the range of matrix values would be divided into. |
| `useRaster` | logical indicating if raster graphics should be used (instead of the traditional rectangle vector drawing). If true, `[panel.levelplot.raster](../../lattice/html/panel.levelplot)` (from lattice package) is used, and the colorkey is also done via rasters, see also `[levelplot](../../lattice/html/levelplot)` and possibly `[grid.raster](../../grid/html/grid.raster)`. Note that using raster graphics may often be faster, but can be slower, depending on the matrix dimensions and the graphics device (dimensions). |
| `useAbs` | logical indicating if `[abs](../../base/html/mathfun)(x)` should be shown; if `TRUE`, the former (implicit) default, the default `col.regions` will be `[grey](../../grdevices/html/gray)` colors (and no `colorkey` drawn). The default is `FALSE` unless the matrix has no negative entries. |
| `colorkey` | logical indicating if a color key aka ‘legend’ should be produced. Default is to draw one, unless `useAbs` is true. You can also specify a `[list](../../base/html/list)`, see `[levelplot](../../lattice/html/levelplot)`, such as`list(raster=TRUE)` in the case of rastering. |
| `col.regions` | vector of gradually varying colors; see `[levelplot](../../lattice/html/levelplot)`. |
| `lwd` | (only used when `useRaster` is false:) non-negative number or `NULL` (default), specifying the line-width of the rectangles of each non-zero matrix entry (drawn by `[grid.rect](../../grid/html/grid.rect)`). The default depends on the matrix dimension and the device size. |
| `border.col` | color for the border of each rectangle. `NA` means no border is drawn. When `NULL` as by default, `border.col <- if(lwd < .01) NA else NULL` is used. Consider using an opaque color instead of `NULL` which corresponds to `grid::[get.gpar](../../grid/html/gpar)("col")`. |
| `...` | further arguments passed to methods and `[levelplot](../../lattice/html/levelplot)`, notably `at` for specifying (possibly non equidistant) cut values for dividing the matrix values (superseding `cuts` above). |
### Value
as all lattice graphics functions, `image(<Matrix>)` returns a `"trellis"` object, effectively the result of `[levelplot](../../lattice/html/levelplot)()`.
### Methods
All methods currently end up calling the method for the `[dgTMatrix](dgtmatrix-class)` class. Use `showMethods(image)` to list them all.
### See Also
`[levelplot](../../lattice/html/levelplot)`, and `[print.trellis](../../lattice/html/print.trellis)` from package lattice.
### Examples
```
showMethods(image)
## If you want to see all the methods' implementations:
showMethods(image, incl=TRUE, inherit=FALSE)
data(CAex)
image(CAex, main = "image(CAex)")
image(CAex, useAbs=TRUE, main = "image(CAex, useAbs=TRUE)")
cCA <- Cholesky(crossprod(CAex), Imult = .01)
## See ?print.trellis --- place two image() plots side by side:
print(image(cCA, main="Cholesky(crossprod(CAex), Imult = .01)"),
split=c(x=1,y=1,nx=2, ny=1), more=TRUE)
print(image(cCA, useAbs=TRUE),
split=c(x=2,y=1,nx=2,ny=1))
data(USCounties)
image(USCounties)# huge
image(sign(USCounties))## just the pattern
# how the result looks, may depend heavily on
# the device, screen resolution, antialiasing etc
# e.g. x11(type="Xlib") may show very differently than cairo-based
## Drawing borders around each rectangle;
# again, viewing depends very much on the device:
image(USCounties[1:400,1:200], lwd=.1)
## Using (xlim,ylim) has advantage : matrix dimension and (col/row) indices:
image(USCounties, c(1,200), c(1,400), lwd=.1)
image(USCounties, c(1,300), c(1,200), lwd=.5 )
image(USCounties, c(1,300), c(1,200), lwd=.01)
## These 3 are all equivalent :
(I1 <- image(USCounties, c(1,100), c(1,100), useAbs=FALSE))
I2 <- image(USCounties, c(1,100), c(1,100), useAbs=FALSE, border.col=NA)
I3 <- image(USCounties, c(1,100), c(1,100), useAbs=FALSE, lwd=2, border.col=NA)
stopifnot(all.equal(I1, I2, check.environment=FALSE),
all.equal(I2, I3, check.environment=FALSE))
## using an opaque border color
image(USCounties, c(1,100), c(1,100), useAbs=FALSE, lwd=3, border.col = adjustcolor("skyblue", 1/2))
if(doExtras <- interactive() || nzchar(Sys.getenv("R_MATRIX_CHECK_EXTRA")) ||
identical("true", unname(Sys.getenv("R_PKG_CHECKING_doExtras")))) {
## Using raster graphics: For PDF this would give a 77 MB file,
## however, for such a large matrix, this is typically considerably
## *slower* (than vector graphics rectangles) in most cases :
if(doPNG <- !dev.interactive())
png("image-USCounties-raster.png", width=3200, height=3200)
image(USCounties, useRaster = TRUE) # should not suffer from anti-aliasing
if(doPNG)
dev.off()
## and now look at the *.png image in a viewer you can easily zoom in and out
}#only if(doExtras)
```
`Schur-class` Class "Schur" of Schur Matrix Factorizations
-----------------------------------------------------------
### Description
Class `"Schur"` is the class of Schur matrix factorizations. These are a generalization of eigenvalue (or “spectral”) decompositions for general (possibly asymmetric) square matrices, see the `[Schur](schur)()` function.
### Objects from the Class
Objects of class `"Schur"` are typically created by `[Schur](schur)()`.
### Slots
`"Schur"` has slots
`T`:
Upper Block-triangular `[Matrix](matrix-class)` object.
`Q`:
Square *orthogonal* `"Matrix"`.
`EValues`:
numeric or complex vector of eigenvalues of `T`.
`Dim`:
the matrix dimension: equal to `c(n,n)` of class `"integer"`.
### Extends
Class `"[MatrixFactorization](matrixfactorization-class)"`, directly.
### See Also
`[Schur](schur)()` for object creation; `[MatrixFactorization](matrixfactorization-class)`.
### Examples
```
showClass("Schur")
Schur(M <- Matrix(c(1:7, 10:2), 4,4))
## Trivial, of course:
str(Schur(Diagonal(5)))
## for more examples, see Schur()
```
`lu` (Generalized) Triangular Decomposition of a Matrix
--------------------------------------------------------
### Description
Computes (generalized) triangular decompositions of square (sparse or dense) and non-square dense matrices.
### Usage
```
lu(x, ...)
## S4 method for signature 'matrix'
lu(x, warnSing = TRUE, ...)
## S4 method for signature 'dgeMatrix'
lu(x, warnSing = TRUE, ...)
## S4 method for signature 'dgCMatrix'
lu(x, errSing = TRUE, order = TRUE, tol = 1,
keep.dimnames = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a dense or sparse matrix, in the latter case of square dimension. No missing values or IEEE special values are allowed. |
| `warnSing` | (when `x` is a `"[denseMatrix](densematrix-class)"`) logical specifying if a `[warning](../../base/html/warning)` should be signalled when `x` is singular. |
| `errSing` | (when `x` is a `"[sparseMatrix](sparsematrix-class)"`) logical specifying if an error (see `[stop](../../base/html/stop)`) should be signalled when `x` is singular. When `x` is singular, `lu(x, errSing=FALSE)` returns `[NA](../../base/html/na)` instead of an LU decomposition. No warning is signalled and the useR should be careful in that case. |
| `order` | logical or integer, used to choose which fill-reducing permutation technique will be used internally. Do not change unless you know what you are doing. |
| `tol` | positive number indicating the pivoting tolerance used in `cs_lu`. Do only change with much care. |
| `keep.dimnames` | logical indicating that `[dimnames](../../base/html/dimnames)` should be propagated to the result, i.e., “kept”. This was hardcoded to `FALSE` up to Matrix version 1.2-0. Setting to `FALSE` may gain some performance. |
| `...` | further arguments passed to or from other methods. |
### Details
`lu()` is a generic function with special methods for different types of matrices. Use `[showMethods](../../methods/html/showmethods)("lu")` to list all the methods for the `<lu>` generic.
The method for class `[dgeMatrix](dgematrix-class)` (and all dense matrices) is based on LAPACK's `"dgetrf"` subroutine. It returns a decomposition also for singular and non-square matrices.
The method for class `[dgCMatrix](dgcmatrix-class)` (and all sparse matrices) is based on functions from the CSparse library. It signals an error (or returns `NA`, when `errSing = FALSE`, see above) when the decomposition algorithm fails, as when `x` is (too close to) singular.
### Value
An object of class `"LU"`, i.e., `"[denseLU](lu-class)"` (see its separate help page), or `"sparseLU"`, see `[sparseLU](sparselu-class)`; this is a representation of a triangular decomposition of `x`.
### Note
Because the underlying algorithms differ entirely, in the *dense* case (class `[denseLU](lu-class)`), the decomposition is
*A = P L U,*
whereas in the *sparse* case (class `[sparseLU](sparselu-class)`), it is
*A = P' L U Q.*
### References
Golub, G., and Van Loan, C. F. (1989). *Matrix Computations,* 2nd edition, Johns Hopkins, Baltimore.
Timothy A. Davis (2006) *Direct Methods for Sparse Linear Systems*, SIAM Series “Fundamentals of Algorithms”.
### See Also
Class definitions `[denseLU](lu-class)` and `[sparseLU](sparselu-class)` and function `<expand>`; `[qr](qr-methods)`, `<chol>`.
### Examples
```
##--- Dense -------------------------
x <- Matrix(rnorm(9), 3, 3)
lu(x)
dim(x2 <- round(10 * x[,-3]))# non-square
expand(lu2 <- lu(x2))
##--- Sparse (see more in ?"sparseLU-class")----- % ./sparseLU-class.Rd
pm <- as(readMM(system.file("external/pores_1.mtx",
package = "Matrix")),
"CsparseMatrix")
str(pmLU <- lu(pm)) # p is a 0-based permutation of the rows
# q is a 0-based permutation of the columns
## permute rows and columns of original matrix
ppm <- pm[pmLU@p + 1L, pmLU@q + 1L]
pLU <- drop0(pmLU@L %*% pmLU@U) # L %*% U -- dropping extra zeros
## equal up to "rounding"
ppm[1:14, 1:5]
pLU[1:14, 1:5]
```
`Schur` Schur Decomposition of a Matrix
----------------------------------------
### Description
Computes the Schur decomposition and eigenvalues of a square matrix; see the BACKGROUND information below.
### Usage
```
Schur(x, vectors, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric square Matrix (inheriting from class `"Matrix"`) or traditional `[matrix](../../base/html/matrix)`. Missing values (NAs) are not allowed. |
| `vectors` | logical. When `TRUE` (the default), the Schur vectors are computed, and the result is a proper `[MatrixFactorization](matrixfactorization-class)` of class `[Schur](schur-class)`. |
| `...` | further arguments passed to or from other methods. |
### Details
Based on the Lapack subroutine `dgees`.
### Value
If `vectors` is `TRUE`, as per default: if `x` is a `[Matrix](matrix-class)`, an object of class `[Schur](schur-class)`; otherwise, for a traditional `[matrix](../../base/html/matrix)` `x`, a `[list](../../base/html/list)` with components `T`, `Q`, and `EValues`.
If `vectors` is `FALSE`, a list with components
| | |
| --- | --- |
| `T` | the upper quasi-triangular (square) matrix of the Schur decomposition. |
| `EValues` | the vector of `[numeric](../../base/html/numeric)` or `[complex](../../base/html/complex)` eigen values of *T* or *A*. |
### BACKGROUND
If `A` is a square matrix, then `A = Q T t(Q)`, where `Q` is orthogonal, and `T` is upper block-triangular (nearly triangular with either 1 by 1 or 2 by 2 blocks on the diagonal) where the 2 by 2 blocks correspond to (non-real) complex eigenvalues. The eigenvalues of `A` are the same as those of `T`, which are easy to compute. The Schur form is used most often for computing non-symmetric eigenvalue decompositions, and for computing functions of matrices such as matrix exponentials.
### References
Anderson, E., et al. (1994). *LAPACK User's Guide,* 2nd edition, SIAM, Philadelphia.
### Examples
```
Schur(Hilbert(9)) # Schur factorization (real eigenvalues)
(A <- Matrix(round(rnorm(5*5, sd = 100)), nrow = 5))
(Sch.A <- Schur(A))
eTA <- eigen(Sch.A@T)
str(SchA <- Schur(A, vectors=FALSE))# no 'Q' ==> simple list
stopifnot(all.equal(eTA$values, eigen(A)$values, tolerance = 1e-13),
all.equal(eTA$values,
local({z <- Sch.A@EValues
z[order(Mod(z), decreasing=TRUE)]}), tolerance = 1e-13),
identical(SchA$T, Sch.A@T),
identical(SchA$EValues, Sch.A@EValues))
## For the faint of heart, we provide Schur() also for traditional matrices:
a.m <- function(M) unname(as(M, "matrix"))
a <- a.m(A)
Sch.a <- Schur(a)
stopifnot(identical(Sch.a, list(Q = a.m(Sch.A @ Q),
T = a.m(Sch.A @ T),
EValues = Sch.A@EValues)),
all.equal(a, with(Sch.a, Q %*% T %*% t(Q)))
)
```
`dgTMatrix-class` Sparse matrices in triplet form
--------------------------------------------------
### Description
The `"dgTMatrix"` class is the class of sparse matrices stored as (possibly redundant) triplets. The internal representation is not at all unique, contrary to the one for class `[dgCMatrix](dgcmatrix-class)`.
### Objects from the Class
Objects can be created by calls of the form `new("dgTMatrix", ...)`, but more typically via `as(*, "dgTMatrix")`, `[spMatrix](spmatrix)()`, or `[sparseMatrix](sparsematrix)(*, repr = "T")`.
### Slots
`i`:
`[integer](../../base/html/integer)` row indices of non-zero entries *in 0-base*, i.e., must be in `0:(nrow(.)-1)`.
`j`:
`[integer](../../base/html/integer)` column indices of non-zero entries. Must be the same length as slot `i` and *0-based* as well, i.e., in `0:(ncol(.)-1)`.
`x`:
`[numeric](../../base/html/numeric)` vector - the (non-zero) entry at position `(i,j)`. Must be the same length as slot `i`. If an index pair occurs more than once, the corresponding values of slot `x` are added to form the element of the matrix.
`Dim`:
Object of class `"integer"` of length 2 - the dimensions of the matrix.
### Methods
+
`signature(e1 = "dgTMatrix", e2 = "dgTMatrix")`
coerce
`signature(from = "dgTMatrix", to = "dgCMatrix")`
coerce
`signature(from = "dgTMatrix", to = "dgeMatrix")`
coerce
`signature(from = "dgTMatrix", to = "matrix")`, and typically coercion methods for more specific signatures, we are not mentioning here.
Note that these are not guaranteed to continue to exist, but rather you should use calls like `as(x, "CsparseMatrix")`, `as(x, "generalMatrix")`, `as(x, "dMatrix")`, i.e., coercion to higher level virtual classes.
coerce
`signature(from = "matrix", to = "dgTMatrix")`, (direct coercion from tradition matrix).
image
`signature(x = "dgTMatrix")`: plots an image of `x` using the `[levelplot](../../lattice/html/levelplot)` function
t
`signature(x = "dgTMatrix")`: returns the transpose of `x`
### Note
Triplet matrices are a convenient form in which to construct sparse matrices after which they can be coerced to `[dgCMatrix](dgcmatrix-class)` objects.
Note that both `new(.)` and `[spMatrix](spmatrix)` constructors for `"dgTMatrix"` (and other `"[TsparseMatrix](tsparsematrix-class)"` classes) implicitly add *x\_k*'s that belong to identical *(i\_k, j\_k)* pairs.
However this means that a matrix typically can be stored in more than one possible `"[TsparseMatrix](tsparsematrix-class)"` representations. Use `[uniqTsparse](uniqtsparse)()` in order to ensure uniqueness of the internal representation of such a matrix.
### See Also
Class `[dgCMatrix](dgcmatrix-class)` or the superclasses `[dsparseMatrix](dsparsematrix-class)` and `[TsparseMatrix](tsparsematrix-class)`; `[uniqTsparse](uniqtsparse)`.
### Examples
```
m <- Matrix(0+1:28, nrow = 4)
m[-3,c(2,4:5,7)] <- m[ 3, 1:4] <- m[1:3, 6] <- 0
(mT <- as(m, "dgTMatrix"))
str(mT)
mT[1,]
mT[4, drop = FALSE]
stopifnot(identical(mT[lower.tri(mT)],
m [lower.tri(m) ]))
mT[lower.tri(mT,diag=TRUE)] <- 0
mT
## Triplet representation with repeated (i,j) entries
## *adds* the corresponding x's:
T2 <- new("dgTMatrix",
i = as.integer(c(1,1,0,3,3)),
j = as.integer(c(2,2,4,0,0)), x=10*1:5, Dim=4:5)
str(T2) # contains (i,j,x) slots exactly as above, but
T2 ## has only three non-zero entries, as for repeated (i,j)'s,
## the corresponding x's are "implicitly" added
stopifnot(nnzero(T2) == 3)
```
r None
`is.na-methods` is.na(), is.infinite() Methods for 'Matrix' Objects
--------------------------------------------------------------------
### Description
Methods for function `[is.na](../../base/html/na)()`, `[is.finite](../../base/html/is.finite)()`, and `[is.infinite](../../base/html/is.finite)()` for all Matrices (objects extending the `[Matrix](matrix-class)` class):
x = "denseMatrix"
returns an `"nMatrix"` object of the same dimension as `x`, with `TRUE` entries wherever `x` is `[NA](../../base/html/na)`, finite, or infinite, respectively.
x = "sparseMatrix"
ditto.
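As a small sketch of the pattern-matrix results described above (assuming the Matrix package is attached):

```
library(Matrix)
M <- Matrix(c(1, NA, Inf, -Inf, 0, 2), 2, 3)
is.na(M)        # pattern matrix: TRUE only at the NA position
is.finite(M)    # FALSE at the NA and +/-Inf positions
is.infinite(M)  # TRUE only at the +/-Inf positions
```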
### Usage
```
## S4 method for signature 'sparseMatrix'
is.na(x)
## S4 method for signature 'dsparseMatrix'
is.finite(x)
## S4 method for signature 'ddenseMatrix'
is.infinite(x)
## ...
## and for other classes
## S4 method for signature 'xMatrix'
anyNA(x)
## S4 method for signature 'nsparseMatrix'
anyNA(x)
## S4 method for signature 'sparseVector'
anyNA(x)
## S4 method for signature 'nsparseVector'
anyNA(x)
```
### Arguments
| | |
| --- | --- |
| `x` | sparse or dense matrix or sparse vector (here; any **R** object in general). |
### See Also
`[NA](../../base/html/na)`, `[is.na](../../base/html/na)`; `[is.finite](../../base/html/is.finite)`, `[is.infinite](../../base/html/is.finite)`; `[nMatrix](nmatrix-class)`, `[denseMatrix](densematrix-class)`, `[sparseMatrix](sparsematrix-class)`.
The `[sparseVector](sparsevector-class)` class.
### Examples
```
M <- Matrix(1:6, nrow=4, ncol=3,
dimnames = list(c("a", "b", "c", "d"), c("A", "B", "C")))
stopifnot(all(!is.na(M)))
M[2:3,2] <- NA
is.na(M)
if(exists("anyNA", mode="function"))
anyNA(M)
A <- spMatrix(10,20, i = c(1,3:8),
j = c(2,9,6:10),
x = 7 * (1:7))
stopifnot(all(!is.na(A)))
A[2,3] <- A[1,2] <- A[5, 5:9] <- NA
inA <- is.na(A)
stopifnot(sum(inA) == 1+1+5)
```
r None
`qr-methods` QR Decomposition – S4 Methods and Generic
-------------------------------------------------------
### Description
The Matrix package provides methods for the QR decomposition of special classes of matrices. There is a generic function which uses `[qr](../../base/html/qr)` as default, but methods defined in this package can take extra arguments. In particular there is an option for determining a fill-reducing permutation of the columns of a sparse, rectangular matrix.
### Usage
```
qr(x, ...)
qrR(qr, complete=FALSE, backPermute=TRUE, row.names = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric or complex matrix whose QR decomposition is to be computed. Logical matrices are coerced to numeric. |
| `qr` | a QR decomposition of the type computed by `qr`. |
| `complete` | logical indicating whether the *\bold{R}* matrix is to be completed by binding zero-value rows beneath the square upper triangle. |
| `backPermute` | logical indicating if the rows of the *\bold{R}* matrix should be back permuted such that `qrR()`'s result can be used directly to reconstruct the original matrix *\bold{X}*. |
| `row.names` | logical indicating if `[rownames](../../base/html/colnames)` should be propagated to the result. |
| `...` | further arguments passed to or from other methods |
### Methods
x = "dgCMatrix"
QR decomposition of a general sparse double-precision matrix with `nrow(x) >= ncol(x)`. Returns an object of class `"[sparseQR](sparseqr-class)"`.
x = "sparseMatrix"
works via `"dgCMatrix"`.
### See Also
`[qr](../../base/html/qr)`; then, the class documentations, mainly `[sparseQR](sparseqr-class)`, and also `[dgCMatrix](dgcmatrix-class)`.
### Examples
```
##------------- example of pivoting -- from base's qraux.Rd -------------
X <- cbind(int = 1,
b1=rep(1:0, each=3), b2=rep(0:1, each=3),
c1=rep(c(1,0,0), 2), c2=rep(c(0,1,0), 2), c3=rep(c(0,0,1),2))
rownames(X) <- paste0("r", seq_len(nrow(X)))
dnX <- dimnames(X)
bX <- X # [b]ase version of X
X <- as(bX, "sparseMatrix")
X # is singular, columns "b2" and "c3" are "extra"
stopifnot(identical(dimnames(X), dnX))# some versions changed X's dimnames!
c(rankMatrix(X)) # = 4 (not 6)
m <- function(.) as(., "matrix")
##----- regular case ------------------------------------------
Xr <- X[ , -c(3,6)] # the "regular" (non-singular) version of X
stopifnot(rankMatrix(Xr) == ncol(Xr))
Y <- cbind(y <- setNames(1:6, paste0("y", 1:6)))
## regular case:
qXr <- qr( Xr)
qxr <- qr(m(Xr))
qxrLA <- qr(m(Xr), LAPACK=TRUE) # => qr.fitted(), qr.resid() not supported
qcfXy <- qr.coef (qXr, y) # vector
qcfXY <- qr.coef (qXr, Y) # 4x1 dgeMatrix
cf <- c(int=6, b1=-3, c1=-2, c2=-1)
doExtras <- interactive() || nzchar(Sys.getenv("R_MATRIX_CHECK_EXTRA")) ||
identical("true", unname(Sys.getenv("R_PKG_CHECKING_doExtras")))
tolE <- if(doExtras) 1e-15 else 1e-13
stopifnot(exprs = {
all.equal(qr.coef(qxr, y), cf, tol=tolE)
all.equal(qr.coef(qxrLA,y), cf, tol=tolE)
all.equal(qr.coef(qxr, Y), m(cf), tol=tolE)
all.equal( qcfXy, cf, tol=tolE)
all.equal(m(qcfXY), m(cf), tol=tolE)
all.equal(y, qr.fitted(qxr, y), tol=2*tolE)
all.equal(y, qr.fitted(qXr, y), tol=2*tolE)
all.equal(m(qr.fitted(qXr, Y)), qr.fitted(qxr, Y), tol=tolE)
all.equal( qr.resid (qXr, y), qr.resid (qxr, y), tol=tolE)
all.equal(m(qr.resid (qXr, Y)), qr.resid (qxr, Y), tol=tolE)
})
##----- rank-deficient ("singular") case ------------------------------------
(qX <- qr( X)) # both @p and @q are non-trivial permutations
qx <- qr(m(X)) ; str(qx) # $pivot is non-trivial, too
drop0(R. <- qr.R(qX), tol=tolE) # columns *permuted*: c3 b1 ..
Q. <- qr.Q(qX)
qI <- sort.list(qX@q) # the inverse 'q' permutation
(X. <- drop0(Q. %*% R.[, qI], tol=tolE))## just = X, incl. correct colnames
stopifnot(all(X - X.) < 8*.Machine$double.eps,
## qrR(.) returns R already "back permuted" (as with qI):
identical(R.[, qI], qrR(qX)) )
##
## In this sense, classical qr.coef() is fine:
cfqx <- qr.coef(qx, y) # quite different from
nna <- !is.na(cfqx)
stopifnot(all.equal(unname(qr.fitted(qx,y)),
as.numeric(X[,nna] %*% cfqx[nna])))
## FIXME: do these make *any* sense? --- should give warnings !
qr.coef(qX, y)
qr.coef(qX, Y)
rm(m)
```
r None
`KhatriRao` Khatri-Rao Matrix Product
--------------------------------------
### Description
Computes Khatri-Rao products for any kind of matrices.
The Khatri-Rao product is a column-wise Kronecker product. Originally introduced by Khatri and Rao (1968), it has many different applications, see Liu and Trenkler (2008) for a survey. Notably, it is used in higher-dimensional tensor decompositions, see Bader and Kolda (2008).
### Usage
```
KhatriRao(X, Y = X, FUN = "*", make.dimnames = FALSE)
```
### Arguments
| | |
| --- | --- |
| `X,Y` | matrices with the same number of columns. |
| `FUN` | the (name of the) `[function](../../base/html/function)` to be used for the column-wise Kronecker products, see `[kronecker](../../base/html/kronecker)`, defaulting to the usual multiplication. |
| `make.dimnames` | logical indicating if the result should inherit `[dimnames](../../base/html/dimnames)` from `X` and `Y` in a simple way. |
### Value
a `"[CsparseMatrix](csparsematrix-class)"`, say `R`, the Khatri-Rao product of `X` (*n x k*) and `Y` (*m x k*), is of dimension *(n\*m) x k*, where the j-th column, `R[,j]` is the kronecker product `[kronecker](../../base/html/kronecker)(X[,j], Y[,j])`.
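The column-wise identity stated above can be checked directly; a minimal sketch (assuming the Matrix package is attached):

```
library(Matrix)
X <- matrix(1:6,  2, 3)
Y <- matrix(7:12, 2, 3)
R <- KhatriRao(X, Y)
dim(R)  # (2*2) x 3
## each column of R is the Kronecker product of the corresponding columns:
stopifnot(all(as.matrix(R)[, 2] == kronecker(X[, 2], Y[, 2])))
```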
### Note
The current implementation is efficient for large sparse matrices.
### Author(s)
Original by Michael Cysouw, Univ. Marburg; minor tweaks, bug fixes etc, by Martin Maechler.
### References
Khatri, C. G., and Rao, C. Radhakrishna (1968) Solutions to Some Functional Equations and Their Applications to Characterization of Probability Distributions. *Sankhya: Indian J. Statistics, Series A* **30**, 167–180.
Liu, Shuangzhe, and Götz Trenkler (2008) Hadamard, Khatri-Rao, Kronecker and Other Matrix Products. *International J. Information and Systems Sciences* **4**, 160–177.
Bader, Brett W, and Tamara G Kolda (2008) Efficient MATLAB Computations with Sparse and Factored Tensors. *SIAM J. Scientific Computing* **30**, 205–231.
### See Also
`[kronecker](../../base/html/kronecker)`.
### Examples
```
## Example with very small matrices:
m <- matrix(1:12,3,4)
d <- diag(1:4)
KhatriRao(m,d)
KhatriRao(d,m)
dimnames(m) <- list(LETTERS[1:3], letters[1:4])
KhatriRao(m,d, make.dimnames=TRUE)
KhatriRao(d,m, make.dimnames=TRUE)
dimnames(d) <- list(NULL, paste0("D", 1:4))
KhatriRao(m,d, make.dimnames=TRUE)
KhatriRao(d,m, make.dimnames=TRUE)
dimnames(d) <- list(paste0("d", 10*1:4), paste0("D", 1:4))
(Kmd <- KhatriRao(m,d, make.dimnames=TRUE))
(Kdm <- KhatriRao(d,m, make.dimnames=TRUE))
nm <- as(m,"nMatrix")
nd <- as(d,"nMatrix")
KhatriRao(nm,nd, make.dimnames=TRUE)
KhatriRao(nd,nm, make.dimnames=TRUE)
stopifnot(dim(KhatriRao(m,d)) == c(nrow(m)*nrow(d), ncol(d)))
## border cases / checks:
zm <- nm; zm[] <- 0 # all 0 matrix
stopifnot(all(K1 <- KhatriRao(nd, zm) == 0), identical(dim(K1), c(12L, 4L)),
all(K2 <- KhatriRao(zm, nd) == 0), identical(dim(K2), c(12L, 4L)))
d0 <- d; d0[] <- 0; m0 <- Matrix(d0[-1,])
stopifnot(all(K3 <- KhatriRao(d0, m) == 0), identical(dim(K3), dim(Kdm)),
all(K4 <- KhatriRao(m, d0) == 0), identical(dim(K4), dim(Kmd)),
all(KhatriRao(d0, d0) == 0), all(KhatriRao(m0, d0) == 0),
all(KhatriRao(d0, m0) == 0), all(KhatriRao(m0, m0) == 0),
identical(dimnames(KhatriRao(m, d0, make.dimnames=TRUE)), dimnames(Kmd)))
```
r None
`compMatrix-class` Class "compMatrix" of Composite (Factorizable) Matrices
---------------------------------------------------------------------------
### Description
Virtual class of *composite* matrices; i.e., matrices that can be *factorized*, typically as a product of simpler matrices.
### Objects from the Class
A virtual Class: No objects may be created from it.
### Slots
`factors`:
Object of class `"list"` - a list of factorizations of the matrix. Note that this is typically empty, i.e., `list()`, initially and is *updated **automagically*** whenever a matrix factorization is computed.
`Dim`, `Dimnames`:
inherited from the `[Matrix](matrix-class)` class, see there.
### Extends
Class `"Matrix"`, directly.
### Methods
dimnames<-
`signature(x = "compMatrix", value = "list")`: set the `dimnames` to a `[list](../../base/html/list)` of length 2, see `[dimnames<-](../../base/html/dimnames)`. The `factors` slot is currently reset to empty, as the factorization `dimnames` would have to be adapted, too.
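A small sketch of the "automagic" caching and its reset by `dimnames<-` (assuming the Matrix package is attached; the cached factorization name `"LU"` is what current Matrix versions use):

```
library(Matrix)
A <- Matrix(c(2, 1, 4, 3), 2, 2)  # a dense general ("composite") matrix
length(A@factors)   # 0: no factorization cached yet
luA <- lu(A)        # computing the LU factorization caches it ...
names(A@factors)    # ... the slot should now contain an "LU" entry
dimnames(A) <- list(c("r1", "r2"), c("c1", "c2"))
length(A@factors)   # 0 again: setting dimnames resets the cache
```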
### See Also
The matrix factorization classes `"[MatrixFactorization](matrixfactorization-class)"` and their generators, `<lu>()`, `[qr](qr-methods)()`, `<chol>()` and `[Cholesky](cholesky)()`, `[BunchKaufman](bunchkaufman-methods)()`, `[Schur](schur)()`.
r None
`lgeMatrix-class` Class "lgeMatrix" of General Dense Logical Matrices
----------------------------------------------------------------------
### Description
This is the class of general dense `[logical](../../base/html/logical)` matrices.
### Slots
`x`:
Object of class `"logical"`. The logical values that constitute the matrix, stored in column-major order.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)` class.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
Class `"ldenseMatrix"`, directly. Class `"lMatrix"`, by class `"ldenseMatrix"`. Class `"denseMatrix"`, by class `"ldenseMatrix"`. Class `"Matrix"`, by class `"ldenseMatrix"`.
### Methods
Currently, mainly `[t](../../base/html/t)()` and coercion methods (for `[as](../../methods/html/as)(.)`); use, e.g., `[showMethods](../../methods/html/showmethods)(class="lgeMatrix")` for details.
### See Also
Non-general logical dense matrix classes such as `[ltrMatrix](ltrmatrix-class)`, or `[lsyMatrix](lsymatrix-class)`; *sparse* logical classes such as `[lgCMatrix](lsparsematrix-classes)`.
### Examples
```
showClass("lgeMatrix")
str(new("lgeMatrix"))
set.seed(1)
(lM <- Matrix(matrix(rnorm(28), 4,7) > 0))# a simple random lgeMatrix
set.seed(11)
(lC <- Matrix(matrix(rnorm(28), 4,7) > 0))# a simple random lgCMatrix
as(lM, "lgCMatrix")
```
r None
`dsRMatrix-class` Symmetric Sparse Compressed Row Matrices
-----------------------------------------------------------
### Description
The `dsRMatrix` class is a class of symmetric, sparse matrices in the compressed, row-oriented format. In this implementation the non-zero elements in the rows are sorted into increasing column order.
### Objects from the Class
These `"..RMatrix"` classes are currently still mostly unimplemented!
Objects can be created by calls of the form `new("dsRMatrix", ...)`.
### Slots
`uplo`:
A character object indicating if the upper triangle (`"U"`) or the lower triangle (`"L"`) is stored. At present only the lower triangle form is allowed.
`j`:
Object of class `"integer"` of length `nnzero` (number of non-zero elements). These are the (0-based) column numbers for each non-zero element in the matrix.
`p`:
Object of class `"integer"` of pointers, one for each row, to the initial (zero-based) index of elements in the row.
`factors`:
Object of class `"list"` - a list of factorizations of the matrix.
`x`:
Object of class `"numeric"` - the non-zero elements of the matrix.
`Dim`:
Object of class `"integer"` - the dimensions of the matrix - must be an integer vector with exactly two non-negative values.
`Dimnames`:
List of length two, see `[Matrix](matrix)`.
### Extends
Classes `[RsparseMatrix](rsparsematrix-class)`, `[dsparseMatrix](dsparsematrix-class)` and `[symmetricMatrix](symmetricmatrix-class)`, directly.
Class `"dMatrix"`, by class `"dsparseMatrix"`, class `"sparseMatrix"`, by class `"dsparseMatrix"` or `"RsparseMatrix"`; class `"compMatrix"` by class `"symmetricMatrix"` and of course, class `"Matrix"`.
### Methods
forceSymmetric
`signature(x = "dsRMatrix", uplo = "missing")`: a trivial method just returning `x`
forceSymmetric
`signature(x = "dsRMatrix", uplo = "character")`: if `uplo == x@uplo`, this trivially returns `x`; otherwise `t(x)`.
coerce
`signature(from = "dsCMatrix", to = "dsRMatrix")`
### See Also
the classes `[dgCMatrix](dgcmatrix-class)`, `[dgTMatrix](dgtmatrix-class)`, and `[dgeMatrix](dgematrix-class)`.
### Examples
```
(m0 <- new("dsRMatrix"))
m2 <- new("dsRMatrix", Dim = c(2L,2L),
x = c(3,1), j = c(1L,1L), p = 0:2)
m2
stopifnot(colSums(as(m2, "TsparseMatrix")) == 3:4)
str(m2)
(ds2 <- forceSymmetric(diag(2))) # dsy*
dR <- as(ds2, "RsparseMatrix")
dR # dsRMatrix
```
r None
`rcond` Estimate the Reciprocal Condition Number
-------------------------------------------------
### Description
Estimate the reciprocal of the condition number of a matrix.
This is a generic function with several methods, as seen by `[showMethods](../../methods/html/showmethods)(rcond)`.
### Usage
```
rcond(x, norm, ...)
## S4 method for signature 'sparseMatrix,character'
rcond(x, norm, useInv=FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object that inherits from the `Matrix` class. |
| `norm` | character string indicating the type of norm to be used in the estimate. The default is `"O"` for the 1-norm (`"O"` is equivalent to `"1"`). For sparse matrices, when `useInv=TRUE`, `norm` can be any of the `kind`s allowed for `<norm>`; otherwise, the other possible value is `"I"` for the infinity norm, see also `<norm>`. |
| `useInv` | logical (or `"Matrix"` containing `[solve](solve-methods)(x)`). If not false, compute the reciprocal condition number as *1/(||x|| \* ||x^(-1)||)*, where *x^(-1)* is the inverse of *x*, `solve(x)`. This may be an efficient alternative (only) in situations where `solve(x)` is fast (or known), e.g., for (very) sparse or triangular matrices. Note that the *result* may differ depending on `useInv`, as per default, when it is false, an *approximation* is computed. |
| `...` | further arguments passed to or from other methods. |
### Value
An estimate of the reciprocal condition number of `x`.
### BACKGROUND
The condition number of a regular (square) matrix is the product of the `<norm>` of the matrix and the norm of its inverse (or pseudo-inverse).
More generally, the condition number is defined (also for non-square matrices *A*) as
*κ(A) = (max\_{||v|| = 1} ||Av||) / (min\_{||v|| = 1} ||Av||).*
Whenever `x` is *not* a square matrix, in our method definitions, this is typically computed via `rcond(qr.R(qr(X)), ...)` where `X` is `x` or `t(x)`.
The condition number takes on values between 1 and infinity, inclusive, and can be viewed as a factor by which errors in solving linear systems with this matrix as coefficient matrix could be magnified.
`rcond()` computes the *reciprocal* condition number *1/κ* with values in *[0,1]* and can be viewed as a scaled measure of how close a matrix is to being rank deficient (aka “singular”).
Condition numbers are usually estimated, since exact computation is costly in terms of floating-point operations. An (over) estimate of reciprocal condition number is given, since by doing so overflow is avoided. Matrices are well-conditioned if the reciprocal condition number is near 1 and ill-conditioned if it is near zero.
### References
Golub, G., and Van Loan, C. F. (1989). *Matrix Computations,* 2nd edition, Johns Hopkins, Baltimore.
### See Also
`<norm>`, `[kappa](../../base/html/kappa)()` from package base computes an *approximate* condition number of a “traditional” matrix, even non-square ones, with respect to the *p=2* (Euclidean) `<norm>`. `[solve](../../base/html/solve)`.
`<condest>`, a newer *approximate* estimate of the (1-norm) condition number, particularly efficient for large sparse matrices.
### Examples
```
x <- Matrix(rnorm(9), 3, 3)
rcond(x)
## typically "the same" (with more computational effort):
1 / (norm(x) * norm(solve(x)))
rcond(Hilbert(9)) # should be about 9.1e-13
## For non-square matrices:
rcond(x1 <- cbind(1,1:10))# 0.05278
rcond(x2 <- cbind(x1, 2:11))# practically 0, since x2 does not have full rank
## sparse
(S1 <- Matrix(rbind(0:1,0, diag(3:-2))))
rcond(S1)
m1 <- as(S1, "denseMatrix")
all.equal(rcond(S1), rcond(m1))
## wide and sparse
rcond(Matrix(cbind(0, diag(2:-1))))
## Large sparse example ----------
m <- Matrix(c(3,0:2), 2,2)
M <- bdiag(kronecker(Diagonal(2), m), kronecker(m,m))
36*(iM <- solve(M)) # still sparse
MM <- kronecker(Diagonal(10), kronecker(Diagonal(5),kronecker(m,M)))
dim(M3 <- kronecker(bdiag(M,M),MM)) # 12'800 ^ 2
if(interactive()) ## takes about 2 seconds if you have >= 8 GB RAM
system.time(r <- rcond(M3))
## whereas this is *fast* even though it computes solve(M3)
system.time(r. <- rcond(M3, useInv=TRUE))
if(interactive()) ## the values are not the same
c(r, r.) # 0.05555 0.013888
## for all 4 norms available for sparseMatrix :
cbind(rr <- sapply(c("1","I","F","M"),
function(N) rcond(M3, norm=N, useInv=TRUE)))
```
r None
`nearPD` Nearest Positive Definite Matrix
------------------------------------------
### Description
Compute the nearest positive definite matrix to an approximate one, typically a correlation or variance-covariance matrix.
### Usage
```
nearPD(x, corr = FALSE, keepDiag = FALSE, base.matrix = FALSE,
do2eigen = TRUE, doSym = FALSE,
doDykstra = TRUE, only.values = FALSE,
ensureSymmetry = !isSymmetric(x),
eig.tol = 1e-06, conv.tol = 1e-07, posd.tol = 1e-08,
maxit = 100, conv.norm.type = "I", trace = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric *n \* n* approximately positive definite matrix, typically an approximation to a correlation or covariance matrix. If `x` is not symmetric (and `ensureSymmetry` is not false), `<symmpart>(x)` is used. |
| `corr` | logical indicating if the matrix should be a *correlation* matrix. |
| `keepDiag` | logical, generalizing `corr`: if `TRUE`, the resulting matrix should have the same diagonal (`[diag](../../base/html/diag)(x)`) as the input matrix. |
| `base.matrix` | logical indicating if the resulting `mat` component should be a base `[matrix](../../base/html/matrix)` or (by default) a `[Matrix](matrix-class)` of class `[dpoMatrix](dpomatrix-class)`. |
| `do2eigen` | logical indicating if a `[posdefify](../../sfsmisc/html/posdefify)()` eigen step should be applied to the result of the Higham algorithm. |
| `doSym` | logical indicating if `X <- (X + t(X))/2` should be done, after `X <- tcrossprod(Qd, Q)`; some doubt if this is necessary. |
| `doDykstra` | logical indicating if Dykstra's correction should be used; true by default. If false, the algorithm is basically the direct fixpoint iteration *Y(k) = P\_U(P\_S(Y(k-1)))*. |
| `only.values` | logical; if `TRUE`, the result is just the vector of eigenvalues of the approximating matrix. |
| `ensureSymmetry` | logical; by default, `<symmpart>(x)` is used whenever `[isSymmetric](../../base/html/issymmetric)(x)` is not true. The user can explicitly set this to `TRUE` or `FALSE`, saving the symmetry test. *Beware* however that setting it `FALSE` for an **a**symmetric input `x`, is typically nonsense! |
| `eig.tol` | defines relative positiveness of eigenvalues compared to largest one, *λ\_1*. Eigenvalues *λ\_k* are treated as if zero when *λ\_k / λ\_1 ≤ eig.tol*. |
| `conv.tol` | convergence tolerance for Higham algorithm. |
| `posd.tol` | tolerance for enforcing positive definiteness (in the final `posdefify` step when `do2eigen` is `TRUE`). |
| `maxit` | maximum number of iterations allowed. |
| `conv.norm.type` | convergence norm type (`<norm>(*, type)`) used for the Higham algorithm. The default is `"I"` (infinity), for reasons of speed (and backward compatibility); using `"F"` is more in line with Higham's proposal. |
| `trace` | logical or integer specifying if convergence monitoring should be traced. |
### Details
This implements the algorithm of Higham (2002), and then (if `do2eigen` is true) forces positive definiteness using code from `[posdefify](../../sfsmisc/html/posdefify)`. The algorithm of Knol and ten Berge (1989) (not implemented here) is more general in that it allows constraints to (1) fix some rows (and columns) of the matrix and (2) force the smallest eigenvalue to have a certain value.
Note that setting `corr = TRUE` just sets `diag(.) <- 1` within the algorithm.
Higham (2002) uses Dykstra's correction, but the version by Jens Oehlschlaegel did not use it (accidentally), and still gave reasonable results; this simplification, now only used if `doDykstra = FALSE`, was active in `nearPD()` up to Matrix version 0.999375-40.
### Value
If `only.values = TRUE`, a numeric vector of eigenvalues of the approximating matrix; Otherwise, as by default, an S3 object of `[class](../../base/html/class)` `"nearPD"`, basically a list with components
| | |
| --- | --- |
| `mat` | a matrix of class `[dpoMatrix](dpomatrix-class)`, the computed positive-definite matrix. |
| `eigenvalues` | numeric vector of eigenvalues of `mat`. |
| `corr` | logical, just the argument `corr`. |
| `normF` | the Frobenius norm (`<norm>(x-X, "F")`) of the difference between the original and the resulting matrix. |
| `iterations` | number of iterations needed. |
| `converged` | logical indicating if iterations converged. |
### Author(s)
Jens Oehlschlaegel donated a first version. Subsequent changes by the Matrix package authors.
### References
Cheng, Sheung Hun and Higham, Nick (1998) A Modified Cholesky Algorithm Based on a Symmetric Indefinite Factorization; *SIAM J. Matrix Anal. Appl.*, **19**, 1097–1110.
Knol DL, ten Berge JMF (1989) Least-squares approximation of an improper correlation matrix by a proper one. *Psychometrika* **54**, 53–61.
Higham, Nick (2002) Computing the nearest correlation matrix - a problem from finance; *IMA Journal of Numerical Analysis* **22**, 329–343.
### See Also
A first version of this (with non-optional `corr=TRUE`) has been available as `[nearcor](../../sfsmisc/html/nearcor)()`; and more simple versions with a similar purpose `[posdefify](../../sfsmisc/html/posdefify)()`, both from package sfsmisc.
### Examples
```
## Higham(2002), p.334f - simple example
A <- matrix(1, 3,3); A[1,3] <- A[3,1] <- 0
n.A <- nearPD(A, corr=TRUE, do2eigen=FALSE)
n.A[c("mat", "normF")]
n.A.m <- nearPD(A, corr=TRUE, do2eigen=FALSE, base.matrix=TRUE)$mat
stopifnot(exprs = { #=--------------
all.equal(n.A$mat[1,2], 0.760689917)
all.equal(n.A$normF, 0.52779033, tolerance=1e-9)
all.equal(n.A.m, unname(as.matrix(n.A$mat)), tolerance = 1e-15)# seen rel.d.= 1.46e-16
})
set.seed(27)
m <- matrix(round(rnorm(25),2), 5, 5)
m <- m + t(m)
diag(m) <- pmax(0, diag(m)) + 1
(m <- round(cov2cor(m), 2))
str(near.m <- nearPD(m, trace = TRUE))
round(near.m$mat, 2)
norm(m - near.m$mat) # 1.102 / 1.08
if(require("sfsmisc")) {
m2 <- posdefify(m) # a simpler approach
norm(m - m2) # 1.185, i.e., slightly "less near"
}
round(nearPD(m, only.values=TRUE), 9)
## A longer example, extended from Jens' original,
## showing the effects of some of the options:
pr <- Matrix(c(1, 0.477, 0.644, 0.478, 0.651, 0.826,
0.477, 1, 0.516, 0.233, 0.682, 0.75,
0.644, 0.516, 1, 0.599, 0.581, 0.742,
0.478, 0.233, 0.599, 1, 0.741, 0.8,
0.651, 0.682, 0.581, 0.741, 1, 0.798,
0.826, 0.75, 0.742, 0.8, 0.798, 1),
nrow = 6, ncol = 6)
nc. <- nearPD(pr, conv.tol = 1e-7) # default
nc.$iterations # 2
nc.1 <- nearPD(pr, conv.tol = 1e-7, corr = TRUE)
nc.1$iterations # 11 / 12 (!)
ncr <- nearPD(pr, conv.tol = 1e-15)
str(ncr)# still 2 iterations
ncr.1 <- nearPD(pr, conv.tol = 1e-15, corr = TRUE)
ncr.1 $ iterations # 27 / 30 !
ncF <- nearPD(pr, conv.tol = 1e-15, conv.norm = "F")
stopifnot(all.equal(ncr, ncF))# norm type does not matter at all in this example
## But indeed, the 'corr = TRUE' constraint did ensure a better solution;
## cov2cor() does not just fix it up equivalently :
norm(pr - cov2cor(ncr$mat)) # = 0.09994
norm(pr - ncr.1$mat) # = 0.08746 / 0.08805
### 3) a real data example from a 'systemfit' model (3 eq.):
(load(system.file("external", "symW.rda", package="Matrix"))) # "symW"
dim(symW) # 24 x 24
class(symW)# "dsCMatrix": sparse symmetric
if(dev.interactive()) image(symW)
EV <- eigen(symW, only=TRUE)$values
summary(EV) ## looking more closely {EV sorted decreasingly}:
tail(EV)# all 6 are negative
EV2 <- eigen(sWpos <- nearPD(symW)$mat, only=TRUE)$values
stopifnot(EV2 > 0)
if(require("sfsmisc")) {
plot(pmax(1e-3,EV), EV2, type="o", log="xy", xaxt="n",yaxt="n")
eaxis(1); eaxis(2)
} else plot(pmax(1e-3,EV), EV2, type="o", log="xy")
abline(0,1, col="red3",lty=2)
```
r None
`CsparseMatrix-class` Class "CsparseMatrix" of Sparse Matrices in Column-compressed Form
-----------------------------------------------------------------------------------------
### Description
The `"CsparseMatrix"` class is the virtual class of all sparse matrices coded in sorted compressed column-oriented form. Since it is a virtual class, no objects may be created from it. See `showClass("CsparseMatrix")` for its subclasses.
### Slots
`i`:
Object of class `"integer"` of length nnzero (number of non-zero elements). These are the *0-based* row numbers for each non-zero element in the matrix, i.e., `i` must be in `0:(nrow(.)-1)`.
`p`:
`[integer](../../base/html/integer)` vector for providing pointers, one for each column, to the initial (zero-based) index of elements in the column. `.@p` is of length `ncol(.) + 1`, with `p[1] == 0` and `p[length(p)] == nnzero`, such that in fact, `diff(.@p)` are the number of non-zero elements for each column.
In other words, `m@p[1:ncol(m)]` contains the indices of those elements in `m@x` that are the first elements in the respective column of `m`.
`Dim`, `Dimnames`:
inherited from the superclass, see the `[sparseMatrix](sparsematrix-class)` class.
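The relation between the `i` and `p` slots can be inspected directly; a minimal sketch (assuming the Matrix package is attached):

```
library(Matrix)
(m <- Matrix(c(0, 2, 0,  0, 0, 3,  0, 4, 0), 3, 3, sparse = TRUE))
m@i        # 0-based row indices of the non-zero entries, column by column
m@p        # column pointers: length ncol(m) + 1, starting at 0
diff(m@p)  # number of non-zero entries in each column
```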
### Extends
Class `"sparseMatrix"`, directly. Class `"Matrix"`, by class `"sparseMatrix"`.
### Methods
matrix products `[%\*%](matrix-products)`, `[crossprod](matrix-products)()` and `tcrossprod()`, several `[solve](solve-methods)` methods, and other matrix methods available:
Arith
`signature(e1 = "CsparseMatrix", e2 = "numeric")`: ...
Arith
`signature(e1 = "numeric", e2 = "CsparseMatrix")`: ...
Math
`signature(x = "CsparseMatrix")`: ...
band
`signature(x = "CsparseMatrix")`: ...
-
`signature(e1 = "CsparseMatrix", e2 = "numeric")`: ...
-
`signature(e1 = "numeric", e2 = "CsparseMatrix")`: ...
+
`signature(e1 = "CsparseMatrix", e2 = "numeric")`: ...
+
`signature(e1 = "numeric", e2 = "CsparseMatrix")`: ...
coerce
`signature(from = "CsparseMatrix", to = "TsparseMatrix")`: ...
coerce
`signature(from = "CsparseMatrix", to = "denseMatrix")`: ...
coerce
`signature(from = "CsparseMatrix", to = "matrix")`: ...
coerce
`signature(from = "CsparseMatrix", to = "lsparseMatrix")`: ...
coerce
`signature(from = "CsparseMatrix", to = "nsparseMatrix")`: ...
coerce
`signature(from = "TsparseMatrix", to = "CsparseMatrix")`: ...
coerce
`signature(from = "denseMatrix", to = "CsparseMatrix")`: ...
diag
`signature(x = "CsparseMatrix")`: ...
gamma
`signature(x = "CsparseMatrix")`: ...
lgamma
`signature(x = "CsparseMatrix")`: ...
log
`signature(x = "CsparseMatrix")`: ...
t
`signature(x = "CsparseMatrix")`: ...
tril
`signature(x = "CsparseMatrix")`: ...
triu
`signature(x = "CsparseMatrix")`: ...
### Note
All classes extending `CsparseMatrix` have a common validity (see `[validObject](../../methods/html/validobject)`) check function. That function additionally checks the `i` slot for each column to contain increasing row numbers.
In earlier versions of Matrix (`<= 0.999375-16`), `[validObject](../../methods/html/validobject)` automatically re-sorted the entries when necessary, and hence `new()` calls with somewhat permuted `i` and `x` slots worked, as `[new](../../methods/html/new)(...)` (*with* slot arguments) automatically checks the validity.
Now, you have to use `[sparseMatrix](sparsematrix)` to achieve the same functionality or know how to use `.validateCsparse()` to do so.
### See Also
`[colSums](colsums)`, `[kronecker](../../base/html/kronecker)`, and other such methods with own help pages.
Further, the super class of `CsparseMatrix`, `[sparseMatrix](sparsematrix-class)`, and, e.g., class `[dgCMatrix](dgcmatrix-class)` for the links to other classes.
### Examples
```
getClass("CsparseMatrix")
## The common validity check function (based on C code):
getValidity(getClass("CsparseMatrix"))
```
| programming_docs |
r None
`Cholesky-class` Cholesky and Bunch-Kaufman Decompositions
-----------------------------------------------------------
### Description
The `"Cholesky"` class is the class of Cholesky decompositions of positive-semidefinite, real dense matrices. The `"BunchKaufman"` class is the class of Bunch-Kaufman decompositions of symmetric, real matrices. The `"pCholesky"` and `"pBunchKaufman"` classes are their ***p**acked* storage versions.
### Objects from the Class
Objects can be created by calls of the form `new("Cholesky",
...)` or `new("BunchKaufman", ...)`, etc, or rather by calls of the form `<chol>(pm)` or `[BunchKaufman](bunchkaufman-methods)(pm)` where `pm` inherits from the `"[dpoMatrix](dpomatrix-class)"` or `"[dsyMatrix](dsymatrix-class)"` class or as a side-effect of other functions applied to `"dpoMatrix"` objects (see `[dpoMatrix](dpomatrix-class)`).
### Slots
A Cholesky decomposition extends class `[MatrixFactorization](matrixfactorization-class)` but is basically a triangular matrix extending the `"[dtrMatrix](dtrmatrix-class)"` class.
`uplo`:
inherited from the `"dtrMatrix"` class.
`diag`:
inherited from the `"dtrMatrix"` class.
`x`:
inherited from the `"dtrMatrix"` class.
`Dim`:
inherited from the `"dtrMatrix"` class.
`Dimnames`:
inherited from the `"dtrMatrix"` class.
A Bunch-Kaufman decomposition also extends the `"dtrMatrix"` class and has a `perm` slot representing a permutation matrix. The packed versions extend the `"dtpMatrix"` class.
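A small illustrative sketch (the matrix here is made up, not from the Examples below) showing the slots of a Bunch-Kaufman factorization, including `perm`:

```r
library(Matrix)
S <- Matrix(c(2, -1, -1, 2), 2)   # symmetric -> "dsyMatrix"
bk <- BunchKaufman(S)             # a "BunchKaufman" factorization
bk@perm                           # the permutation slot
bk@uplo                           # slots inherited from "dtrMatrix"
```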
### Extends
Class `"MatrixFactorization"` and `"dtrMatrix"`, directly. Class `"dgeMatrix"`, by class `"dtrMatrix"`. Class `"Matrix"`, by class `"dtrMatrix"`.
### Methods
Both these factorizations can *directly* be treated as (triangular) matrices, as they extend `"dtrMatrix"`, see above. There are currently no further explicit methods defined with class `"Cholesky"` or `"BunchKaufman"` in the signature.
### Note
1. Objects of class `"Cholesky"` typically stem from `<chol>(D)`, applied to a *dense* matrix `D`.
On the other hand, the *function* `[Cholesky](cholesky)(S)` applies to a *sparse* matrix `S`, and results in objects inheriting from class `[CHMfactor](chmfactor-class)`.
2. For traditional matrices `m`, `chol(m)` is a traditional matrix as well, triangular, but simply an *n \* n* numeric `[matrix](../../base/html/matrix)`. Hence, for compatibility, the `"Cholesky"` and `"BunchKaufman"` classes (and their `"p*"` packed versions) also extend triangular Matrix classes (such as "dtrMatrix").
Consequently, `[determinant](../../base/html/det)(R)` for `R <- chol(A)` returns the determinant of `R`, not of `A`. This is in contrast to `[CHMfactor](chmfactor-class)` objects `C`, where `determinant(C)` gives the determinant of the *original* matrix `A`, for `C <- Cholesky(A)`; see also the `determinant` method documentation on the `[CHMfactor](chmfactor-class)` class page.
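The contrast described above can be sketched as follows (a made-up 2 by 2 positive-definite matrix; `determinant()` returns the log-modulus by default):

```r
library(Matrix)
A <- Matrix(c(4, 2, 2, 3), 2)          # symmetric positive definite, det(A) = 8
R <- chol(A)                           # dense "Cholesky" factor
determinant(R)                         # log-determinant of R itself, not of A
C <- Cholesky(as(A, "sparseMatrix"))   # sparse route -> "CHMfactor"
determinant(C)                         # log-determinant of the *original* A
```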
### See Also
Classes `[dtrMatrix](dtrmatrix-class)`, `[dpoMatrix](dpomatrix-class)`; function `<chol>`.
Function `[Cholesky](cholesky)` resulting in class `[CHMfactor](chmfactor-class)` objects, *not* class "Cholesky" ones, see the section ‘Note’.
### Examples
```
(sm <- as(as(Matrix(diag(5) + 1), "dsyMatrix"), "dspMatrix"))
signif(csm <- chol(sm), 4)
(pm <- crossprod(Matrix(rnorm(18), nrow = 6, ncol = 3)))
(ch <- chol(pm))
if (toupper(ch@uplo) == "U") # which is TRUE
crossprod(ch)
stopifnot(all.equal(as(crossprod(ch), "matrix"),
as(pm, "matrix"), tolerance=1e-14))
```
r None
`forceSymmetric` Force a Matrix to 'symmetricMatrix' Without Symmetry Checks
-----------------------------------------------------------------------------
### Description
Force a square matrix `x` to a `[symmetricMatrix](symmetricmatrix-class)`, **without** a symmetry check as it would be applied for `as(x,
"symmetricMatrix")`.
### Usage
```
forceSymmetric(x, uplo)
```
### Arguments
| | |
| --- | --- |
| `x` | any square matrix (of numbers), either “"traditional"” (`[matrix](../../base/html/matrix)`) or inheriting from `[Matrix](matrix-class)`. |
| `uplo` | optional string, `"U"` or `"L"` indicating which “triangle” half of `x` should determine the result. The default is `"U"` unless `x` already has a `uplo` slot (i.e., when it is `[symmetricMatrix](symmetricmatrix-class)`, or `[triangularMatrix](triangularmatrix-class)`), where the default will be `x@uplo`. |
### Value
a square matrix inheriting from class `[symmetricMatrix](symmetricmatrix-class)`.
### See Also
`<symmpart>` for the symmetric part of a matrix, or the coercions `as(x, <symmetricMatrix class>)`.
### Examples
```
## Hilbert matrix
i <- 1:6
h6 <- 1/outer(i - 1L, i, "+")
sd <- sqrt(diag(h6))
hh <- t(h6/sd)/sd # theoretically symmetric
isSymmetric(hh, tol=0) # FALSE; hence
try( as(hh, "symmetricMatrix") ) # fails, but this works fine:
H6 <- forceSymmetric(hh)
## result can be pretty surprising:
(M <- Matrix(1:36, 6))
forceSymmetric(M) # symmetric, hence very different in lower triangle
(tm <- tril(M))
forceSymmetric(tm)
```
r None
`expm` Matrix Exponential
--------------------------
### Description
Compute the exponential of a matrix.
### Usage
```
expm(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix, typically inheriting from the `[dMatrix](dmatrix-class)` class. |
### Details
The exponential of a matrix is defined as the infinite Taylor series `expm(A) = I + A + A^2/2! + A^3/3! + ...` (although this is definitely not the way to compute it). The method for the `dgeMatrix` class uses Ward's diagonal Padé approximation with three step preconditioning.
### Value
The matrix exponential of `x`.
### Note
The [expm](https://CRAN.R-project.org/package=expm) package contains newer (partly faster and more accurate) algorithms for `expm()` and includes `[logm](../../expm/html/logm)` and `[sqrtm](../../expm/html/sqrtm)`.
### Author(s)
This is a translation of the implementation of the corresponding Octave function contributed to the Octave project by A. Scottedward Hodel [[email protected]](mailto:[email protected]). A bug in there has been fixed by Martin Maechler.
### References
<https://en.wikipedia.org/wiki/Matrix_exponential>
Cleve Moler and Charles Van Loan (2003) Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. *SIAM Review* **45**, 1, 3–49.
Eric W. Weisstein et al. (1999) *Matrix Exponential*. From MathWorld, <https://mathworld.wolfram.com/MatrixExponential.html>
### See Also
`[Schur](schur)`; additionally, `[expm](../../expm/html/expm)`, `[logm](../../expm/html/logm)`, etc in package [expm](https://CRAN.R-project.org/package=expm).
### Examples
```
(m1 <- Matrix(c(1,0,1,1), nc = 2))
(e1 <- expm(m1)) ; e <- exp(1)
stopifnot(all.equal(e1@x, c(e,0,e,e), tolerance = 1e-15))
(m2 <- Matrix(c(-49, -64, 24, 31), nc = 2))
(e2 <- expm(m2))
(m3 <- Matrix(cbind(0,rbind(6*diag(3),0))))# sparse!
(e3 <- expm(m3)) # upper triangular
```
r None
`dsyMatrix-class` Symmetric Dense (Packed | Non-packed) Numeric Matrices
-------------------------------------------------------------------------
### Description
* The `"dsyMatrix"` class is the class of symmetric, dense matrices in *non-packed* storage and
* `"dspMatrix"` is the class of symmetric dense matrices in *packed* storage. Only the upper triangle or the lower triangle is stored.
### Objects from the Class
Objects can be created by calls of the form `new("dsyMatrix",
...)` or `new("dspMatrix", ...)`, respectively.
### Slots
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular.
`x`:
Object of class `"numeric"`. The numeric values that constitute the matrix, stored in column-major order.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)`.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
`"dsyMatrix"` extends class `"dgeMatrix"`, directly, whereas
`"dspMatrix"` extends class `"ddenseMatrix"`, directly.
Both extend class `"symmetricMatrix"`, directly, and class `"Matrix"` and others, *in*directly, use `[showClass](../../methods/html/rclassutils)("dsyMatrix")`, e.g., for details.
### Methods
coerce
`signature(from = "ddenseMatrix", to = "dgeMatrix")`
coerce
`signature(from = "dspMatrix", to = "matrix")`
coerce
`signature(from = "dsyMatrix", to = "matrix")`
coerce
`signature(from = "dsyMatrix", to = "dspMatrix")`
coerce
`signature(from = "dspMatrix", to = "dsyMatrix")`
norm
`signature(x = "dspMatrix", type = "character")`, or `x = "dsyMatrix"` or `type = "missing"`: Computes the matrix norm of the desired type; see `<norm>`.
rcond
`signature(x = "dspMatrix", type = "character")`, or `x = "dsyMatrix"` or `type = "missing"`: Computes the reciprocal condition number, `<rcond>()`.
solve
`signature(a = "dspMatrix", b = "....")`, and
solve
`signature(a = "dsyMatrix", b = "....")`: `x
<- solve(a,b)` solves *A x = b* for *x*; see `<solve-methods>`.
t
`signature(x = "dsyMatrix")`: Transpose; swaps from upper triangular to lower triangular storage, i.e., the uplo slot from `"U"` to `"L"` or vice versa, the same as for all symmetric matrices.
### See Also
The *positive (Semi-)definite* dense (packed or non-packed numeric matrix classes `[dpoMatrix](dpomatrix-class)`, `[dppMatrix](dpomatrix-class)` and `[corMatrix](dpomatrix-class)`,
Classes `[dgeMatrix](dgematrix-class)` and `[Matrix](matrix-class)`; `[solve](../../base/html/solve)`, `<norm>`, `<rcond>`, `[t](../../base/html/t)`
### Examples
```
## Only upper triangular part matters (when uplo == "U" as per default)
(sy2 <- new("dsyMatrix", Dim = as.integer(c(2,2)), x = c(14, NA,32,77)))
str(t(sy2)) # uplo = "L", and the lower tri. (i.e. NA is replaced).
chol(sy2) #-> "Cholesky" matrix
(sp2 <- pack(sy2)) # a "dspMatrix"
## Coercing to dpoMatrix gives invalid object:
sy3 <- new("dsyMatrix", Dim = as.integer(c(2,2)), x = c(14, -1, 2, -7))
try(as(sy3, "dpoMatrix")) # -> error: not positive definite
```
r None
`atomicVector-class` Virtual Class "atomicVector" of Atomic Vectors
--------------------------------------------------------------------
### Description
The `[class](../../base/html/class)` `"atomicVector"` is a *virtual* class containing all atomic vector classes of base **R**, as also implicitly defined via `[is.atomic](../../base/html/is.recursive)`.
### Objects from the Class
A virtual Class: No objects may be created from it.
### Methods
In the Matrix package, the `"atomicVector"` class is used in signatures where, typically, “old-style” `"matrix"` objects can be used and substituted by simple vectors.
### Extends
The atomic classes `"logical"`, `"integer"`, `"double"`, `"numeric"`, `"complex"`, `"raw"` and `"character"` are extended directly. Note that `"numeric"` already contains `"integer"` and `"double"`, but we want all of them to be direct subclasses of `"atomicVector"`.
### Author(s)
Martin Maechler
### See Also
`[is.atomic](../../base/html/is.recursive)`, `[integer](../../base/html/integer)`, `[numeric](../../base/html/numeric)`, `[complex](../../base/html/complex)`, etc.
### Examples
```
showClass("atomicVector")
```
r None
`uniqTsparse` Unique (Sorted) TsparseMatrix Representations
------------------------------------------------------------
### Description
Detect or “unify” (or “standardize”) non-unique `[TsparseMatrix](tsparsematrix-class)` matrices, producing unique *(i,j,x)* triplets which are *sorted*, first in *j*, then in *i* (in the sense of `[order](../../base/html/order)(j,i)`).
Note that `new(.)`, `[spMatrix](spmatrix)` or `[sparseMatrix](sparsematrix)` constructors for `"dgTMatrix"` (and other `"[TsparseMatrix](tsparsematrix-class)"` classes) implicitly add *x\_k*'s that belong to identical *(i\_k, j\_k)* pairs.
`anyDuplicatedT()` reports the index of the first duplicated pair, or `0` if there is none.
`uniqTsparse(x)` replaces duplicated index pairs *(i,j)* and their corresponding `x` slot entries by the triple *(i,j, sx)* where `sx = sum(x [<all pairs matching (i,j)>])`, and for logical `x`, addition is replaced by logical *or*.
### Usage
```
uniqTsparse(x, class.x = c(class(x)))
anyDuplicatedT(x, di = dim(x))
```
### Arguments
| | |
| --- | --- |
| `x` | a sparse matrix stored in triplet form, i.e., inheriting from class `[TsparseMatrix](tsparsematrix-class)`. |
| `class.x` | optional character string specifying `class(x)`. |
| `di` | the matrix dimension of `x`, `[dim](../../base/html/dim)(x)`. |
### Value
`uniqTsparse(x)` returns a `[TsparseMatrix](tsparsematrix-class)` “like x”, of the same class and with the same elements, just internally possibly changed to “unique” *(i,j,x)* triplets in *sorted* order.
`anyDuplicatedT(x)` returns an `[integer](../../base/html/integer)` as `[anyDuplicated](../../base/html/duplicated)`, the *index* of the first duplicated entry (from the *(i,j)* pairs) if there is one, and `0` otherwise.
### See Also
`[TsparseMatrix](tsparsematrix-class)`, for uniqueness, notably `[dgTMatrix](dgtmatrix-class)`.
### Examples
```
example("dgTMatrix-class", echo=FALSE)
## -> 'T2' with (i,j,x) slots of length 5 each
T2u <- uniqTsparse(T2)
stopifnot(## They "are" the same (and print the same):
all.equal(T2, T2u, tol=0),
## but not internally:
anyDuplicatedT(T2) == 2,
anyDuplicatedT(T2u) == 0,
length(T2 @x) == 5,
length(T2u@x) == 3)
## is 'x' a "uniq Tsparse" Matrix ? [requires x to be TsparseMatrix!]
non_uniqT <- function(x, di = dim(x))
is.unsorted(x@j) || anyDuplicatedT(x, di)
non_uniqT(T2 ) # TRUE
non_uniqT(T2u) # FALSE
T3 <- T2u
T3[1, c(1,3)] <- 10; T3[2, c(1,5)] <- 20
T3u <- uniqTsparse(T3)
str(T3u) # sorted in 'j', and within j, sorted in i
stopifnot(!non_uniqT(T3u))
## Logical l.TMatrix and n.TMatrix :
(L2 <- T2 > 0)
validObject(L2u <- uniqTsparse(L2))
(N2 <- as(L2, "nMatrix"))
validObject(N2u <- uniqTsparse(N2))
stopifnot(N2u@i == L2u@i, L2u@i == T2u@i, N2@i == L2@i, L2@i == T2@i,
N2u@j == L2u@j, L2u@j == T2u@j, N2@j == L2@j, L2@j == T2@j)
# now with a nasty NA [partly failed in Matrix 1.1-5]:
L2.N <- L2; L2.N@x[2] <- NA; L2.N
validObject(L2.N)
(m2N <- as.matrix(L2.N)) # looks "ok"
iL <- as.integer(m2N)
stopifnot(identical(10L, which(is.na(match(iL, 0:1)))))
symnum(m2N)
```
r None
`Subassign-methods` Methods for "[<-" - Assignment to Subsets of a Matrix
--------------------------------------------------------------------------
### Description
Methods for `"[<-"`, i.e., assignment to subsets, mostly of matrices, in package Matrix.
**Note**: Contrary to standard `[matrix](../../base/html/matrix)` assignment in base **R**, in `x[..] <- val` it is typically an **error** (see `[stop](../../base/html/stop)`) when the [type](../../base/html/typeof) or `[class](../../base/html/class)` of `val` would require the class of `x` to be changed, e.g., when `x` is logical, say `"lsparseMatrix"`, and `val` is numeric. In other cases, e.g., when `x` is a `"nsparseMatrix"` and `val` is not `TRUE` or `FALSE`, a warning is signalled, and `val` is “interpreted” as `[logical](../../base/html/logical)`, and (logical) `[NA](../../base/html/na)` is interpreted as `TRUE`.
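A hedged sketch of this behaviour (class names as above; the exact error and warning messages depend on the Matrix version):

```r
library(Matrix)
L <- Matrix(c(TRUE, FALSE, TRUE, TRUE), 2, sparse = TRUE)  # a logical sparse matrix
try(L[1, 2] <- 3.5)   # numeric into a logical matrix: typically an error (see Note)
N <- as(L, "nMatrix") # a pattern ("nsparseMatrix") matrix
N[1, 2] <- 2          # not TRUE/FALSE: warning, value interpreted as logical TRUE
N[1, 2]
```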
### Methods
There are *many many* more than these:
x = "Matrix", i = "missing", j = "missing", value= "ANY"
is currently a simple fallback method implementation which ensures “readable” error messages.
x = "Matrix", i = "ANY", j = "ANY", value= "ANY"
currently gives an error
x = "denseMatrix", i = "index", j = "missing", value= "numeric"
...
x = "denseMatrix", i = "index", j = "index", value= "numeric"
...
x = "denseMatrix", i = "missing", j = "index", value= "numeric"
...
### See Also
`[[-methods](xtrct-methods)` for subsetting `"Matrix"` objects; the `[index](index-class)` class; `[Extract](../../base/html/extract)` about the standard subset assignment (and extraction).
### Examples
```
set.seed(101)
(a <- m <- Matrix(round(rnorm(7*4),2), nrow = 7))
a[] <- 2.2 # <<- replaces **every** entry
a
## as do these:
a[,] <- 3 ; a[TRUE,] <- 4
m[2, 3] <- 3.14 # simple number
m[3, 3:4]<- 3:4 # simple numeric of length 2
## sub matrix assignment:
m[-(4:7), 3:4] <- cbind(1,2:4) #-> upper right corner of 'm'
m[3:5, 2:3] <- 0
m[6:7, 1:2] <- Diagonal(2)
m
## rows or columns only:
m[1,] <- 10
m[,2] <- 1:7
m[-(1:6), ] <- 3:0 # not the first 6 rows, i.e. only the 7th
as(m, "sparseMatrix")
```
r None
`ltrMatrix-class` Triangular Dense Logical Matrices
----------------------------------------------------
### Description
The `"ltrMatrix"` class is the class of triangular, dense, logical matrices in nonpacked storage. The `"ltpMatrix"` class is the same except in packed storage.
### Slots
`x`:
Object of class `"logical"`. The logical values that constitute the matrix, stored in column-major order.
`uplo`:
Object of class `"character"`. Must be either "U", for upper triangular, or "L", for lower triangular.
`diag`:
Object of class `"character"`. Must be either `"U"`, for unit triangular (diagonal is all ones), or `"N"`; see `[triangularMatrix](triangularmatrix-class)`.
`Dim`,`Dimnames`:
The dimension (a length-2 `"integer"`) and corresponding names (or `NULL`), see the `[Matrix](matrix-class)` class.
`factors`:
Object of class `"list"`. A named list of factorizations that have been computed for the matrix.
### Extends
Both extend classes `"[ldenseMatrix](ldensematrix-class)"` and `"[triangularMatrix](triangularmatrix-class)"`, directly; further, class `"Matrix"`, `"[lMatrix](dmatrix-class)"` and others, *in*directly. Use `[showClass](../../methods/html/rclassutils)("ltrMatrix")`, e.g., for details.
### Methods
Currently, mainly `[t](../../base/html/t)()` and coercion methods (for `[as](../../methods/html/as)(.)`); use, e.g., `[showMethods](../../methods/html/showmethods)(class="ltpMatrix")` for details.
### See Also
Classes `[lgeMatrix](lgematrix-class)`, `[Matrix](matrix-class)`; function `[t](../../base/html/t)`
### Examples
```
showClass("ltrMatrix")
str(new("ltpMatrix"))
(lutr <- as(upper.tri(matrix(,4,4)), "ltrMatrix"))
str(lutp <- as(lutr, "ltpMatrix"))# packed matrix: only 10 = (4+1)*4/2 entries
!lutp ## the logical negation (is *not* logical triangular !)
## but this one is:
stopifnot(all.equal(lutp, as(!!lutp, "ltpMatrix")))
```
r None
`dsparseMatrix-class` Virtual Class "dsparseMatrix" of Numeric Sparse Matrices
-------------------------------------------------------------------------------
### Description
The Class `"dsparseMatrix"` is the virtual (super) class of all numeric sparse matrices.
### Slots
`Dim`:
the matrix dimension, see class `"[Matrix](matrix-class)"`.
`Dimnames`:
see the `"Matrix"` class.
`x`:
a `[numeric](../../base/html/numeric)` vector containing the (non-zero) matrix entries.
### Extends
Class `"dMatrix"` and `"sparseMatrix"`, directly.
Class `"Matrix"`, by the above classes.
### See Also
the documentation of the (non virtual) sub classes, see `showClass("dsparseMatrix")`; in particular, [dgTMatrix](dgtmatrix-class), [dgCMatrix](dgcmatrix-class), and [dgRMatrix](dgrmatrix-class).
### Examples
```
showClass("dsparseMatrix")
```
r None
`bkde2D` Compute a 2D Binned Kernel Density Estimate
-----------------------------------------------------
### Description
Returns the set of grid points in each coordinate direction, and the matrix of density estimates over the mesh induced by the grid points. The kernel is the standard bivariate normal density.
### Usage
```
bkde2D(x, bandwidth, gridsize = c(51L, 51L), range.x, truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | a two-column numeric matrix containing the observations from the distribution whose density is to be estimated. Missing values are not allowed. |
| `bandwidth` | numeric vector of length 2, containing the bandwidth to be used in each coordinate direction. |
| `gridsize` | vector containing the number of equally spaced points in each direction over which the density is to be estimated. |
| `range.x` | a list containing two vectors, where each vector contains the minimum and maximum values of `x` at which to compute the estimate for each direction. The default minimum in each direction is the minimum data value minus 1.5 times the bandwidth for that direction. The default maximum is the maximum data value plus 1.5 times the bandwidth for that direction. |
| `truncate` | logical flag: if TRUE, data with `x` values outside the range specified by `range.x` are ignored. |
### Value
a list containing the following components:
| | |
| --- | --- |
| `x1` | vector of values of the grid points in the first coordinate direction at which the estimate was computed. |
| `x2` | vector of values of the grid points in the second coordinate direction at which the estimate was computed. |
| `fhat` | matrix of density estimates over the mesh induced by `x1` and `x2`. |
### Details
This is the binned approximation to the 2D kernel density estimate. Linear binning is used to obtain the bin counts and the Fast Fourier Transform is used to perform the discrete convolutions. For each (`x1`, `x2`) pair the bivariate Gaussian kernel is centered on that location, and the heights of the kernel (scaled by the bandwidths) at each data point are summed. This sum, after a normalization, is the corresponding `fhat` value in the output.
### References
Wand, M. P. (1994). Fast Computation of Multivariate Kernel Estimators. *Journal of Computational and Graphical Statistics,* **3**, 433-445.
Wand, M. P. and Jones, M. C. (1995). *Kernel Smoothing.* Chapman and Hall, London.
### See Also
`<bkde>`, `[density](../../stats/html/density)`, `[hist](../../graphics/html/hist)`.
### Examples
```
data(geyser, package="MASS")
x <- cbind(geyser$duration, geyser$waiting)
est <- bkde2D(x, bandwidth=c(0.7, 7))
contour(est$x1, est$x2, est$fhat)
persp(est$fhat)
```
r None
`dpih` Select a Histogram Bin Width
------------------------------------
### Description
Uses direct plug-in methodology to select the bin width of a histogram.
### Usage
```
dpih(x, scalest = "minim", level = 2L, gridsize = 401L,
range.x = range(x), truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector containing the sample on which the histogram is to be constructed. |
| `scalest` | estimate of scale. `"stdev"` - standard deviation is used. `"iqr"` - inter-quartile range divided by 1.349 is used. `"minim"` - minimum of `"stdev"` and `"iqr"` is used. |
| `level` | number of levels of functional estimation used in the plug-in rule. |
| `gridsize` | number of grid points used in the binned approximations to functional estimates. |
| `range.x` | range over which functional estimates are obtained. The default is the minimum and maximum data values. |
| `truncate` | if `truncate` is `TRUE` then observations outside of the interval specified by `range.x` are omitted. Otherwise, they are used to weight the extreme grid points. |
### Details
The direct plug-in approach, where unknown functionals that appear in expressions for the asymptotically optimal bin width and bandwidths are replaced by kernel estimates, is used. The normal distribution is used to provide an initial estimate.
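An illustrative sketch (reusing the `geyser` data from the Examples below) comparing the three `scalest` choices described in the Arguments; the three bin widths will generally differ:

```r
library(KernSmooth)
data(geyser, package = "MASS")
x <- geyser$duration
## bin widths under the three scale estimates
c(stdev = dpih(x, scalest = "stdev"),
  iqr   = dpih(x, scalest = "iqr"),
  minim = dpih(x, scalest = "minim"))
```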
### Value
the selected bin width.
### Background
This method for selecting the bin width of a histogram is described in Wand (1995). It is an extension of the normal scale rule of Scott (1979) and uses plug-in ideas from bandwidth selection for kernel density estimation (e.g. Sheather and Jones, 1991).
### References
Scott, D. W. (1979). On optimal and data-based histograms. *Biometrika*, **66**, 605–610.
Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. *Journal of the Royal Statistical Society, Series B*, **53**, 683–690.
Wand, M. P. (1995). Data-based choice of histogram binwidth. *The American Statistician*, **51**, 59–64.
### See Also
`[hist](../../graphics/html/hist)`
### Examples
```
data(geyser, package="MASS")
x <- geyser$duration
h <- dpih(x)
bins <- seq(min(x)-h, max(x)+h, by=h)
hist(x, breaks=bins)
```
r None
`dpill` Select a Bandwidth for Local Linear Regression
-------------------------------------------------------
### Description
Use direct plug-in methodology to select the bandwidth of a local linear Gaussian kernel regression estimate, as described by Ruppert, Sheather and Wand (1995).
### Usage
```
dpill(x, y, blockmax = 5, divisor = 20, trim = 0.01, proptrun = 0.05,
gridsize = 401L, range.x, truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector of x data. Missing values are not accepted. |
| `y` | numeric vector of y data. This must be same length as `x`, and missing values are not accepted. |
| `blockmax` | the maximum number of blocks of the data for construction of an initial parametric estimate. |
| `divisor` | the value that the sample size is divided by to determine a lower limit on the number of blocks of the data for construction of an initial parametric estimate. |
| `trim` | the proportion of the sample trimmed from each end in the `x` direction before application of the plug-in methodology. |
| `proptrun` | the proportion of the range of `x` at each end truncated in the functional estimates. |
| `gridsize` | number of equally-spaced grid points over which the function is to be estimated. |
| `range.x` | vector containing the minimum and maximum values of `x` at which to compute the estimate. For density estimation the default is the minimum and maximum data values with 5% of the range added to each end. For regression estimation the default is the minimum and maximum data values. |
| `truncate` | logical flag: if `TRUE`, data with `x` values outside the range specified by `range.x` are ignored. |
### Details
The direct plug-in approach, where unknown functionals that appear in expressions for the asymptotically optimal bandwidths are replaced by kernel estimates, is used. The kernel is the standard normal density. Least squares quartic fits over blocks of data are used to obtain an initial estimate. Mallows' *Cp* is used to select the number of blocks.
### Value
the selected bandwidth.
### Warning
If there are severe irregularities (i.e. outliers, sparse regions) in the `x` values then the local polynomial smooths required for the bandwidth selection algorithm may become degenerate and the function will crash. Outliers in the `y` direction may lead to deterioration of the quality of the selected bandwidth.
### References
Ruppert, D., Sheather, S. J. and Wand, M. P. (1995). An effective bandwidth selector for local least squares regression. *Journal of the American Statistical Association*, **90**, 1257–1270.
Wand, M. P. and Jones, M. C. (1995). *Kernel Smoothing.* Chapman and Hall, London.
### See Also
`[ksmooth](../../stats/html/ksmooth)`, `<locpoly>`.
### Examples
```
data(geyser, package = "MASS")
x <- geyser$duration
y <- geyser$waiting
plot(x, y)
h <- dpill(x, y)
fit <- locpoly(x, y, bandwidth = h)
lines(fit)
```
r None
`locpoly` Estimate Functions Using Local Polynomials
-----------------------------------------------------
### Description
Estimates a probability density function, regression function or their derivatives using local polynomials. A fast binned implementation over an equally-spaced grid is used.
### Usage
```
locpoly(x, y, drv = 0L, degree, kernel = "normal",
bandwidth, gridsize = 401L, bwdisc = 25,
range.x, binned = FALSE, truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector of x data. Missing values are not accepted. |
| `bandwidth` | the kernel bandwidth smoothing parameter. It may be a single number or an array having length `gridsize`, representing a bandwidth that varies according to the location of estimation. |
| `y` | vector of y data. This must be same length as `x`, and missing values are not accepted. |
| `drv` | order of derivative to be estimated. |
| `degree` | degree of local polynomial used. Its value must be greater than or equal to the value of `drv`. The default value of `degree` is `drv` + 1. |
| `kernel` | `"normal"` - the Gaussian density function. Currently ignored. |
| `gridsize` | number of equally-spaced grid points over which the function is to be estimated. |
| `bwdisc` | number of logarithmically-equally-spaced bandwidths on which `bandwidth` is discretised, to speed up computation. |
| `range.x` | vector containing the minimum and maximum values of `x` at which to compute the estimate. |
| `binned` | logical flag: if `TRUE`, then `x` and `y` are taken to be grid counts rather than raw data. |
| `truncate` | logical flag: if `TRUE`, data with `x` values outside the range specified by `range.x` are ignored. |
### Value
if `y` is specified, a local polynomial regression estimate of E[Y|X] (or its derivative) is computed. If `y` is missing, a local polynomial estimate of the density of `x` (or its derivative) is computed.
a list containing the following components:
| | |
| --- | --- |
| `x` | vector of sorted x values at which the estimate was computed. |
| `y` | vector of smoothed estimates for either the density or the regression at the corresponding `x`. |
### Details
Local polynomial fitting with a kernel weight is used to estimate either a density, regression function or their derivatives. In the case of density estimation, the data are binned and the local fitting procedure is applied to the bin counts. In either case, binned approximations over an equally-spaced grid are used for fast computation. The bandwidth may be either scalar or a vector of length `gridsize`.
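The vector-bandwidth case mentioned above can be sketched as follows (the bandwidth sequence is an arbitrary illustration, widening toward the right of the grid):

```r
library(KernSmooth)
data(geyser, package = "MASS")
x <- geyser$duration
gs <- 401L
## an illustrative location-varying bandwidth: one value per grid point
bw <- seq(0.2, 0.5, length.out = gs)
est <- locpoly(x, bandwidth = bw, gridsize = gs)
plot(est, type = "l")
```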
### References
Wand, M. P. and Jones, M. C. (1995). *Kernel Smoothing.* Chapman and Hall, London.
### See Also
`<bkde>`, `[density](../../stats/html/density)`, `<dpill>`, `[ksmooth](../../stats/html/ksmooth)`, `[loess](../../stats/html/loess)`, `[smooth](../../stats/html/smooth)`, `[supsmu](../../stats/html/supsmu)`.
### Examples
```
data(geyser, package = "MASS")
# local linear density estimate
x <- geyser$duration
est <- locpoly(x, bandwidth = 0.25)
plot(est, type = "l")
# local linear regression estimate
y <- geyser$waiting
plot(x, y)
fit <- locpoly(x, y, bandwidth = 0.25)
lines(fit)
```
r None
`bkfe` Compute a Binned Kernel Functional Estimate
---------------------------------------------------
### Description
Returns an estimate of a binned approximation to the kernel estimate of the specified density functional. The kernel is the standard normal density.
### Usage
```
bkfe(x, drv, bandwidth, gridsize = 401L, range.x, binned = FALSE,
truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector of observations from the distribution whose density is to be estimated. Missing values are not allowed. |
| `drv` | order of derivative in the density functional. Must be a non-negative even integer. |
| `bandwidth` | the kernel bandwidth smoothing parameter. Must be supplied. |
| `gridsize` | the number of equally-spaced points over which binning is performed. |
| `range.x` | vector containing the minimum and maximum values of `x` at which to compute the estimate. The default is the minimum and maximum data values, extended by the support of the kernel. |
| `binned` | logical flag: if `TRUE`, then `x` and `y` are taken to be grid counts rather than raw data. |
| `truncate` | logical flag: if `TRUE`, data with `x` values outside the range specified by `range.x` are ignored. |
### Details
The density functional of order `drv` is the integral of the product of the density and its `drv`th derivative. The kernel estimates of such quantities are computed using a binned implementation, and the kernel is the standard normal density.
### Value
the (scalar) estimated functional.
### Background
Estimates of this type were proposed by Sheather and Jones (1991).
### References
Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. *Journal of the Royal Statistical Society, Series B*, **53**, 683–690.
Wand, M. P. and Jones, M. C. (1995). *Kernel Smoothing.* Chapman and Hall, London.
### Examples
```
data(geyser, package="MASS")
x <- geyser$duration
est <- bkfe(x, drv=4, bandwidth=0.3)
```
r None
`bkde` Compute a Binned Kernel Density Estimate
------------------------------------------------
### Description
Returns x and y coordinates of the binned kernel density estimate of the probability density of the data.
### Usage
```
bkde(x, kernel = "normal", canonical = FALSE, bandwidth,
gridsize = 401L, range.x, truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector of observations from the distribution whose density is to be estimated. Missing values are not allowed. |
| `bandwidth` | the kernel bandwidth smoothing parameter. Larger values of `bandwidth` make smoother estimates, smaller values of `bandwidth` make less smooth estimates. The default is a bandwidth computed from the variance of `x`, specifically the ‘oversmoothed bandwidth selector’ of Wand and Jones (1995, page 61). |
| `kernel` | character string which determines the smoothing kernel. `kernel` can be: `"normal"` - the Gaussian density function (the default). `"box"` - a rectangular box. `"epanech"` - the centred beta(2,2) density. `"biweight"` - the centred beta(3,3) density. `"triweight"` - the centred beta(4,4) density. This can be abbreviated to any unique abbreviation. |
| `canonical` | length-one logical vector: if `TRUE`, canonically scaled kernels are used. |
| `gridsize` | the number of equally spaced points at which to estimate the density. |
| `range.x` | vector containing the minimum and maximum values of `x` at which to compute the estimate. The default is the minimum and maximum data values, extended by the support of the kernel. |
| `truncate` | logical flag: if `TRUE`, data with `x` values outside the range specified by `range.x` are ignored. |
### Details
This is the binned approximation to the ordinary kernel density estimate. Linear binning is used to obtain the bin counts. For each `x` value in the output, the kernel is centered on that `x` and the heights of the kernel at each datapoint are summed. This sum, after a normalization, is the corresponding `y` value in the output.
### Value
a list containing the following components:
| | |
| --- | --- |
| `x` | vector of sorted `x` values at which the estimate was computed. |
| `y` | vector of density estimates at the corresponding `x`. |
### Background
Density estimation is a smoothing operation. Inevitably there is a trade-off between bias in the estimate and the estimate's variability: large bandwidths will produce smooth estimates that may hide local features of the density; small bandwidths may introduce spurious bumps into the estimate.
### References
Wand, M. P. and Jones, M. C. (1995). *Kernel Smoothing.* Chapman and Hall, London.
### See Also
`[density](../../stats/html/density)`, `<dpik>`, `[hist](../../graphics/html/hist)`, `[ksmooth](../../stats/html/ksmooth)`.
### Examples
```
data(geyser, package="MASS")
x <- geyser$duration
est <- bkde(x, bandwidth=0.25)
plot(est, type="l")
```
`dpik` Select a Bandwidth for Kernel Density Estimation
--------------------------------------------------------
### Description
Use direct plug-in methodology to select the bandwidth of a kernel density estimate.
### Usage
```
dpik(x, scalest = "minim", level = 2L, kernel = "normal",
canonical = FALSE, gridsize = 401L, range.x = range(x),
truncate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric vector containing the sample on which the kernel density estimate is to be constructed. |
| `scalest` | estimate of scale. `"stdev"` - standard deviation is used. `"iqr"` - inter-quartile range divided by 1.349 is used. `"minim"` - minimum of `"stdev"` and `"iqr"` is used. |
| `level` | number of levels of functional estimation used in the plug-in rule. |
| `kernel` | character string which determines the smoothing kernel. `kernel` can be: `"normal"` - the Gaussian density function (the default). `"box"` - a rectangular box. `"epanech"` - the centred beta(2,2) density. `"biweight"` - the centred beta(3,3) density. `"triweight"` - the centred beta(4,4) density. This can be abbreviated to any unique abbreviation. |
| `canonical` | logical flag: if `TRUE`, canonically scaled kernels are used |
| `gridsize` | the number of equally-spaced points over which binning is performed to obtain kernel functional approximation. |
| `range.x` | vector containing the minimum and maximum values of `x` at which to compute the estimate. The default is the minimum and maximum data values. |
| `truncate` | logical flag: if `TRUE`, data with `x` values outside the range specified by `range.x` are ignored. |
### Details
The direct plug-in approach, where unknown functionals that appear in expressions for the asymptotically optimal bandwidths are replaced by kernel estimates, is used. The normal distribution is used to provide an initial estimate.
### Value
the selected bandwidth.
### Background
This method for selecting the bandwidth of a kernel density estimate was proposed by Sheather and Jones (1991) and is described in Section 3.6 of Wand and Jones (1995).
### References
Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. *Journal of the Royal Statistical Society, Series B*, **53**, 683–690.
Wand, M. P. and Jones, M. C. (1995). *Kernel Smoothing.* Chapman and Hall, London.
### See Also
`<bkde>`, `[density](../../stats/html/density)`, `[ksmooth](../../stats/html/ksmooth)`
### Examples
```
data(geyser, package="MASS")
x <- geyser$duration
h <- dpik(x)
est <- bkde(x, bandwidth=h)
plot(est,type="l")
```
`InternalMethods` Internal Generic Functions
---------------------------------------------
### Description
Many **R**-internal functions are *generic* and allow methods to be written for them.
### Details
The following primitive and internal functions are *generic*, i.e., you can write `[methods](../../utils/html/methods)` for them:
`[[](extract)`, `[[[](extract)`, `[$](extract)`, `[[<-](extract)`, `[[[<-](extract)`, `[$<-](extract)`,
`<length>`, `[length<-](length)`, `<lengths>`, `<dimnames>`, `[dimnames<-](dimnames)`, `<dim>`, `[dim<-](dim)`, `<names>`, `[names<-](names)`, `[levels<-](levels)`, `[@<-](slotop)`,
`<c>`, `<unlist>`, `<cbind>`, `[rbind](cbind)`,
`[as.character](character)`, `[as.complex](complex)`, `[as.double](double)`, `[as.integer](integer)`, `[as.logical](logical)`, `[as.raw](raw)`, `[as.vector](vector)`, `[as.call](call)`, `<as.environment>`, `[is.array](array)`, `[is.matrix](matrix)`, `[is.na](na)`, `[anyNA](na)`, `[is.nan](is.finite)`, `<is.finite>`, `[is.infinite](is.finite)`, `[is.numeric](numeric)`, `<nchar>`, `<rep>`, `[rep.int](rep)`, `[rep\_len](rep)`, `[seq.int](seq)` (which dispatches methods for `"seq"`), `<is.unsorted>` and `<xtfrm>`
In addition, `is.name` is a synonym for `[is.symbol](name)` and dispatches methods for the latter. Similarly, `[as.numeric](numeric)` is a synonym for `as.double` and dispatches methods for the latter, i.e., S3 methods are for `as.double`, whereas S4 methods are to be written for `as.numeric`.
Note that all of the [group generic](groupgeneric) functions are also internal/primitive and allow methods to be written for them.
`.S3PrimitiveGenerics` is a character vector listing the primitives which are internal generic and not [group generic](groupgeneric). Currently `[as.vector](vector)`, `<cbind>`, `[rbind](cbind)` and `<unlist>` are the internal non-primitive functions which are internally generic.
For efficiency, internal dispatch only occurs on *objects*, that is those for which `<is.object>` returns true.
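As a minimal sketch of this (the class name `myclass` and the value it returns are made up for illustration), a method written for the primitive `length` is consulted only when its argument is an object:

```
## length() is internally generic: a user method is dispatched,
## but only for classed objects, i.e. when is.object(x) is TRUE
length.myclass <- function(x) 42L
x <- structure(list(1, 2, 3), class = "myclass")
is.object(x)         # TRUE
length(x)            # 42, via length.myclass
length(unclass(x))   # 3, the internal default
```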
### See Also
`[methods](../../utils/html/methods)` for the methods which are available.
`backsolve` Solve an Upper or Lower Triangular System
------------------------------------------------------
### Description
Solves a triangular system of linear equations.
### Usage
```
backsolve(r, x, k = ncol(r), upper.tri = TRUE,
transpose = FALSE)
forwardsolve(l, x, k = ncol(l), upper.tri = FALSE,
transpose = FALSE)
```
### Arguments
| | |
| --- | --- |
| `r, l` | an upper (or lower) triangular matrix giving the coefficients for the system to be solved. Values below (above) the diagonal are ignored. |
| `x` | a matrix whose columns give the right-hand sides for the equations. |
| `k` | The number of columns of `r` and rows of `x` to use. |
| `upper.tri` | logical; if `TRUE` (default), the *upper* *tri*angular part of `r` is used. Otherwise, the lower one. |
| `transpose` | logical; if `TRUE`, solve *r' \* y = x* for *y*, i.e., `t(r) %*% y == x`. |
### Details
Solves a system of linear equations where the coefficient matrix is upper (or ‘right’, ‘R’) or lower (‘left’, ‘L’) triangular.
`x <- backsolve (R, b)` solves *R x = b*, and
`x <- forwardsolve(L, b)` solves *L x = b*, respectively.
The `r`/`l` must have at least `k` rows and columns, and `x` must have at least `k` rows.
This is a wrapper for the level-3 BLAS routine `dtrsm`.
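For instance, the two triangular solves of the classic Cholesky approach to a symmetric positive-definite system can be sketched as follows (the matrix here is an arbitrary random example):

```
## Solve A x = b via the Cholesky factor: A = t(R) %*% R, R upper triangular
set.seed(1)
A <- crossprod(matrix(rnorm(9), 3))      # symmetric positive definite
b <- c(1, 2, 3)
R <- chol(A)
y <- backsolve(R, b, transpose = TRUE)   # forward step: solve t(R) y = b
x <- backsolve(R, y)                     # backward step: solve R x = y
stopifnot(all.equal(drop(A %*% x), b))
```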
### Value
The solution of the triangular system. The result will be a vector if `x` is a vector and a matrix if `x` is a matrix.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Dongarra, J. J., Bunch, J. R., Moler, C. B. and Stewart, G. W. (1978) *LINPACK Users Guide*. Philadelphia: SIAM Publications.
### See Also
`<chol>`, `<qr>`, `<solve>`.
### Examples
```
## upper triangular matrix 'r':
r <- rbind(c(1,2,3),
c(0,1,1),
c(0,0,2))
( y <- backsolve(r, x <- c(8,4,2)) ) # -1 3 1
r %*% y # == x = (8,4,2)
backsolve(r, x, transpose = TRUE) # 8 -12 -5
```
`Recall` Recursive Calling
---------------------------
### Description
`Recall` is used as a placeholder for the name of the function in which it is called. It allows the definition of recursive functions which still work after being renamed, see example below.
### Usage
```
Recall(...)
```
### Arguments
| | |
| --- | --- |
| `...` | all the arguments to be passed. |
### Note
`Recall` will not work correctly when passed as a function argument, e.g. to the `apply` family of functions.
### See Also
`<do.call>` and `<call>`.
`[local](eval)` for another way to write anonymous recursive functions.
### Examples
```
## A trivial (but inefficient!) example:
fib <- function(n)
if(n<=2) { if(n>=0) 1 else 0 } else Recall(n-1) + Recall(n-2)
fibonacci <- fib; rm(fib)
## renaming wouldn't work without Recall
fibonacci(10) # 55
```
`LongVectors` Long Vectors
---------------------------
### Description
Vectors of *2^31* or more elements were added in **R** 3.0.0.
### Details
Prior to **R** 3.0.0, all vectors in **R** were restricted to at most *2^31 - 1* elements and could only be indexed by integer vectors.
Currently all [atomic](vector) (raw, logical, integer, numeric, complex, character) vectors, <list>s and <expression>s can be much longer on 64-bit platforms: such vectors are referred to as ‘long vectors’ and have a slightly different internal structure. In theory they can contain up to *2^52* elements, but address space limits of current CPUs and OSes will be much smaller. Such objects will have a <length> that is expressed as a double, and can be indexed by double vectors.
Arrays (including matrices) can be based on long vectors provided each of their dimensions is at most *2^31 - 1*: thus there are no 1-dimensional long arrays.
**R** code typically only needs minor changes to work with long vectors, maybe only checking that `as.integer` is not used unnecessarily for e.g. lengths. However, compiled code typically needs quite extensive changes. Note that the `[.C](foreign)` and `[.Fortran](foreign)` interfaces do not accept long vectors, so `[.Call](callexternal)` (or similar) has to be used.
Because of the storage requirements (a minimum of 64 bytes per character string), character vectors are only going to be usable if they have a small number of distinct elements, and even then factors will be more efficient (4 bytes per element rather than 8). So it is expected that most of the usage of long vectors will be integer vectors (including factors) and numeric vectors.
### Matrix algebra
It is now possible to use *m x n* matrices with more than 2 billion elements. Whether matrix algebra (including `[%\*%](matmult)`, `<crossprod>`, `<svd>`, `<qr>`, `<solve>` and `<eigen>`) will actually work is somewhat implementation dependent, including the Fortran compiler used and if an external BLAS or LAPACK is used.
An efficient parallel BLAS implementation will often be important to obtain usable performance. For example on one particular platform `chol` on a 47,000 square matrix took about 5 hours with the internal BLAS, 21 minutes using an optimized BLAS on one core, and 2 minutes using an optimized BLAS on 16 cores.
`message` Diagnostic Messages
------------------------------
### Description
Generate a diagnostic message from its arguments.
### Usage
```
message(..., domain = NULL, appendLF = TRUE)
suppressMessages(expr, classes = "message")
packageStartupMessage(..., domain = NULL, appendLF = TRUE)
suppressPackageStartupMessages(expr)
.makeMessage(..., domain = NULL, appendLF = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | zero or more objects which can be coerced to character (and which are pasted together with no separator) or (for `message` only) a single condition object. |
| `domain` | see `<gettext>`. If `NA`, messages will not be translated, see also the note in `<stop>`. |
| `appendLF` | logical: should messages given as a character string have a newline appended? |
| `expr` | expression to evaluate. |
| `classes` | character, indicating which classes of messages should be suppressed. |
### Details
`message` is used for generating ‘simple’ diagnostic messages which are neither warnings nor errors, but nevertheless represented as conditions. Unlike warnings and errors, a final newline is regarded as part of the message, and is optional. The default handler sends the message to the `[stderr](showconnections)()` [connection](connections).
If a condition object is supplied to `message` it should be the only argument, and further arguments will be ignored, with a warning.
While the message is being processed, a `muffleMessage` restart is available.
`suppressMessages` evaluates its expression in a context that ignores all ‘simple’ diagnostic messages.
`packageStartupMessage` is a variant whose messages can be suppressed separately by `suppressPackageStartupMessages`. (They are still messages, so can be suppressed by `suppressMessages`.)
`.makeMessage` is a utility used by `message`, `warning` and `stop` to generate a text message from the `...` arguments by possible translation (see `<gettext>`) and concatenation (with no separator).
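A brief sketch of the `muffleMessage` restart mentioned above, using `withCallingHandlers` to intercept a message and then silence it:

```
## Catch a message, inspect it, and muffle it so it is not printed
withCallingHandlers(
  message("hello"),
  message = function(m) {
    cat("caught:", conditionMessage(m))   # conditionMessage() ends in a newline
    invokeRestart("muffleMessage")
  }
)
```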
### See Also
`<warning>` and `<stop>` for generating warnings and errors; `<conditions>` for condition handling and recovery.
`<gettext>` for the mechanisms for the automated translation of text.
### Examples
```
message("ABC", "DEF")
suppressMessages(message("ABC"))
testit <- function() {
message("testing package startup messages")
packageStartupMessage("initializing ...", appendLF = FALSE)
Sys.sleep(1)
packageStartupMessage(" done")
}
testit()
suppressPackageStartupMessages(testit())
suppressMessages(testit())
```
`sort` Sorting or Ordering Vectors
-----------------------------------
### Description
Sort (or *order*) a vector or factor (partially) into ascending or descending order. For ordering along more than one variable, e.g., for sorting data frames, see `<order>`.
### Usage
```
sort(x, decreasing = FALSE, ...)
## Default S3 method:
sort(x, decreasing = FALSE, na.last = NA, ...)
sort.int(x, partial = NULL, na.last = NA, decreasing = FALSE,
method = c("auto", "shell", "quick", "radix"), index.return = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | for `sort` an **R** object with a class or a numeric, complex, character or logical vector. For `sort.int`, a numeric, complex, character or logical vector, or a factor. |
| `decreasing` | logical. Should the sort be increasing or decreasing? For the `"radix"` method, this can be a vector of length equal to the number of arguments in `...`. For the other methods, it must be length one. Not available for partial sorting. |
| `...` | arguments to be passed to or from methods or (for the default methods and objects without a class) to `sort.int`. |
| `na.last` | for controlling the treatment of `NA`s. If `TRUE`, missing values in the data are put last; if `FALSE`, they are put first; if `NA`, they are removed. |
| `partial` | `NULL` or a vector of indices for partial sorting. |
| `method` | character string specifying the algorithm used. Not available for partial sorting. Can be abbreviated. |
| `index.return` | logical indicating if the ordering index vector should be returned as well. Supported by `method == "radix"` for any `na.last` mode and data type, and the other methods when `na.last = NA` (the default) and fully sorting non-factors. |
### Details
`sort` is a generic function for which methods can be written, and `sort.int` is the internal method which is compatible with S if only the first three arguments are used.
The default `sort` method makes use of `<order>` for classed objects, which in turn makes use of the generic function `<xtfrm>` (and can be slow unless a `xtfrm` method has been defined or `[is.numeric](numeric)(x)` is true).
Complex values are sorted first by the real part, then the imaginary part.
The `"auto"` method selects `"radix"` for short (less than *2^31* elements) numeric vectors, integer vectors, logical vectors and factors; otherwise, `"shell"`.
Except for method `"radix"`, the sort order for character vectors will depend on the collating sequence of the locale in use: see `[Comparison](comparison)`. The sort order for factors is the order of their levels (which is particularly appropriate for ordered factors).
If `partial` is not `NULL`, it is taken to contain indices of elements of the result which are to be placed in their correct positions in the sorted array by partial sorting. For each of the result values in a specified position, any values smaller than that one are guaranteed to have a smaller index in the sorted array and any values which are greater are guaranteed to have a bigger index in the sorted array. (This is included for efficiency, and many of the options are not available for partial sorting. It is only substantially more efficient if `partial` has a handful of elements, and a full sort is done (a Quicksort if possible) if there are more than 10.) Names are discarded for partial sorting.
Method `"shell"` uses Shellsort (an *O(n^{4/3})* variant from Sedgewick (1986)). If `x` has names a stable modification is used, so ties are not reordered. (This only matters if names are present.)
Method `"quick"` uses Singleton (1969)'s implementation of Hoare's Quicksort method and is only available when `x` is numeric (double or integer) and `partial` is `NULL`. (For other types of `x` Shellsort is used, silently.) It is normally somewhat faster than Shellsort (perhaps 50% faster on vectors of length a million and twice as fast at a billion) but has poor performance in the rare worst case. (Peto's modification using a pseudo-random midpoint is used to make the worst case rarer.) This is not a stable sort, and ties may be reordered.
Method `"radix"` relies on simple hashing to scale time linearly with the input size, i.e., its asymptotic time complexity is *O(n)*. The specific variant and its implementation originated from the data.table package and are due to Matt Dowle and Arun Srinivasan. For small inputs (< 200), the implementation uses an insertion sort (*O(n^2)*) that operates in-place to avoid the allocation overhead of the radix sort. For integer vectors of range less than 100,000, it switches to a simpler and faster linear time counting sort. In all cases, the sort is stable; the order of ties is preserved. It is the default method for integer vectors and factors.
The `"radix"` method generally outperforms the other methods, especially for character vectors and small integers. Compared to quick sort, it is slightly faster for vectors with large integer or real values (but unlike quick sort, radix is stable and supports all `na.last` options). The implementation is orders of magnitude faster than shell sort for character vectors, in part thanks to clever use of the internal `CHARSXP` table.
However, there are some caveats with the radix sort:
* If `x` is a `character` vector, all elements must share the same encoding. Only UTF-8 (including ASCII) and Latin-1 encodings are supported. Collation always follows the "C" locale.
* [Long vectors](longvectors) (with more than 2^32 elements) and `complex` vectors are not supported yet.
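A small sketch of the radix method's distinguishing features, stability aside: full `na.last` support and "C"-locale collation.

```
x <- c(2L, NA, 1L, 2L)
sort(x, method = "radix", na.last = TRUE)    # 1 2 2 NA -- NAs can be kept
sort(x, method = "radix", na.last = FALSE)   # NA 1 2 2
## "C"-locale collation: all uppercase letters sort before lowercase ones
sort(c("b", "A", "a"), method = "radix")     # "A" "a" "b"
```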
### Value
For `sort`, the result depends on the S3 method which is dispatched. If `x` does not have a class, `sort.int` is used and its description applies. For classed objects which do not have a specific method the default method will be used and is equivalent to `x[order(x, ...)]`: this depends on the class having a suitable method for `[` (and also that `<order>` will work, which requires a `<xtfrm>` method).
For `sort.int` the value is the sorted vector unless `index.return` is true, when the result is a list with components named `x` and `ix` containing the sorted numbers and the ordering index vector. In the latter case, if `method == "quick"` ties may be reversed in the ordering (unlike `sort.list`) as quicksort is not stable. For `method == "radix"`, `index.return` is supported for all `na.last` modes. The other methods only support `index.return` when `na.last` is `NA`. The index vector refers to element numbers *after removal of `NA`s*: see `<order>` if you want the original element numbers.
All attributes are removed from the return value (see Becker *et al*, 1988, p.146) except names, which are sorted. (If `partial` is specified even the names are removed.) Note that this means that the returned value has no class, except for factors and ordered factors (which are treated specially and whose result is transformed back to the original class).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole.
Knuth, D. E. (1998). *The Art of Computer Programming, Volume 3: Sorting and Searching*, 2nd ed. Addison-Wesley.
Sedgewick, R. (1986). A new upper bound for Shellsort. *Journal of Algorithms*, **7**, 159–173. doi: [10.1016/0196-6774(86)90001-5](https://doi.org/10.1016/0196-6774(86)90001-5).
Singleton, R. C. (1969). Algorithm 347: an efficient algorithm for sorting with minimal storage. *Communications of the ACM*, **12**, 185–186. doi: [10.1145/362875.362901](https://doi.org/10.1145/362875.362901).
### See Also
‘[Comparison](comparison)’ for how character strings are collated.
`<order>` for sorting on or reordering multiple variables.
`<is.unsorted>`. `<rank>`.
### Examples
```
require(stats)
x <- swiss$Education[1:25]
x; sort(x); sort(x, partial = c(10, 15))
## illustrate 'stable' sorting (of ties):
sort(c(10:3, 2:12), method = "shell", index.return = TRUE) # is stable
## $x : 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 12
## $ix: 9 8 10 7 11 6 12 5 13 4 14 3 15 2 16 1 17 18 19
sort(c(10:3, 2:12), method = "quick", index.return = TRUE) # is not
## $x : 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 12
## $ix: 9 10 8 7 11 6 12 5 13 4 14 3 15 16 2 17 1 18 19
x <- c(1:3, 3:5, 10)
is.unsorted(x) # FALSE: is sorted
is.unsorted(x, strictly = TRUE) # TRUE : is not (and cannot be)
# sorted strictly
## Not run:
## Small speed comparison simulation:
N <- 2000
Sim <- 20
rep <- 1000 # << adjust to your CPU
c1 <- c2 <- numeric(Sim)
for(is in seq_len(Sim)){
x <- rnorm(N)
c1[is] <- system.time(for(i in 1:rep) sort(x, method = "shell"))[1]
c2[is] <- system.time(for(i in 1:rep) sort(x, method = "quick"))[1]
stopifnot(sort(x, method = "shell") == sort(x, method = "quick"))
}
rbind(ShellSort = c1, QuickSort = c2)
cat("Speedup factor of quick sort():\n")
summary({qq <- c1 / c2; qq[is.finite(qq)]})
## A larger test
x <- rnorm(1e7)
system.time(x1 <- sort(x, method = "shell"))
system.time(x2 <- sort(x, method = "quick"))
system.time(x3 <- sort(x, method = "radix"))
stopifnot(identical(x1, x2))
stopifnot(identical(x1, x3))
## End(Not run)
```
`mean` Arithmetic Mean
-----------------------
### Description
Generic function for the (trimmed) arithmetic mean.
### Usage
```
mean(x, ...)
## Default S3 method:
mean(x, trim = 0, na.rm = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | An **R** object. Currently there are methods for numeric/logical vectors and [date](dates), [date-time](datetimeclasses) and [time interval](difftime) objects. Complex vectors are allowed for `trim = 0`, only. |
| `trim` | the fraction (0 to 0.5) of observations to be trimmed from each end of `x` before the mean is computed. Values of trim outside that range are taken as the nearest endpoint. |
| `na.rm` | a logical value indicating whether `NA` values should be stripped before the computation proceeds. |
| `...` | further arguments passed to or from other methods. |
### Value
If `trim` is zero (the default), the arithmetic mean of the values in `x` is computed, as a numeric or complex vector of length one. If `x` is not logical (coerced to numeric), numeric (including integer) or complex, `NA_real_` is returned, with a warning.
If `trim` is non-zero, a symmetrically trimmed mean is computed with a fraction of `trim` observations deleted from each end before the mean is computed.
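The trimming rule can be checked by hand: with `trim = 0.10` and 12 observations, `floor(12 * 0.10) = 1` value is dropped from each end of the sorted data before averaging.

```
x <- c(0:10, 50)
mean(x, trim = 0.10)      # 5.5
mean(sort(x)[2:11])       # 5.5, the same computed by hand
```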
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[weighted.mean](../../stats/html/weighted.mean)`, `[mean.POSIXct](datetimeclasses)`, `[colMeans](colsums)` for row and column means.
### Examples
```
x <- c(0:10, 50)
xm <- mean(x)
c(xm, mean(x, trim = 0.10))
```
`all` Are All Values True?
---------------------------
### Description
Given a set of logical vectors, are all of the values true?
### Usage
```
all(..., na.rm = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | zero or more logical vectors. Other objects of zero length are ignored, and the rest are coerced to logical ignoring any class. |
| `na.rm` | logical. If true `NA` values are removed before the result is computed. |
### Details
This is a generic function: methods can be defined for it directly or via the `[Summary](groupgeneric)` group generic. For this to work properly, the arguments `...` should be unnamed, and dispatch is on the first argument.
Coercion of types other than integer (raw, double, complex, character, list) gives a warning as this is often unintentional.
This is a <primitive> function.
### Value
The value is a logical vector of length one.
Let `x` denote the concatenation of all the logical vectors in `...` (after coercion), after removing `NA`s if requested by `na.rm = TRUE`.
The value returned is `TRUE` if all of the values in `x` are `TRUE` (including if there are no values), and `FALSE` if at least one of the values in `x` is `FALSE`. Otherwise the value is `NA` (which can only occur if `na.rm = FALSE` and `...` contains no `FALSE` values and at least one `NA` value).
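The three-valued logic described above in a nutshell:

```
all(c(TRUE, NA))                 # NA: the missing value could still be TRUE
all(c(TRUE, NA), na.rm = TRUE)   # TRUE
all(c(FALSE, NA))                # FALSE: one value is definitely FALSE
```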
### S4 methods
This is part of the S4 `[Summary](../../methods/html/s4groupgeneric)` group generic. Methods for it must use the signature `x, ..., na.rm`.
### Note
That `all(logical(0))` is true is a useful convention: it ensures that
```
all(all(x), all(y)) == all(x, y)
```
even if `x` has length zero.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<any>`, the ‘complement’ of `all`, and `<stopifnot>(*)` which is an `all(*)` ‘insurance’.
### Examples
```
range(x <- sort(round(stats::rnorm(10) - 1.2, 1)))
if(all(x < 0)) cat("all x values are negative\n")
all(logical(0)) # true, as all zero of the elements are true.
```
`dcf` Read and Write Data in DCF Format
----------------------------------------
### Description
Reads or writes an **R** object from/to a file in Debian Control File format.
### Usage
```
read.dcf(file, fields = NULL, all = FALSE, keep.white = NULL)
write.dcf(x, file = "", append = FALSE, useBytes = FALSE,
indent = 0.1 * getOption("width"),
width = 0.9 * getOption("width"),
keep.white = NULL)
```
### Arguments
| | |
| --- | --- |
| `file` | either a character string naming a file or a [connection](connections). `""` indicates output to the console. For `read.dcf` this can name a compressed file (see `[gzfile](connections)`). |
| `fields` | Fields to read from the DCF file. Default is to read all fields. |
| `all` | a logical indicating whether in case of multiple occurrences of a field in a record, all these should be gathered. If `all` is false (default), only the last such occurrence is used. |
| `keep.white` | a character string with the names of the fields for which whitespace should be kept as is, or `NULL` (default) indicating that there are no such fields. Coerced to character if possible. For fields where whitespace is not to be kept as is, `read.dcf` removes leading and trailing whitespace, and `write.dcf` folds using `<strwrap>`. |
| `x` | the object to be written, typically a data frame. If not, it is attempted to coerce `x` to a data frame. |
| `append` | logical. If `TRUE`, the output is appended to the file. If `FALSE`, any existing file of the name is destroyed. |
| `useBytes` | logical to be passed to `[writeLines](writelines)()`, see there: “for expert use”. |
| `indent` | a positive integer specifying the indentation for continuation lines in output entries. |
| `width` | a positive integer giving the target column for wrapping lines in the output. |
### Details
DCF is a simple format for storing databases in plain text files that can easily be directly read and written by humans. DCF is used in various places to store **R** system information, like descriptions and contents of packages.
The DCF rules as implemented in **R** are:
1. A database consists of one or more records, each with one or more named fields. Not every record must contain each field. Fields may appear more than once in a record.
2. Regular lines start with a non-whitespace character.
3. Regular lines are of form `tag:value`, i.e., have a name tag and a value for the field, separated by `:` (only the first `:` counts). The value can be empty (i.e., whitespace only).
4. Lines starting with whitespace are continuation lines (to the preceding field) if at least one character in the line is non-whitespace. Continuation lines where the only non-whitespace character is a . are taken as blank lines (allowing for multi-paragraph field values).
5. Records are separated by one or more empty (i.e., whitespace only) lines.
6. Individual lines may not be arbitrarily long; prior to **R** 3.0.2 the length limit was approximately 8191 bytes per line.
Note that `read.dcf(all = FALSE)` reads the file byte-by-byte. This allows a ‘DESCRIPTION’ file to be read and only its ASCII fields used, or its Encoding field used to re-encode the remaining fields.
`write.dcf` does not write `NA` fields.
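A round trip through a temporary file, sketching the rules above (the field names here are arbitrary):

```
## Write a two-record database and read it back as a character matrix
tf <- tempfile(fileext = ".dcf")
db <- data.frame(Package = c("a", "b"), Version = c("1.0", "2.0"))
write.dcf(db, file = tf)
m <- read.dcf(tf)           # one row per record, one column per field
stopifnot(identical(m[, "Package"], c("a", "b")))
unlink(tf)
```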
### Value
The default `read.dcf(all = FALSE)` returns a character matrix with one row per record and one column per field. Leading and trailing whitespace of field values is ignored unless a field is listed in `keep.white`. If a tag name is specified in the file, but the corresponding value is empty, then an empty string is returned. If the tag name of a field is specified in `fields` but never used in a record, then the corresponding value is `NA`. If fields are repeated within a record, the last one encountered is returned. Malformed lines lead to an error.
For `read.dcf(all = TRUE)` a data frame is returned, again with one row per record and one column per field. The columns are lists of character vectors for fields with multiple occurrences, and character vectors otherwise.
Note that an empty `file` is a valid DCF file, and `read.dcf` will return a zero-row matrix or data frame.
For `write.dcf`, invisible `NULL`.
### Note
As from **R** 3.4.0, ‘whitespace’ in all cases includes newlines.
### References
<https://www.debian.org/doc/debian-policy/ch-controlfields.html>.
Note that **R** does not require encoding in UTF-8, which is a recent Debian requirement. Nor does it use the Debian-specific sub-format which allows comment lines starting with #.
### See Also
`[write.table](../../utils/html/write.table)`.
`[available.packages](../../utils/html/available.packages)`, which uses `read.dcf` to read the indices of package repositories.
### Examples
```
## Create a reduced version of the DESCRIPTION file in package 'splines'
x <- read.dcf(file = system.file("DESCRIPTION", package = "splines"),
fields = c("Package", "Version", "Title"))
write.dcf(x)
## An online DCF file with multiple records
con <- url("https://cran.r-project.org/src/contrib/PACKAGES")
y <- read.dcf(con, all = TRUE)
close(con)
utils::str(y)
```
`userhooks` Functions to Get and Set Hooks for Load, Attach, Detach and Unload
-------------------------------------------------------------------------------
### Description
These functions allow users to set actions to be taken before packages are attached/detached and namespaces are (un)loaded.
### Usage
```
getHook(hookName)
setHook(hookName, value,
action = c("append", "prepend", "replace"))
packageEvent(pkgname,
event = c("onLoad", "attach", "detach", "onUnload"))
```
### Arguments
| | |
| --- | --- |
| `hookName` | character string: the hook name |
| `pkgname` | character string: the package/namespace name |
| `event` | character string: an event for the package. Can be abbreviated. |
| `value` | A function or a list of functions, or for `action = "replace"`, `NULL` |
| `action` | The action to be taken. Can be abbreviated. |
### Details
`setHook` provides a general mechanism for users to register hooks, a list of functions to be called from system (or user) functions. The initial set of hooks was associated with events on packages/namespaces: these hooks are named via calls to `packageEvent`.
To remove a hook completely, call `setHook(hookName, NULL, "replace")`.
When an **R** package is attached by `<library>` or loaded by other means, it can call initialization code. See `[.onLoad](ns-hooks)` for a description of the package hook functions called during initialization. Users can add their own initialization code via the hooks provided by `setHook()`, functions which will be called as `funname(pkgname, pkgpath)` inside a `<try>` call.
The sequence of events depends on which hooks are defined, and whether a package is attached or just loaded. In the case where all hooks are defined and a package is attached, the order of initialization events is as follows:
1. The package namespace is loaded.
2. The package's `[.onLoad](ns-hooks)` function is run.
3. If S4 methods dispatch is on, any actions set by `[setLoadAction](../../methods/html/setloadactions)` are run.
4. The namespace is sealed.
5. The user's `"onLoad"` hook is run.
6. The package is added to the search path.
7. The package's `[.onAttach](ns-hooks)` function is run.
8. The package environment is sealed.
9. The user's `"attach"` hook is run.
A similar sequence (but in reverse) is run when a package is detached and its namespace unloaded:
1. The user's `"detach"` hook is run.
2. The package's `[.Last.lib](ns-hooks)` function is run.
3. The package is removed from the search path.
4. The user's `"onUnload"` hook is run.
5. The package's `[.onUnload](ns-hooks)` function is run.
6. The package namespace is unloaded.
Note that when an **R** session is finished, packages are not detached and namespaces are not unloaded, so the corresponding hooks will not be run.
Also note that some of the user hooks are run without the package being on the search path, so in those hooks objects in the package need to be referred to using the double (or triple) colon operator, as in the example.
If multiple hooks are added, they are normally run in the order shown by `getHook`, but the `"detach"` and `"onUnload"` hooks are run in reverse order so the default for package events is to add hooks ‘inside’ existing ones.
The hooks are stored in the environment `.userHooksEnv` in the base package, with ‘mangled’ names.
### Value
For `getHook`, a list of functions (possibly empty). For `setHook`, no return value. For `packageEvent`, the derived hook name (a character string).
### Note
Hooks need to be set before the event they modify: for standard packages this can be problematic as methods is loaded and attached early in the startup sequence. The usual place to set hooks such as the example below is in the ‘.Rprofile’ file, but that will not work for methods.
### See Also
`<library>`, `<detach>`, `[loadNamespace](ns-load)`.
See `[::](ns-dblcolon)` for a discussion of the double and triple colon operators.
Other hooks may be added later: functions `[plot.new](../../graphics/html/frame)` and `[persp](../../graphics/html/persp)` already have them.
### Examples
```
setHook(packageEvent("grDevices", "onLoad"),
function(...) grDevices::ps.options(horizontal = FALSE))
```
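As noted in the Details section, the hook registered above can be removed again by replacing it with `NULL`; a minimal sketch:

```
setHook(packageEvent("grDevices", "onLoad"), NULL, "replace")
getHook(packageEvent("grDevices", "onLoad"))  # now an empty list
```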
r None
`is.unsorted` Test if an Object is Not Sorted
----------------------------------------------
### Description
Test if an object is not sorted (in increasing order), without the cost of sorting it.
### Usage
```
is.unsorted(x, na.rm = FALSE, strictly = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object with a class or a numeric, complex, character, logical or raw vector. |
| `na.rm` | logical. Should missing values be removed before checking? |
| `strictly` | logical indicating if the check should be for *strictly* increasing values. |
### Details
`is.unsorted` is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods).
### Value
A length-one logical value. All objects of length 0 or 1 are sorted. Otherwise, the result will be `NA` except for atomic vectors and objects with an S3 class (where the `>=` or `>` method is used to compare `x[i]` with `x[i-1]` for `i` in `2:length(x)`) or with an S4 class where you have to provide a method for `<is.unsorted>()`.
### Note
This function is designed for objects with one-dimensional indices, as described above. Data frames, matrices and other arrays may give surprising results.
### See Also
`<sort>`, `<order>`.
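### Examples

A small sketch illustrating the behaviour described above, with the expected results noted in comments:

```
is.unsorted(c(1, 2, 2, 3))                   # FALSE: non-decreasing counts as sorted
is.unsorted(c(1, 2, 2, 3), strictly = TRUE)  # TRUE: ties violate strictness
is.unsorted(c(3, 1, 2))                      # TRUE
is.unsorted(c(1, NA, 2))                     # NA: missing value present
is.unsorted(c(1, NA, 2), na.rm = TRUE)       # FALSE: NAs removed before checking
```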
r None
`dontCheck` Identity Function to Suppress Checking
---------------------------------------------------
### Description
The `dontCheck` function is the same as `<identity>`, but is interpreted by `R CMD check` code analysis as a directive to suppress checking of `x`. Currently this is only used by `[checkFF](../../tools/html/checkff)(registration = TRUE)` when checking the `.NAME` argument of foreign function calls.
### Usage
```
dontCheck(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object. |
### See Also
`suppressForeignCheck` which explains why that and `dontCheck` are undesirable and should be avoided if at all possible.
r None
`Paren` Parentheses and Braces
-------------------------------
### Description
Open parenthesis, `(`, and open brace, `{`, are `[.Primitive](primitive)` functions in **R**.
Effectively, `(` is semantically equivalent to the identity `function(x) x`, whereas `{` is slightly more interesting, see examples.
### Usage
```
( ... )
{ ... }
```
### Value
For `(`, the result of evaluating the argument. This has visibility set, so will auto-print if used at top-level.
For `{`, the result of the last expression evaluated. This has the visibility of the last evaluation.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[if](control)`, `[return](function)`, etc for other objects used in the **R** language itself.
`[Syntax](syntax)` for operator precedence.
### Examples
```
f <- get("(")
e <- expression(3 + 2 * 4)
identical(f(e), e)
do <- get("{")
do(x <- 3, y <- 2*x-3, 6-x-y); x; y
## note the differences
(2+3)
{2+3; 4+5}
(invisible(2+3))
{invisible(2+3)}
```
r None
`bincode` Bin a Numeric Vector
-------------------------------
### Description
Bin a numeric vector and return integer codes for the binning.
### Usage
```
.bincode(x, breaks, right = TRUE, include.lowest = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector which is to be converted to integer codes by binning. |
| `breaks` | a numeric vector of two or more cut points, sorted in increasing order. |
| `right` | logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa. |
| `include.lowest` | logical, indicating if an ‘x[i]’ equal to the lowest (or highest, for `right = FALSE`) ‘breaks’ value should be included in the first (or last) bin. |
### Details
This is a ‘barebones’ version of `cut.default(labels = FALSE)` intended for use in other functions which have checked the arguments passed. (Note the different order of the arguments they have in common.)
Unlike `<cut>`, the `breaks` do not need to be unique. An input can only fall into a zero-length interval if it is closed at both ends, so only if `include.lowest = TRUE` and it is the first (or last for `right = FALSE`) interval.
### Value
An integer vector of the same length as `x` indicating which bin each element falls into (the leftmost bin being bin `1`). `NaN` and `NA` elements of `x` are mapped to `NA` codes, as are values outside range of `breaks`.
### See Also
`<cut>`, `<tabulate>`
### Examples
```
## An example with non-unique breaks:
x <- c(0, 0.01, 0.5, 0.99, 1)
b <- c(0, 0, 1, 1)
.bincode(x, b, TRUE)
.bincode(x, b, FALSE)
.bincode(x, b, TRUE, TRUE)
.bincode(x, b, FALSE, TRUE)
```
r None
`Defunct` Marking Objects as Defunct
-------------------------------------
### Description
When a function is removed from **R** it should be replaced by a function which calls `.Defunct`.
### Usage
```
.Defunct(new, package = NULL, msg)
```
### Arguments
| | |
| --- | --- |
| `new` | character string: A suggestion for a replacement function. |
| `package` | character string: The package to be used when suggesting where the defunct function might be listed. |
| `msg` | character string: A message to be printed, if missing a default message is used. |
### Details
`.Defunct` is called from defunct functions. Functions should be listed in `help("pkg-defunct")` for an appropriate `pkg`, including `base` (with the alias added to the respective Rd file).
`.Defunct` signals an error of class `defunctError` with fields `old`, `new`, and `package`.
### See Also
`[Deprecated](deprecated)`.
`base-defunct` and so on which list the defunct functions in the packages.
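### Examples

As a sketch of the mechanism described above, a removed function is replaced by a stub that calls `.Defunct` (the names `oldFun`, `newFun` and `mypkg` here are purely illustrative, not real functions or packages):

```
oldFun <- function(...) .Defunct("newFun", package = "mypkg")
## Calling the stub signals an error of class "defunctError":
e <- tryCatch(oldFun(), defunctError = function(e) e)
inherits(e, "defunctError")  # TRUE
```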
r None
`warnings` Print Warning Messages
----------------------------------
### Description
`warnings` and its `print` method print the variable `last.warning` in a pleasing form.
### Usage
```
warnings(...)
## S3 method for class 'warnings'
summary(object, ...)
## S3 method for class 'warnings'
print(x, tags,
header = ngettext(n, "Warning message:\n", "Warning messages:\n"),
...)
## S3 method for class 'summary.warnings'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `...` | arguments to be passed to `<cat>` (for `warnings()`). |
| `object` | a `"warnings"` object as returned by `warnings()`. |
| `x` | a `"warnings"` or `"summary.warnings"` object. |
| `tags` | if not `<missing>`, a `<character>` vector of the same `<length>` as `x`, to “label” the messages. Defaults to `paste0(seq_len(n), ": ")` for *n >= 2* where `n <- length(x)`. |
| `header` | a character string `<cat>()`ed before the messages are printed. |
### Details
See the description of `<options>("warn")` for the circumstances under which there is a `last.warning` object and `warnings()` is used. In essence this is if `options(warn = 0)` and `warning` has been called at least once.
Note that the `<length>(last.warning)` is maximally `[getOption](options)("nwarnings")` (at the time the warnings are generated) which is `50` by default. To increase, use something like
```
options(nwarnings = 10000)
```
It is possible that `last.warning` refers to the last recorded warning and not to the last warning, for example if `options(warn)` has been changed or if a catastrophic error occurred.
### Value
`warnings()` returns an object of S3 class `"warnings"`, basically a named `<list>`.
`summary(<warnings>)` returns a `"summary.warnings"` object which is basically the `<list>` of unique warnings (`unique(object)`) with a `"counts"` attribute, somewhat experimentally.
### Warning
It is not documented where `last.warning` is stored, nor whether it is visible, and this is subject to change.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<warning>`.
### Examples
```
## NB this example is intended to be pasted in,
## rather than run by example()
ow <- options("warn")
for(w in -1:1) {
options(warn = w); cat("\n warn =", w, "\n")
for(i in 1:3) { cat(i,"..\n"); m <- matrix(1:7, 3,4) }
cat("--=--=--\n")
}
## at the end prints all three warnings, from the 'option(warn = 0)' above
options(ow) # reset to previous, typically 'warn = 0'
tail(warnings(), 2) # see the last two warnings only (via '[' method)
## Often the most useful way to look at many warnings:
summary(warnings())
op <- options(nwarnings = 10000) ## <- get "full statistics"
x <- 1:36; for(n in 1:13) for(m in 1:12) A <- matrix(x, n,m) # There were 105 warnings ...
summary(warnings())
options(op) # revert to previous (keeping 50 messages by default)
```
r None
`DateTimeClasses` Date-Time Classes
------------------------------------
### Description
Description of the classes `"POSIXlt"` and `"POSIXct"` representing calendar dates and times.
### Usage
```
## S3 method for class 'POSIXct'
print(x, tz = "", usetz = TRUE, max = NULL, ...)
## S3 method for class 'POSIXct'
summary(object, digits = 15, ...)
time + z
z + time
time - z
time1 lop time2
```
### Arguments
| | |
| --- | --- |
| `x, object` | an object to be printed or summarized from one of the date-time classes. |
| `tz, usetz` | for timezone formatting, passed to `[format.POSIXct](strptime)`. |
| `max` | numeric or `NULL`, specifying the maximal number of entries to be printed. By default, when `NULL`, `[getOption](options)("max.print")` is used. |
| `digits` | number of significant digits for the computations: should be high enough to represent the least important time unit exactly. |
| `...` | further arguments to be passed from or to other methods. |
| `time` | date-time objects |
| `time1, time2` | date-time objects or character vectors. (Character vectors are converted by `[as.POSIXct](as.posixlt)`.) |
| `z` | a numeric vector (in seconds) |
| `lop` | one of `==`, `!=`, `<`, `<=`, `>` or `>=`. |
### Details
There are two basic classes of date/times. Class `"POSIXct"` represents the (signed) number of seconds since the beginning of 1970 (in the UTC time zone) as a numeric vector. Class `"POSIXlt"` is a named list of vectors representing
`sec`
0–61: seconds.
`min`
0–59: minutes.
`hour`
0–23: hours.
`mday`
1–31: day of the month
`mon`
0–11: months after the first of the year.
`year`
years since 1900.
`wday`
0–6 day of the week, starting on Sunday.
`yday`
0–365: day of the year (365 only in leap years).
`isdst`
Daylight Saving Time flag. Positive if in force, zero if not, negative if unknown.
`zone`
(Optional.) The abbreviation for the time zone in force at that time: `""` if unknown (but `""` might also be used for UTC).
`gmtoff`
(Optional.) The offset in seconds from GMT: positive values are East of the meridian. Usually `NA` if unknown, but `0` could mean unknown.
(The last two components are not present for times in UTC and are platform-dependent: they are supported on platforms based on BSD or `glibc` (including Linux and macOS) and those using the `tzcode` implementation shipped with **R** (including Windows). But they are not necessarily set.). Note that the internal list structure is somewhat hidden, as many methods (including `<length>(x)`, `<print>()` and `[str](../../utils/html/str)`) apply to the abstract date-time vector, as for `"POSIXct"`. As from **R** 3.5.0, one can extract and replace *single* components via `[` indexing with two indices (see the examples). The classes correspond to the POSIX/C99 constructs of ‘calendar time’ (the `time_t` data type) and ‘local time’ (or broken-down time, the `struct tm` data type), from which they also inherit their names. The components of `"POSIXlt"` are integer vectors, except `sec` and `zone`.
`"POSIXct"` is more convenient for including in data frames, and `"POSIXlt"` is closer to human-readable forms. A virtual class `"POSIXt"` exists from which both of the classes inherit: it is used to allow operations such as subtraction to mix the two classes.
Components `wday` and `yday` of `"POSIXlt"` are for information, and are not used in the conversion to calendar time. However, `isdst` is needed to distinguish times at the end of DST: typically 1am to 2am occurs twice, first in DST and then in standard time. At all other times `isdst` can be deduced from the first six values, but the behaviour if it is set incorrectly is platform-dependent.
Logical comparisons and some arithmetic operations are available for both classes. One can add or subtract a number of seconds from a date-time object, but not add two date-time objects. Subtraction of two date-time objects is equivalent to using `<difftime>`. Be aware that `"POSIXlt"` objects will be interpreted as being in the current time zone for these operations unless a time zone has been specified.
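A minimal sketch of the arithmetic just described, using a fixed UTC time so the results do not depend on the current time zone:

```
x <- as.POSIXct("2020-06-01 12:00:00", tz = "UTC")
x + 3600                                           # adding seconds: one hour later
x - as.POSIXct("2020-06-01 11:00:00", tz = "UTC")  # a difftime of 1 hour
## x + x would be an error: two date-time objects cannot be added
```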
`"POSIXlt"` objects will often have an attribute `"tzone"`, a character vector of length 3 giving the time zone name (from the TZ environment variable or argument `tz` of functions creating `"POSIXlt"` objects; `""` marks the current time zone) and the names of the base time zone and the alternate (daylight-saving) time zone. Sometimes this may just be of length one, giving the [time zone](timezones) name.
`"POSIXct"` objects may also have an attribute `"tzone"`, a character vector of length one. If set to a non-empty value, it will determine how the object is converted to class `"POSIXlt"` and in particular how it is printed. This is usually desirable, but if you want to specify an object in a particular time zone but to be printed in the current time zone you may want to remove the `"tzone"` attribute (e.g., by `c(x)`).
Unfortunately, the conversion is complicated by the operation of time zones and leap seconds (according to this version of **R**'s data, 27 days have been 86401 seconds long so far, the last being on (actually, immediately before) 2017-01-01: the times of the extra seconds are in the object `.leap.seconds`). The details of this are entrusted to the OS services where possible. It seems that some rare systems used to use leap seconds, but all known current platforms ignore them (as required by POSIX). This is detected and corrected for at build time, so `"POSIXct"` times used by **R** do not include leap seconds on any platform.
Using `<c>` on `"POSIXlt"` objects converts them to the current time zone, and on `"POSIXct"` objects drops any `"tzone"` attributes, unless they are all marked with the same time zone.
A few times have specific issues. First, the leap seconds are ignored, and real times such as `"2005-12-31 23:59:60"` are (probably) treated as the next second. However, they will never be generated by **R**, and are unlikely to arise as input. Second, on some OSes there is a problem in the POSIX/C99 standard with `"1969-12-31 23:59:59 UTC"`, which is `-1` in calendar time and that value is on those OSes also used as an error code. Thus `as.POSIXct("1969-12-31 23:59:59", format = "%Y-%m-%d %H:%M:%S", tz = "UTC")` may give `NA`, and hence `as.POSIXct("1969-12-31 23:59:59", tz = "UTC")` will give `"1969-12-31 23:59:00"`. Other OSes (including the code used by **R** on Windows) report errors separately and so are able to handle that time as valid.
The print methods respect `<options>("max.print")`.
### Sub-second Accuracy
Classes `"POSIXct"` and `"POSIXlt"` are able to express fractions of a second. (Conversion of fractions between the two forms may not be exact, but will have better than microsecond accuracy.)
Fractional seconds are printed only if `<options>("digits.secs")` is set: see `[strftime](strptime)`.
### Valid ranges for times
The `"POSIXlt"` class can represent a very wide range of times (up to billions of years), but such times can only be interpreted with reference to a time zone.
The concept of time zones was first adopted in the nineteenth century, and the Gregorian calendar was introduced in 1582 but not universally adopted until 1927. OS services almost invariably assume the Gregorian calendar and may assume that the time zone that was first enacted for the location was in force before that date. (The earliest legislated time zone seems to have been London on 1847-12-01.) Some OSes assume the previous use of ‘local time’ based on the longitude of a location within the time zone.
Most operating systems represent `POSIXct` times as C type `long`. This means that on 32-bit OSes this covers the period 1902 to 2037. On all known 64-bit platforms and for the code we use on 32-bit Windows, the range of representable times is billions of years: however, not all can convert correctly times before 1902 or after 2037. A few benighted OSes used an unsigned type and so cannot represent times before 1970.
Where possible the platform limits are detected, and outside the limits we use our own C code. This uses the offset from GMT in use either for 1902 (when there was no DST) or that predicted for one of 2030 to 2037 (chosen so that the likely DST transition days are Sundays), and uses the alternate (daylight-saving) time zone only if `isdst` is positive or (if `-1`) if DST was predicted to be in operation in the 2030s on that day.
Note that there are places (e.g., Rome) whose offset from UTC varied in the years prior to 1902, and these will be handled correctly only where there is OS support.
There is no reason to suppose that the DST rules will remain the same in the future, and indeed the US legislated in 2005 to change its rules as from 2007, with a possible future reversion. So conversions for times more than a year or two ahead are speculative.
### Warnings
Some Unix-like systems (especially Linux ones) do not have environment variable TZ set, yet have internal code that expects it (as does POSIX). We have tried to work around this, but if you get unexpected results try setting TZ. See `[Sys.timezone](timezones)` for valid settings.
Great care is needed when comparing objects of class `"POSIXlt"`. Not only are components and attributes optional; several components may have values meaning ‘not yet determined’ and the same time represented in different time zones will look quite different.
Currently the *order* of the list components of `"POSIXlt"` objects must not be changed, as several C-based conversion methods rely on the order for efficiency.
### References
Ripley, B. D. and Hornik, K. (2001). “Date-time classes.” *R News*, **1**(2), 8–11. <https://www.r-project.org/doc/Rnews/Rnews_2001-2.pdf>.
### See Also
[Dates](dates) for dates without times.
`[as.POSIXct](as.posixlt)` and `[as.POSIXlt](as.posixlt)` for conversion between the classes.
`<strptime>` for conversion to and from character representations.
`[Sys.time](sys.time)` for clock time as a `"POSIXct"` object.
`<difftime>` for time intervals.
`[cut.POSIXt](cut.posixt)`, `[seq.POSIXt](seq.posixt)`, `[round.POSIXt](round.posixt)` and `[trunc.POSIXt](round.posixt)` for methods for these classes.
`[weekdays](weekday.posixt)` for convenience extraction functions.
### Examples
```
(z <- Sys.time()) # the current date, as class "POSIXct"
Sys.time() - 3600 # an hour ago
as.POSIXlt(Sys.time(), "GMT") # the current time in GMT
format(.leap.seconds) # the leap seconds in your time zone
print(.leap.seconds, tz = "PST8PDT") # and in Seattle's
## look at *internal* representation of "POSIXlt" :
leapS <- as.POSIXlt(.leap.seconds)
names(leapS) ; is.list(leapS)
## str() "too smart" --> need unclass(.):
utils::str(unclass(leapS), vec.len = 7)
## Extracting *single* components of POSIXlt objects:
leapS[1 : 5, "year"]
## length(.) <- n now works for "POSIXct" and "POSIXlt" :
for(lpS in list(.leap.seconds, leapS)) {
ls <- lpS; length(ls) <- 12
l2 <- lpS; length(l2) <- 5 + length(lpS)
stopifnot(exprs = {
## length(.) <- * is compatible to subsetting/indexing:
identical(ls, lpS[seq_along(ls)])
identical(l2, lpS[seq_along(l2)])
## has filled with NA's
is.na(l2[(length(lpS)+1):length(l2)])
})
}
```
r None
`with` Evaluate an Expression in a Data Environment
----------------------------------------------------
### Description
Evaluate an **R** expression in an environment constructed from data, possibly modifying (a copy of) the original data.
### Usage
```
with(data, expr, ...)
within(data, expr, ...)
## S3 method for class 'list'
within(data, expr, keepAttrs = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `data` | data to use for constructing an environment. For the default `with` method this may be an environment, a list, a data frame, or an integer as in `sys.call`. For `within`, it can be a list or a data frame. |
| `expr` | expression to evaluate; particularly for `within()` often a “compound” expression, i.e., of the form
```
{
a <- somefun()
b <- otherfun()
.....
rm(unused1, temp)
}
```
|
| `keepAttrs` | for the `<list>` method of `within()`, a `<logical>` specifying if the resulting list should keep the `<attributes>` from `data` and have its `<names>` in the same order. Often this is unneeded as the result is a *named* list anyway, and then `keepAttrs = FALSE` is more efficient. |
| `...` | arguments to be passed to (future) methods. |
### Details
`with` is a generic function that evaluates `expr` in a local environment constructed from `data`. The environment has the caller's environment as its parent. This is useful for simplifying calls to modeling functions. (Note: if `data` is already an environment then this is used with its existing parent.)
Note that assignments within `expr` take place in the constructed environment and not in the user's workspace.
`within` is similar, except that it examines the environment after the evaluation of `expr` and makes the corresponding modifications to a copy of `data` (this may fail in the data frame case if objects are created which cannot be stored in a data frame), and returns it. `within` can be used as an alternative to `transform`.
### Value
For `with`, the value of the evaluated `expr`. For `within`, the modified object.
### Note
For *interactive* use this is very effective and nice to read. For *programming* however, i.e., in one's functions, more care is needed, and typically one should refrain from using `with()`, as, e.g., variables in `data` may accidentally override local variables, see the reference.
Further, when using modeling or graphics functions with an explicit `data` argument (and typically using `[formula](../../stats/html/formula)`s), it is typically preferred to use the `data` argument of that function rather than to use `with(data, ...)`.
### References
Thomas Lumley (2003) *Standard nonstandard evaluation rules*. <https://developer.r-project.org/nonstandard-eval.pdf>
### See Also
`[evalq](eval)`, `<attach>`, `<assign>`, `<transform>`.
### Examples
```
with(mtcars, mpg[cyl == 8 & disp > 350])
# is the same as, but nicer than
mtcars$mpg[mtcars$cyl == 8 & mtcars$disp > 350]
require(stats); require(graphics)
# examples from glm:
with(data.frame(u = c(5,10,15,20,30,40,60,80,100),
lot1 = c(118,58,42,35,27,25,21,19,18),
lot2 = c(69,35,26,21,18,16,13,12,12)),
list(summary(glm(lot1 ~ log(u), family = Gamma)),
summary(glm(lot2 ~ log(u), family = Gamma))))
aq <- within(airquality, { # Notice that multiple vars can be changed
lOzone <- log(Ozone)
Month <- factor(month.abb[Month])
cTemp <- round((Temp - 32) * 5/9, 1) # From Fahrenheit to Celsius
S.cT <- Solar.R / cTemp # using the newly created variable
rm(Day, Temp)
})
head(aq)
# example from boxplot:
with(ToothGrowth, {
boxplot(len ~ dose, boxwex = 0.25, at = 1:3 - 0.2,
subset = (supp == "VC"), col = "yellow",
main = "Guinea Pigs' Tooth Growth",
xlab = "Vitamin C dose mg",
ylab = "tooth length", ylim = c(0, 35))
boxplot(len ~ dose, add = TRUE, boxwex = 0.25, at = 1:3 + 0.2,
subset = supp == "OJ", col = "orange")
legend(2, 9, c("Ascorbic acid", "Orange juice"),
fill = c("yellow", "orange"))
})
# alternate form that avoids subset argument:
with(subset(ToothGrowth, supp == "VC"),
boxplot(len ~ dose, boxwex = 0.25, at = 1:3 - 0.2,
col = "yellow", main = "Guinea Pigs' Tooth Growth",
xlab = "Vitamin C dose mg",
ylab = "tooth length", ylim = c(0, 35)))
with(subset(ToothGrowth, supp == "OJ"),
boxplot(len ~ dose, add = TRUE, boxwex = 0.25, at = 1:3 + 0.2,
col = "orange"))
legend(2, 9, c("Ascorbic acid", "Orange juice"),
fill = c("yellow", "orange"))
```
r None
`cut` Convert Numeric to Factor
--------------------------------
### Description
`cut` divides the range of `x` into intervals and codes the values in `x` according to which interval they fall. The leftmost interval corresponds to level one, the next leftmost to level two and so on.
### Usage
```
cut(x, ...)
## Default S3 method:
cut(x, breaks, labels = NULL,
include.lowest = FALSE, right = TRUE, dig.lab = 3,
ordered_result = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector which is to be converted to a factor by cutting. |
| `breaks` | either a numeric vector of two or more unique cut points or a single number (greater than or equal to 2) giving the number of intervals into which `x` is to be cut. |
| `labels` | labels for the levels of the resulting category. By default, labels are constructed using `"(a,b]"` interval notation. If `labels = FALSE`, simple integer codes are returned instead of a factor. |
| `include.lowest` | logical, indicating if an ‘x[i]’ equal to the lowest (or highest, for `right = FALSE`) ‘breaks’ value should be included. |
| `right` | logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa. |
| `dig.lab` | integer which is used when labels are not given. It determines the number of digits used in formatting the break numbers. |
| `ordered_result` | logical: should the result be an ordered factor? |
| `...` | further arguments passed to or from other methods. |
### Details
When `breaks` is specified as a single number, the range of the data is divided into `breaks` pieces of equal length, and then the outer limits are moved away by 0.1% of the range to ensure that the extreme values both fall within the break intervals. (If `x` is a constant vector, equal-length intervals are created, one of which includes the single value.)
If a `labels` parameter is specified, its values are used to name the factor levels. If none is specified, the factor level labels are constructed as `"(b1, b2]"`, `"(b2, b3]"` etc. for `right = TRUE` and as `"[b1, b2)"`, ... if `right = FALSE`. In this case, `dig.lab` indicates the minimum number of digits that should be used in formatting the numbers `b1`, `b2`, .... A larger value (up to 12) will be used if needed to distinguish between any pair of endpoints: if this fails, labels such as `"Range3"` will be used. Formatting is done by `[formatC](formatc)`.
The default method will sort a numeric vector of `breaks`, but other methods are not required to and `labels` will correspond to the intervals after sorting.
As from **R** 3.2.0, `getOption("OutDec")` is consulted when labels are constructed for `labels = NULL`.
### Value
A `<factor>` is returned, unless `labels = FALSE` which results in an integer vector of level codes.
Values which fall outside the range of `breaks` are coded as `NA`, as are `NaN` and `NA` values.
### Note
Instead of `table(cut(x, br))`, `hist(x, br, plot = FALSE)` is more efficient and less memory hungry. Instead of `cut(*, labels = FALSE)`, `[findInterval](findinterval)()` is more efficient.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<split>` for splitting a variable according to a group factor; `<factor>`, `<tabulate>`, `<table>`, `[findInterval](findinterval)`.
`[quantile](../../stats/html/quantile)` for ways of choosing breaks of roughly equal content (rather than length).
`[.bincode](bincode)` for a bare-bones version.
### Examples
```
Z <- stats::rnorm(10000)
table(cut(Z, breaks = -6:6))
sum(table(cut(Z, breaks = -6:6, labels = FALSE)))
sum(graphics::hist(Z, breaks = -6:6, plot = FALSE)$counts)
cut(rep(1,5), 4) #-- dummy
tx0 <- c(9, 4, 6, 5, 3, 10, 5, 3, 5)
x <- rep(0:8, tx0)
stopifnot(table(x) == tx0)
table( cut(x, breaks = 8))
table( cut(x, breaks = 3*(-2:5)))
table( cut(x, breaks = 3*(-2:5), right = FALSE))
##--- some values OUTSIDE the breaks :
table(cx <- cut(x, breaks = 2*(0:4)))
table(cxl <- cut(x, breaks = 2*(0:4), right = FALSE))
which(is.na(cx)); x[is.na(cx)] #-- the first 9 values 0
which(is.na(cxl)); x[is.na(cxl)] #-- the last 5 values 8
## Label construction:
y <- stats::rnorm(100)
table(cut(y, breaks = pi/3*(-3:3)))
table(cut(y, breaks = pi/3*(-3:3), dig.lab = 4))
table(cut(y, breaks = 1*(-3:3), dig.lab = 4))
# extra digits don't "harm" here
table(cut(y, breaks = 1*(-3:3), right = FALSE))
#- the same, since no exact INT!
## sometimes the default dig.lab is not enough to avoid confusion:
aaa <- c(1,2,3,4,5,2,3,4,5,6,7)
cut(aaa, 3)
cut(aaa, 3, dig.lab = 4, ordered_result = TRUE)
## one way to extract the breakpoints
labs <- levels(cut(aaa, 3))
cbind(lower = as.numeric( sub("\\((.+),.*", "\\1", labs) ),
upper = as.numeric( sub("[^,]*,([^]]*)\\]", "\\1", labs) ))
```
r None
`callCC` Call With Current Continuation
----------------------------------------
### Description
A downward-only version of Scheme's call with current continuation.
### Usage
```
callCC(fun)
```
### Arguments
| | |
| --- | --- |
| `fun` | function of one argument, the exit procedure. |
### Details
`callCC` provides a non-local exit mechanism that can be useful for early termination of a computation. `callCC` calls `fun` with one argument, an *exit function*. The exit function takes a single argument, the intended return value. If the body of `fun` calls the exit function then the call to `callCC` immediately returns, with the value supplied to the exit function as the value returned by `callCC`.
### Author(s)
Luke Tierney
### Examples
```
# The following all return the value 1
callCC(function(k) 1)
callCC(function(k) k(1))
callCC(function(k) {k(1); 2})
callCC(function(k) repeat k(1))
```
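As a further hedged sketch (not part of the original page), the exit function can implement an early return from a search; `first_match` and its arguments are illustrative names, not part of the API:

```
## Return the first element of xs satisfying pred, or NULL if none does.
first_match <- function(xs, pred)
  callCC(function(exit) {
    for (x in xs) if (pred(x)) exit(x)   # jumps straight out of callCC
    NULL                                 # reached only when nothing matched
  })
stopifnot(identical(first_match(1:10, function(x) x > 4), 5L),
          is.null(first_match(1:3, function(x) x > 4)))
```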
r None
`RdUtils` Utilities for Processing Rd Files
--------------------------------------------
### Description
Utilities for converting files in R documentation (Rd) format to other formats or for creating indices from them, and for converting documentation in other formats to Rd format.
### Usage
```
R CMD Rdconv [options] file
R CMD Rd2pdf [options] files
```
### Arguments
| | |
| --- | --- |
| `file` | the path to a file to be processed. |
| `files` | a list of file names specifying the R documentation sources to use, by either giving the paths to the files, or the path to a directory with the sources of a package. |
| `options` | further options to control the processing, or for obtaining information about usage and version of the utility. |
### Details
`R CMD Rdconv` converts Rd format to plain text, HTML or LaTeX formats: it can also extract the examples.
`R CMD Rd2pdf` is the user-level program for producing PDF output from Rd sources. It will make use of the environment variables R\_PAPERSIZE (set by `R CMD`, with a default set when **R** was installed: values for R\_PAPERSIZE are `a4`, `letter`, `legal` and `executive`) and R\_PDFVIEWER (the PDF previewer). Also, RD2PDF\_INPUTENC can be set to `inputenx` to make use of the LaTeX package of that name rather than `inputenc`: this might be needed for better support of the UTF-8 encoding.
`R CMD Rd2pdf` calls `tools::[texi2pdf](../../tools/html/texi2dvi)` to produce its PDF file: see its help for the possibilities for the `texi2dvi` command which that function uses (and which can be overridden by setting environment variable R\_TEXI2DVICMD).
Use `R CMD foo --help` to obtain usage information on utility `foo`.
### See Also
The chapter ‘Processing Rd format’ in the ‘Writing R Extensions’ manual.
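### Examples

A hedged sketch of typical invocations (the file and package paths here are hypothetical):

```
R CMD Rdconv --type=html man/myfun.Rd -o myfun.html
R CMD Rdconv --type=example man/myfun.Rd
R CMD Rd2pdf --output=mypkg.pdf mypkg
```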
r None
`vector` Vectors
-----------------
### Description
`vector` produces a vector of the given length and mode.
`as.vector`, a generic, attempts to coerce its argument into a vector of mode `mode` (the default is to coerce to whichever vector mode is most convenient): if the result is atomic all attributes are removed.
`is.vector` returns `TRUE` if `x` is a vector of the specified mode having no attributes *other than names*. It returns `FALSE` otherwise.
### Usage
```
vector(mode = "logical", length = 0)
as.vector(x, mode = "any")
is.vector(x, mode = "any")
```
### Arguments
| | |
| --- | --- |
| `mode` | character string naming an atomic mode or `"list"` or `"expression"` or (except for `vector`) `"any"`. Currently, `is.vector()` allows any type (see `<typeof>`) for `mode`, and when mode is not `"any"`, `is.vector(x, mode)` is almost the same as `typeof(x) == mode`. |
| `length` | a non-negative integer specifying the desired length. For a [long vector](longvectors), i.e., `length > .Machine$integer.max`, it has to be of type `"double"`. Supplying an argument of length other than one is an error. |
| `x` | an **R** object. |
### Details
The atomic modes are `"logical"`, `"integer"`, `"numeric"` (synonym `"double"`), `"complex"`, `"character"` and `"raw"`.
If `mode = "any"`, `is.vector` may return `TRUE` for the atomic modes, `<list>` and `<expression>`. For any `mode`, it will return `FALSE` if `x` has any attributes except names. (This is incompatible with S.) On the other hand, `as.vector` removes *all* attributes including names for results of atomic mode (but not those of mode `"list"` nor `"expression"`).
Note that factors are *not* vectors; `is.vector` returns `FALSE` and `as.vector` converts a factor to a character vector for `mode = "any"`.
### Value
For `vector`, a vector of the given length and mode. Logical vector elements are initialized to `FALSE`, numeric vector elements to `0`, character vector elements to `""`, raw vector elements to `nul` bytes and list/expression elements to `NULL`.
For `as.vector`, a vector (atomic or of type list or expression). All attributes are removed from the result if it is of an atomic mode, but not in general for a list result. The default method handles 24 input types and 12 values of `mode`: the details of most coercions are undocumented and subject to change.
For `is.vector`, `TRUE` or `FALSE`. `is.vector(x, mode = "numeric")` can be true for vectors of types `"integer"` or `"double"` whereas `is.vector(x, mode = "double")` can only be true for those of type `"double"`.
### Methods for `as.vector()`
Writers of methods for `as.vector` need to take care to follow the conventions of the default method. In particular
* Argument `mode` can be `"any"`, any of the atomic modes, `"list"`, `"expression"`, `"symbol"`, `"pairlist"` or one of the aliases `"double"` and `"name"`.
* The return value should be of the appropriate mode. For `mode = "any"` this means an atomic vector or list.
* Attributes should be treated appropriately: in particular when the result is an atomic vector there should be no attributes, not even names.
* `is.vector(as.vector(x, m), m)` should be true for any mode `m`, including the default `"any"`.
### Note
`as.vector` and `is.vector` are quite distinct from the meaning of the formal class `"vector"` in the methods package, and hence `[as](../../methods/html/as)(x, "vector")` and `[is](../../methods/html/is)(x, "vector")`.
Note that `as.vector(x)` is not necessarily a null operation if `is.vector(x)` is true: any names will be removed from an atomic vector.
Non-vector `mode`s `"symbol"` (synonym `"name"`) and `"pairlist"` are accepted but have long been undocumented: they are used to implement `[as.name](name)` and `[as.pairlist](list)`, and those functions should preferably be used directly. None of the description here applies to those `mode`s: see the help for the preferred forms.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<c>`, `[is.numeric](numeric)`, `[is.list](list)`, etc.
### Examples
```
df <- data.frame(x = 1:3, y = 5:7)
## Error:
try(as.vector(data.frame(x = 1:3, y = 5:7), mode = "numeric"))
x <- c(a = 1, b = 2)
is.vector(x)
as.vector(x)
all.equal(x, as.vector(x)) ## FALSE
###-- All the following are TRUE:
is.list(df)
! is.vector(df)
! is.vector(df, mode = "list")
is.vector(list(), mode = "list")
```
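A short hedged addition illustrating the factor behaviour noted in the Details (factors are not vectors, and `as.vector` converts them to character for `mode = "any"`):

```
f <- factor(c("a", "b", "a"))
stopifnot(!is.vector(f),
          identical(as.vector(f), c("a", "b", "a")))  # character, levels dropped
```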
r None
`lower.tri` Lower and Upper Triangular Part of a Matrix
--------------------------------------------------------
### Description
Returns a matrix of logicals the same size as a given matrix with entries `TRUE` in the lower or upper triangle.
### Usage
```
lower.tri(x, diag = FALSE)
upper.tri(x, diag = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix or other **R** object with `length(dim(x)) == 2`. For back compatibility reasons, when the above is not fulfilled, `[as.matrix](matrix)(x)` is called first. |
| `diag` | logical. Should the diagonal be included? |
### See Also
`<diag>`, `<matrix>`; further `<row>` and `<col>` on which `lower.tri()` and `upper.tri()` are built.
### Examples
```
(m2 <- matrix(1:20, 4, 5))
lower.tri(m2)
m2[lower.tri(m2)] <- NA
m2
```
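One common use, sketched here as a hedged addition to the page: mirror the lower triangle into the upper one to build a symmetric matrix.

```
m <- matrix(0, 3, 3)
m[lower.tri(m)] <- 1:3                  # filled column-wise
m[upper.tri(m)] <- t(m)[upper.tri(m)]   # mirror across the diagonal
stopifnot(isSymmetric(m))
```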
r None
`eapply` Apply a Function Over Values in an Environment
--------------------------------------------------------
### Description
`eapply` applies `FUN` to the named values from an `<environment>` and returns the results as a list. The user can request that all named objects are used (normally names that begin with a dot are not). The output is not sorted and no enclosing environments are searched.
### Usage
```
eapply(env, FUN, ..., all.names = FALSE, USE.NAMES = TRUE)
```
### Arguments
| | |
| --- | --- |
| `env` | environment to be used. |
| `FUN` | the function to be applied, found *via* `<match.fun>`. In the case of functions like `+`, `%*%`, etc., the function name must be backquoted or quoted. |
| `...` | optional arguments to `FUN`. |
| `all.names` | a logical indicating whether to apply the function to all values. |
| `USE.NAMES` | logical indicating whether the resulting list should have `<names>`. |
### Value
A named (unless `USE.NAMES = FALSE`) list. Note that the order of the components is arbitrary for hashed environments.
### See Also
`<environment>`, `<lapply>`.
### Examples
```
require(stats)
env <- new.env(hash = FALSE) # so the order is fixed
env$a <- 1:10
env$beta <- exp(-3:3)
env$logic <- c(TRUE, FALSE, FALSE, TRUE)
# what have we there?
utils::ls.str(env)
# compute the mean for each list element
eapply(env, mean)
unlist(eapply(env, mean, USE.NAMES = FALSE))
# median and quartiles for each element (making use of "..." passing):
eapply(env, quantile, probs = 1:3/4)
eapply(env, quantile)
```
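A hedged sketch of the `all.names` behaviour described above (names beginning with a dot are skipped by default); `setequal` is used because the order is arbitrary for hashed environments:

```
e <- new.env()
e$x <- 1:4
e$.hidden <- "only seen with all.names = TRUE"
stopifnot(identical(names(eapply(e, length)), "x"),
          setequal(names(eapply(e, length, all.names = TRUE)),
                   c("x", ".hidden")))
```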
r None
`Hyperbolic` Hyperbolic Functions
----------------------------------
### Description
These functions give the obvious hyperbolic functions. They respectively compute the hyperbolic cosine, sine, tangent, and their inverses, arc-cosine, arc-sine, arc-tangent (or ‘*area cosine*’, etc).
### Usage
```
cosh(x)
sinh(x)
tanh(x)
acosh(x)
asinh(x)
atanh(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric or complex vector |
### Details
These are [internal generic](internalmethods) <primitive> functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic.
Branch cuts are consistent with the inverse trigonometric functions `asin` *et seq*, and agree with those defined in Abramowitz and Stegun, figure 4.7, page 86. The behaviour actually on the cuts follows the C99 standard which requires continuity coming round the endpoint in a counter-clockwise direction.
### S4 methods
All are S4 generic functions: methods can be defined for them individually or via the `[Math](../../methods/html/s4groupgeneric)` group generic.
### References
Abramowitz, M. and Stegun, I. A. (1972) *Handbook of Mathematical Functions.* New York: Dover.
Chapter 4. Elementary Transcendental Functions: Logarithmic, Exponential, Circular and Hyperbolic Functions
### See Also
The trigonometric functions, `[cos](trig)`, `[sin](trig)`, `[tan](trig)`, and their inverses `[acos](trig)`, `[asin](trig)`, `[atan](trig)`.
The logistic distribution function `[plogis](../../stats/html/logistic)` is a shifted version of `tanh()` for numeric `x`.
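### Examples

A few hedged identity checks (not part of the original page):

```
x <- seq(-2, 2, by = 0.5)
stopifnot(all.equal(cosh(x)^2 - sinh(x)^2, rep(1, length(x))),
          all.equal(tanh(x), sinh(x) / cosh(x)),
          all.equal(asinh(sinh(x)), x))   # inverse on the real line
```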
r None
`strtoi` Convert Strings to Integers
-------------------------------------
### Description
Convert strings to integers according to the given base using the C function `strtol`, or choose a suitable base following the C rules.
### Usage
```
strtoi(x, base = 0L)
```
### Arguments
| | |
| --- | --- |
| `x` | a character vector, or something coercible to this by `[as.character](character)`. |
| `base` | an integer which is between 2 and 36 inclusive, or zero (default). |
### Details
Conversion is based on the C library function `strtol`.
For the default `base = 0L`, the base is chosen from the string representation of that element of `x`, so different elements can have different bases (see the first example). The standard C rules for choosing the base are that octal constants (prefix `0` not followed by `x` or `X`) and hexadecimal constants (prefix `0x` or `0X`) are interpreted as base `8` and `16` respectively; all other strings are interpreted as base `10`.
For a base greater than `10`, letters `a` to `z` (or `A` to `Z`) are used to represent `10` to `35`.
### Value
An integer vector of the same length as `x`. Values which cannot be interpreted as integers or would overflow are returned as `[NA\_integer\_](na)`.
### See Also
For decimal strings `[as.integer](integer)` is equally useful.
### Examples
```
strtoi(c("0xff", "077", "123"))
strtoi(c("ffff", "FFFF"), 16L)
strtoi(c("177", "377"), 8L)
```
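Two further hedged examples: letters represent digits above 9 for bases over 10, and inputs invalid in the given base yield `NA`:

```
stopifnot(strtoi("z", 36L) == 35L,
          strtoi("ff", 16L) == 255L,
          is.na(strtoi("9", 8L)))   # '9' is not an octal digit
```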
r None
`octmode` Display Numbers in Octal
-----------------------------------
### Description
Convert or print integers in octal format, with as many digits as are needed to display the largest, using leading zeroes as necessary.
### Usage
```
as.octmode(x)
## S3 method for class 'octmode'
as.character(x, ...)
## S3 method for class 'octmode'
format(x, width = NULL, ...)
## S3 method for class 'octmode'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | An object, for the methods inheriting from class `"octmode"`. |
| `width` | `NULL` or a positive integer specifying the minimum field width to be used, with padding by leading zeroes. |
| `...` | further arguments passed to or from other methods. |
### Details
Class `"octmode"` consists of integer vectors with that class attribute, used merely to ensure that they are printed in octal notation, specifically for Unix-like file permissions such as `755`. Subsetting (`[[](extract)`) works too.
If `width = NULL` (the default), the output is padded with leading zeroes to the smallest width needed for all the non-missing elements.
`as.octmode` can convert integers (of [type](typeof) `"integer"` or `"double"`) and character vectors whose elements contain only digits `0-7` (or are `NA`) to class `"octmode"`.
There is a `[!](logic)` method and methods for `[|](logic)` and `[&](logic)`: these recycle their arguments to the length of the longer and then apply the operators bitwise to each element.
### See Also
These are auxiliary functions for `<file.info>`.
`<hexmode>`, `<sprintf>` for other options in converting integers to octal, `<strtoi>` to convert octal strings to integers.
### Examples
```
(on <- as.octmode(c(16, 32, 127:129))) # "020" "040" "177" "200" "201"
unclass(on[3:4]) # subsetting
## manipulate file modes
fmode <- as.octmode("170")
(fmode | "644") & "755"
umask <- Sys.umask(NA) # depends on platform
c(fmode, "666", "755") & !umask
```
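A hedged illustration of the `width` argument and of the common zero-padding described in the Details:

```
stopifnot(identical(format(as.octmode(8), width = 4), "0010"),
          identical(format(as.octmode(c(8, 64))), c("010", "100")))
```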
r None
`format.info` format(.) Information
------------------------------------
### Description
Information is returned on how `<format>(x, digits, nsmall)` would be formatted.
### Usage
```
format.info(x, digits = NULL, nsmall = 0)
```
### Arguments
| | |
| --- | --- |
| `x` | an atomic vector; a potential argument of `<format>(x, ...)`. |
| `digits` | how many significant digits are to be used for numeric and complex `x`. The default, `NULL`, uses `[getOption](options)("digits")`. |
| `nsmall` | (see `<format>(..., nsmall)`). |
### Value
An `<integer>` `<vector>` of length 1, 3 or 6, say `r`.
For logical, integer and character vectors a single element, the width which would be used by `format` if `width = NULL`.
For numeric vectors:
| | |
| --- | --- |
| `r[1]` | width (in characters) used by `format(x)` |
| `r[2]` | number of digits after decimal point. |
| `r[3]` | in `0:2`; if *≥*`1`, *exponential* representation would be used, with exponent length of `r[3]+1`. |
For a complex vector the first three elements refer to the real parts, and there are three further elements corresponding to the imaginary parts.
### See Also
`<format>` (notably about `digits >= 16`), `[formatC](formatc)`.
### Examples
```
dd <- options("digits") ; options(digits = 7) #-- for the following
format.info(123) # 3 0 0
format.info(pi) # 8 6 0
format.info(1e8) # 5 0 1 - exponential "1e+08"
format.info(1e222) # 6 0 2 - exponential "1e+222"
x <- pi*10^c(-10,-2,0:2,8,20)
names(x) <- formatC(x, width = 1, digits = 3, format = "g")
cbind(sapply(x, format))
t(sapply(x, format.info))
## using at least 8 digits right of "."
t(sapply(x, format.info, nsmall = 8))
# Reset old options:
options(dd)
```
r None
`UTF8filepaths` File Paths not in the Native Encoding
------------------------------------------------------
### Description
Most modern file systems store file-path components (names of directories and files) in a character encoding of wide scope: usually UTF-8 on a Unix-alike and UCS-2/UTF-16 on Windows. However, this was not true when **R** was first developed and there are still exceptions amongst file systems, e.g. FAT32.
This was not something anticipated by the C and POSIX standards which only provide means to access files *via* file paths encoded in the current locale, for example those specified in Latin-1 in a Latin-1 locale.
Everything here apart from the specific section on Windows is about Unix-alikes.
### Details
It is possible to mark character strings (elements of character vectors) as being in UTF-8 or Latin-1 (see `[Encoding](encoding)`). This allows file paths not in the native encoding to be expressed in **R** character vectors but there is almost no way to use them unless they can be translated to the native encoding. That is of course not a problem if that is UTF-8, so these details are really only relevant to the use of a non-UTF-8 locale (including a C locale) on a Unix-alike.
Functions to open a file such as `[file](connections)`, `[fifo](connections)`, `[pipe](connections)`, `[gzfile](connections)`, `[bzfile](connections)`, `[xzfile](connections)` and `[unz](connections)` give an error for non-native filepaths. Where functions look at existence such as `file.exists`, `[dir.exists](files2)`, `<unlink>`, `<file.info>` and `<list.files>`, non-native filepaths are treated as non-existent.
Many other functions use `file` or `gzfile` to open their files.
`<file.path>` allows non-native file paths to be combined, marking them as UTF-8 if needed.
`<path.expand>` only handles paths in the native encoding.
### Windows
Windows provides proprietary entry points to access its file systems, and these gained ‘wide’ versions in Windows NT that allowed file paths in UCS-2/UTF-16 to be accessed from any locale.
Some **R** functions use these entry points when file paths are marked as Latin-1 or UTF-8 to allow access to paths not in the current encoding. These include `[file](connections)`, `<file.access>`, `[file.append](files)`, `[file.copy](files)`, `[file.create](files)`, `[file.exists](files)`, `<file.info>`, `[file.link](files)`, `[file.remove](files)`, `[file.rename](files)`, `[file.symlink](files)` and `[dir.create](files2)`, `[dir.exists](files2)`, `[normalizePath](normalizepath)`, `<path.expand>`, `[pipe](connections)`, `[Sys.glob](sys.glob)`, `Sys.junction`, `<unlink>` but not `[gzfile](connections)` `[bzfile](connections)`, `[xzfile](connections)` nor `[unz](connections)`.
For functions using `[gzfile](connections)` (including `<load>`, `[readRDS](readrds)`, `[read.dcf](dcf)` and `[tar](../../utils/html/tar)`), it is often possible to use a `<gzcon>` connection wrapping a `[file](connections)` connection.
Other notable exceptions are `<list.files>`, `[list.dirs](list.files)`, `<system>` and file-path inputs for graphics devices.
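A hedged sketch of the `gzcon` workaround mentioned above (the path is hypothetical and the code is not run here):

```
## Not run:
con <- gzcon(file("path/with/nonnative/name.rds", "rb"))
obj <- readRDS(con)
close(con)
## End(Not run)
```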
### Historical comment
Before **R** 4.0.0, file paths marked as being in Latin-1 or UTF-8 were silently translated to the native encoding using escapes such as <e7> or <U+00e7>. This created valid file names but maybe not those intended.
### Note
This document is still a work-in-progress.
r None
`nrow` The Number of Rows/Columns of an Array
----------------------------------------------
### Description
`nrow` and `ncol` return the number of rows or columns present in `x`. `NCOL` and `NROW` do the same treating a vector as a 1-column matrix, even a 0-length vector, compatibly with `[as.matrix](matrix)()` or `<cbind>()`, see the example.
### Usage
```
nrow(x)
ncol(x)
NCOL(x)
NROW(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a vector, array, data frame, or `[NULL](null)`. |
### Value
an `<integer>` of length 1 or `[NULL](null)`, the latter only for `ncol` and `nrow`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole (`ncol` and `nrow`.)
### See Also
`<dim>` which returns *all* dimensions, and `<length>` which gives a number (a ‘count’) also in cases where `dim()` is `NULL`, and hence `nrow()` and `ncol()` return `NULL`; `<array>`, `<matrix>`.
### Examples
```
ma <- matrix(1:12, 3, 4)
nrow(ma) # 3
ncol(ma) # 4
ncol(array(1:24, dim = 2:4)) # 3, the second dimension
NCOL(1:12) # 1
NROW(1:12) # 12, the length() of the vector
## as.matrix() produces 1-column matrices from 0-length vectors,
## and so does cbind() :
dim(as.matrix(numeric())) # 0 1
dim( cbind(numeric())) # ditto
## consequently, NCOL(.) gives 1, too :
NCOL(numeric()) # 1 and hence
NCOL(NULL) # 1
```
r None
`attributes` Object Attribute Lists
------------------------------------
### Description
These functions access an object's attributes. The first form below returns the object's attribute list. The replacement forms use the list on the right-hand side of the assignment as the object's attributes (if appropriate).
### Usage
```
attributes(x)
attributes(x) <- value
mostattributes(x) <- value
```
### Arguments
| | |
| --- | --- |
| `x` | any **R** object |
| `value` | an appropriate named `<list>` of attributes, or `NULL`. |
### Details
Unlike `<attr>` it is not an error to set attributes on a `NULL` object: it will first be coerced to an empty list.
Note that some attributes (namely `<class>`, `<comment>`, `<dim>`, `<dimnames>`, `<names>`, `<row.names>` and `[tsp](../../stats/html/tsp)`) are treated specially and have restrictions on the values which can be set. (Note that this is not true of `<levels>` which should be set for factors via the `levels` replacement function.)
Attributes are not stored internally as a list and should be thought of as a set and not as a vector, i.e., the *order* of the elements of `attributes()` does not matter. This is also reflected by `<identical>()`'s behaviour with the default argument `attrib.as.set = TRUE`. Attributes must have unique names (and `NA` is taken as `"NA"`, not a missing value).
Assigning attributes first removes all attributes, then sets any `dim` attribute and then the remaining attributes in the order given: this ensures that setting a `dim` attribute always precedes the `dimnames` attribute.
The `mostattributes` assignment takes special care for the `<dim>`, `<names>` and `<dimnames>` attributes, and assigns them only when known to be valid whereas an `attributes` assignment would give an error if any are not. It is principally intended for arrays, and should be used with care on classed objects. For example, it does not check that `<row.names>` are assigned correctly for data frames.
The names of a pairlist are not stored as attributes, but are reported as if they were (and can be set by the replacement form of `attributes`).
`[NULL](null)` objects cannot have attributes and attempts to assign them will promote the object to an empty list.
Both assignment and replacement forms of `attributes` are <primitive> functions.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<attr>`, `<structure>`.
### Examples
```
x <- cbind(a = 1:3, pi = pi) # simple matrix with dimnames
attributes(x)
## strip an object's attributes:
attributes(x) <- NULL
x # now just a vector of length 6
mostattributes(x) <- list(mycomment = "really special", dim = 3:2,
dimnames = list(LETTERS[1:3], letters[1:5]), names = paste(1:6))
x # dim(), but not {dim}names
```
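A hedged sketch of the assignment order described in the Details: `dim` is set before the remaining attributes, so `dimnames` may be listed first and the assignment still succeeds.

```
x <- 1:6
attributes(x) <- list(dimnames = list(NULL, c("a", "b", "c")), dim = c(2, 3))
stopifnot(identical(dim(x), c(2L, 3L)),
          identical(colnames(x), c("a", "b", "c")))
```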
r None
`system` Invoke a System Command
---------------------------------
### Description
`system` invokes the OS command specified by `command`.
### Usage
```
system(command, intern = FALSE,
ignore.stdout = FALSE, ignore.stderr = FALSE,
wait = TRUE, input = NULL, show.output.on.console = TRUE,
minimized = FALSE, invisible = TRUE, timeout = 0)
```
### Arguments
| | |
| --- | --- |
| `command` | the system command to be invoked, as a character string. |
| `intern` | a logical (not `NA`) which indicates whether to capture the output of the command as an **R** character vector. |
| `ignore.stdout, ignore.stderr` | a logical (not `NA`) indicating whether messages written to ‘stdout’ or ‘stderr’ should be ignored. |
| `wait` | a logical (not `NA`) indicating whether the **R** interpreter should wait for the command to finish, or run it asynchronously. This will be ignored (and the interpreter will always wait) if `intern = TRUE`. When running the command asynchronously, no output will be displayed on the `Rgui` console in Windows (it will be dropped, instead). |
| `input` | if a character vector is supplied, this is copied one string per line to a temporary file, and the standard input of `command` is redirected to the file. |
| `timeout` | timeout in seconds, ignored if 0. This is a limit for the elapsed time running `command` in a separate process. Fractions of seconds are ignored. |
| `show.output.on.console, minimized, invisible` | arguments that are accepted on Windows but ignored on this platform, with a warning. |
### Details
This interface has become rather complicated over the years: see `<system2>` for a more portable and flexible interface which is recommended for new code.
`command` is parsed as a command plus arguments separated by spaces. So if the path to the command (or a single argument such as a file path) contains spaces, it must be quoted e.g. by `[shQuote](shquote)`. Unix-alikes pass the command line to a shell (normally ‘/bin/sh’, and POSIX requires that shell), so `command` can be anything the shell regards as executable, including shell scripts, and it can contain multiple commands separated by `;`.
On Windows, `system` does not use a shell and there is a separate function `shell` which passes command lines to a shell.
If `intern` is `TRUE` then `popen` is used to invoke the command and the output collected, line by line, into an **R** `<character>` vector. If `intern` is `FALSE` then the C function `system` is used to invoke the command.
`wait` is implemented by appending `&` to the command: this is in principle shell-dependent, but required by POSIX and so widely supported.
When `timeout` is non-zero, the command is terminated after the given number of seconds. The termination works for typical commands, but is not guaranteed: it is possible to write a program that would keep running after the time is out. Timeouts can only be set with `wait = TRUE`.
Timeouts cannot be used with interactive commands: the command is run with standard input redirected from `/dev/null` and it must not modify terminal settings. As long as the tty `tostop` option is disabled (as it usually is by default), the executed command may write to standard output and standard error. One cannot rely on the execution time of child processes being included in the `user.child` and `sys.child` elements of the `proc_time` object returned by `proc.time`. For those times to be included, all child processes have to be waited for by their parents, which has to be implemented in the parent applications.
The ordering of arguments after the first two has changed from time to time: it is recommended to name all arguments after the first.
There are many pitfalls in using `system` to ascertain if a command can be run — `[Sys.which](sys.which)` is more suitable.
### Value
If `intern = TRUE`, a character vector giving the output of the command, one line per character string. (Output lines of more than 8095 bytes will be split on some systems.) If the command could not be run an **R** error is generated. If `command` runs but gives a non-zero exit status this will be reported with a warning and in the attribute `"status"` of the result: an attribute `"errmsg"` may also be available.
If `intern = FALSE`, the return value is an error code (`0` for success), given the invisible attribute (so needs to be printed explicitly). If the command could not be run for any reason, the value is `127` and a warning is issued (as from **R** 3.5.0). Otherwise if `wait = TRUE` the value is the exit status returned by the command, and if `wait = FALSE` it is `0` (the conventional success value).
If the command times out, a warning is reported and the exit status is `124`.
### Stdout and stderr
For command-line **R**, error messages written to ‘stderr’ will be sent to the terminal unless `ignore.stderr = TRUE`. They can be captured (in the most likely shells) by
```
system("some command 2>&1", intern = TRUE)
```
For GUIs, what happens to output sent to ‘stdout’ or ‘stderr’ if `intern = FALSE` is interface-specific, and it is unsafe to assume that such messages will appear on a GUI console (they do on the macOS GUI's console, but not on some others).
### Differences between Unix and Windows
How processes are launched differs fundamentally between Windows and Unix-alike operating systems, as do the higher-level OS functions on which this **R** function is built. So it should not be surprising that there are many differences between OSes in how `system` behaves. For the benefit of programmers, the more important ones are summarized in this section.
* The most important difference is that on a Unix-alike `system` launches a shell which then runs `command`. On Windows the command is run directly – use `shell` for an interface which runs `command` *via* a shell (by default the Windows shell `cmd.exe`, which has many differences from a POSIX shell).
This means that it cannot be assumed that redirection or piping will work in `system` (redirection sometimes does, but we have seen cases where it stopped working after a Windows security patch), and `<system2>` (or `shell`) must be used on Windows.
* What happens to `stdout` and `stderr` when not captured depends on how **R** is running: Windows batch commands behave like a Unix-alike, but from the Windows GUI they are generally lost. `system(intern = TRUE)` captures ‘stderr’ when run from the Windows GUI console unless `ignore.stderr = TRUE`.
* The behaviour on error is different in subtle ways (and has differed between **R** versions).
* The quoting conventions for `command` differ, but `[shQuote](shquote)` is a portable interface.
* Arguments `show.output.on.console`, `minimized`, `invisible` only do something on Windows (and are most relevant to `Rgui` there).
### See Also
`man system` and `man sh` for how this is implemented on the OS in use.
`[.Platform](platform)` for platform-specific variables.
`[pipe](connections)` to set up a pipe connection.
### Examples
```
# list all files in the current directory using the -F flag
## Not run: system("ls -F")
# t1 is a character vector, each element giving a line of output from who
# (if the platform has who)
t1 <- try(system("who", intern = TRUE))
try(system("ls fizzlipuzzli", intern = TRUE, ignore.stderr = TRUE))
# zero-length result since file does not exist, and will give warning.
```
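A further hedged sketch (assumes a POSIX shell, so it is marked not run): with `intern = FALSE` the return value is the command's exit status, and with `intern = TRUE` it is the captured output.

```
## Not run:
ok <- system("exit 0")    # invisible 0 on success
bad <- system("exit 3")   # non-zero exit status is returned (with a warning)
stopifnot(ok == 0L, bad == 3L,
          identical(system("echo hello", intern = TRUE), "hello"))
## End(Not run)
```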
r None
`Extremes` Maxima and Minima
-----------------------------
### Description
Returns the (regular or **p**arallel) maxima and minima of the input values.
`pmax*()` and `pmin*()` take one or more vectors as arguments, recycle them to common length and return a single vector giving the *‘parallel’* maxima (or minima) of the argument vectors.
### Usage
```
max(..., na.rm = FALSE)
min(..., na.rm = FALSE)
pmax(..., na.rm = FALSE)
pmin(..., na.rm = FALSE)
pmax.int(..., na.rm = FALSE)
pmin.int(..., na.rm = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | numeric or character arguments (see Note). |
| `na.rm` | a logical indicating whether missing values should be removed. |
### Details
`max` and `min` return the maximum or minimum of *all* the values present in their arguments, as `<integer>` if all are `logical` or `integer`, as `<double>` if all are numeric, and character otherwise.
If `na.rm` is `FALSE` an `NA` value in any of the arguments will cause a value of `NA` to be returned, otherwise `NA` values are ignored.
The minimum and maximum of a numeric empty set are `+Inf` and `-Inf` (in this order!) which ensures *transitivity*, e.g., `min(x1, min(x2)) == min(x1, x2)`. For numeric `x` `max(x) == -Inf` and `min(x) == +Inf` whenever `length(x) == 0` (after removing missing values if requested). However, `pmax` and `pmin` return `NA` if all the parallel elements are `NA` even for `na.rm = TRUE`.
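A small sketch of the empty-set and missing-value rules just described (both calls on an empty set emit a warning, suppressed here):

```
## Empty numeric set: min is +Inf, max is -Inf (each with a warning).
suppressWarnings(
  stopifnot(min(numeric(0)) == Inf,
            max(numeric(0)) == -Inf))
## pmin()/pmax() keep NA where *all* parallel elements are NA,
## even with na.rm = TRUE:
pmin(c(NA, 1), c(NA, 2), na.rm = TRUE)  # NA 1
```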
`pmax` and `pmin` take one or more vectors (or matrices) as arguments and return a single vector giving the ‘parallel’ maxima (or minima) of the vectors. The first element of the result is the maximum (minimum) of the first elements of all the arguments, the second element of the result is the maximum (minimum) of the second elements of all the arguments and so on. Shorter inputs (of non-zero length) are recycled if necessary. Attributes (see `<attributes>`: such as `<names>` or `<dim>`) are copied from the first argument (if applicable, e.g., *not* for an `S4` object).
`pmax.int` and `pmin.int` are faster internal versions only used when all arguments are atomic vectors and there are no classes: they drop all attributes. (Note that all versions fail for raw and complex vectors since these have no ordering.)
`max` and `min` are generic functions: methods can be defined for them individually or via the `[Summary](groupgeneric)` group generic. For this to work properly, the arguments `...` should be unnamed, and dispatch is on the first argument.
By definition the min/max of a numeric vector containing an `NaN` is `NaN`, except that the min/max of any vector containing an `NA` is `NA` even if it also contains an `NaN`. Note that `max(NA, Inf) == NA` even though the maximum would be `Inf` whatever the missing value actually is.
Character versions are sorted lexicographically, and this depends on the collating sequence of the locale in use: the help for ‘[Comparison](comparison)’ gives details. The max/min of an empty character vector is defined to be character `NA`. (One could argue that as `""` is the smallest character element, the maximum should be `""`, but there is no obvious candidate for the minimum.)
### Value
For `min` or `max`, a length-one vector. For `pmin` or `pmax`, a vector of length the longest of the input vectors, or length zero if one of the inputs had zero length.
The type of the result will be that of the highest of the inputs in the hierarchy integer < double < character.
For `min` and `max` if there are only numeric inputs and all are empty (after possible removal of `NA`s), the result is double (`Inf` or `-Inf`).
### S4 methods
`max` and `min` are part of the S4 `[Summary](../../methods/html/s4groupgeneric)` group generic. Methods for them must use the signature `x, ..., na.rm`.
### Note
‘Numeric’ arguments are vectors of type integer and numeric, and logical (coerced to integer). For historical reasons, `NULL` is accepted as equivalent to `integer(0)`.
`pmax` and `pmin` will also work on classed S3 or S4 objects with appropriate methods for comparison, `is.na` and `rep` (if recycling of arguments is needed).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<range>` (*both* min and max) and `<which.min>` (`which.max`) for the *arg min*, i.e., the location where an extreme value occurs.
‘[plotmath](../../grdevices/html/plotmath)’ for the use of `min` in plot annotation.
### Examples
```
require(stats); require(graphics)
min(5:1, pi) #-> one number
pmin(5:1, pi) #-> 5 numbers
x <- sort(rnorm(100)); cH <- 1.35
pmin(cH, quantile(x)) # no names
pmin(quantile(x), cH) # has names
plot(x, pmin(cH, pmax(-cH, x)), type = "b", main = "Huber's function")
cut01 <- function(x) pmax(pmin(x, 1), 0)
curve( x^2 - 1/4, -1.4, 1.5, col = 2)
curve(cut01(x^2 - 1/4), col = "blue", add = TRUE, n = 500)
## pmax(), pmin() preserve attributes of *first* argument
D <- diag(x = (3:1)/4) ; n0 <- numeric()
stopifnot(identical(D, cut01(D) ),
identical(n0, cut01(n0)),
identical(n0, cut01(NULL)),
identical(n0, pmax(3:1, n0, 2)),
identical(n0, pmax(n0, 4)))
```
r None
`interactive` Is R Running Interactively?
------------------------------------------
### Description
Return `TRUE` when **R** is being used interactively and `FALSE` otherwise.
### Usage
```
interactive()
```
### Details
An interactive **R** session is one in which it is assumed that there is a human operator to interact with, so for example **R** can prompt for corrections to incorrect input or ask what to do next or if it is OK to move to the next plot.
GUI consoles will arrange to start **R** in an interactive session. When **R** is run in a terminal (via `Rterm.exe` on Windows), it assumes that it is interactive if ‘stdin’ is connected to a (pseudo-)terminal and not if ‘stdin’ is redirected to a file or pipe. Command-line options --interactive (Unix) and --ess (Windows, `Rterm.exe`) override the default assumption. (On a Unix-alike, whether the `readline` command-line editor is used is **not** overridden by --interactive.)
Embedded uses of **R** can set a session to be interactive or not.
Internally, whether a session is interactive determines
* how some errors are handled and reported, e.g. see `<stop>` and `<options>("showWarnCalls")`.
* whether one of --save, --no-save or --vanilla is required, and if **R** ever asks whether to save the workspace.
* the choice of default graphics device launched when needed and by `[dev.new](../../grdevices/html/dev)`: see `<options>("device")`
* whether graphics devices ever ask for confirmation of a new page.
In addition, **R**'s own **R** code makes use of `interactive()`: for example `[help](../../utils/html/help)`, `[debugger](../../utils/html/debugger)` and `[install.packages](../../utils/html/install.packages)` do.
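A common guard pattern of this kind can be sketched as follows (the function name and prompt text are illustrative, not from any package):

```
## Only prompt when a human is present; fall back to a
## non-interactive default otherwise.
ask_or_default <- function(default = "n") {
  if (interactive()) readline("Proceed? (y/n) ") else default
}
```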
### Note
This is a <primitive> function.
### See Also
`<source>`, `[.First](startup)`
### Examples
```
.First <- function() if(interactive()) x11()
```
r None
`library.dynam` Loading DLLs from Packages
-------------------------------------------
### Description
Load the specified file of compiled code if it has not been loaded already, or unloads it.
### Usage
```
library.dynam(chname, package, lib.loc,
verbose = getOption("verbose"),
file.ext = .Platform$dynlib.ext, ...)
library.dynam.unload(chname, libpath,
verbose = getOption("verbose"),
file.ext = .Platform$dynlib.ext)
.dynLibs(new)
```
### Arguments
| | |
| --- | --- |
| `chname` | a character string naming a DLL (also known as a dynamic shared object or library) to load. |
| `package` | a character vector with the name of package. |
| `lib.loc` | a character vector describing the location of **R** library trees to search through. |
| `libpath` | the path to the loaded package whose DLL is to be unloaded. |
| `verbose` | a logical value indicating whether an announcement is printed on the console before loading the DLL. The default value is taken from the verbose entry in the system `<options>`. |
| `file.ext` | the extension (including . if used) to append to the file name to specify the library to be loaded. This defaults to the appropriate value for the operating system. |
| `...` | additional arguments needed by some libraries that are passed to the call to `[dyn.load](dynload)` to control how the library and its dependencies are loaded. |
| `new` | a list of `"DLLInfo"` objects corresponding to the DLLs loaded by packages. Can be missing. |
### Details
See `[dyn.load](dynload)` for what sort of objects these functions handle.
`library.dynam` is designed to be used inside a package rather than at the command line, and should really only be used inside `[.onLoad](ns-hooks)`. The system-specific extension for DLLs (e.g., ‘.so’ or ‘.sl’ on Unix-alike systems, ‘.dll’ on Windows) should not be added.
`library.dynam.unload` is designed for use in `[.onUnload](ns-hooks)`: it unloads the DLL and updates the value of `.dynLibs()`.
`.dynLibs` is used for getting (with no argument) or setting the DLLs which are currently loaded by packages (using `library.dynam`).
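A minimal sketch of the intended use inside a package's namespace hooks; the package name `"mypkg"` is a placeholder, not a real package:

```
## In a package's R code (e.g. R/zzz.R); "mypkg" is hypothetical.
.onLoad <- function(libname, pkgname) {
  library.dynam("mypkg", pkgname, libname)   # no ".dll"/".so" extension
}
.onUnload <- function(libpath) {
  library.dynam.unload("mypkg", libpath)     # keeps .dynLibs() up to date
}
```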
### Value
If `chname` is not specified, `library.dynam` returns an object of class `"[DLLInfoList](getloadeddlls)"` corresponding to the DLLs loaded by packages.
If `chname` is specified, an object of class `"[DLLInfo](getloadeddlls)"` that identifies the DLL and which can be used in future calls is returned invisibly. Note that the class `"[DLLInfo](getloadeddlls)"` has a method for `$` which can be used to resolve native symbols within that DLL.
`library.dynam.unload` invisibly returns an object of class `"[DLLInfo](getloadeddlls)"` identifying the DLL successfully unloaded.
`.dynLibs` returns an object of class `"[DLLInfoList](getloadeddlls)"` corresponding to its current value.
### Warning
Do not use `[dyn.unload](dynload)` on a DLL loaded by `library.dynam`: use `library.dynam.unload` to ensure that `.dynLibs` gets updated. Otherwise a subsequent call to `library.dynam` will be told the object is already loaded.
Note that whether or not it is possible to unload a DLL and then reload a revised version of the same file is OS-dependent: see the ‘Value’ section of the help for `[dyn.unload](dynload)`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[getLoadedDLLs](getloadeddlls)` for information on `"DLLInfo"` and `"DLLInfoList"` objects.
`[.onLoad](ns-hooks)`, `<library>`, `[dyn.load](dynload)`, `[.packages](zpackages)`, `[.libPaths](libpaths)`
`[SHLIB](../../utils/html/shlib)` for how to create suitable DLLs.
### Examples
```
## Which DLLs were dynamically loaded by packages?
library.dynam()
## More on library.dynam.unload() :
require(nlme)
nlme:::.onUnload # shows library.dynam.unload() call
detach("package:nlme") # by default, unload=FALSE , so,
tail(library.dynam(), 2)# nlme still there
## How to unload the DLL ?
## Best is to unload the namespace, unloadNamespace("nlme")
## If we need to do it separately which should be exceptional:
pd.file <- attr(packageDescription("nlme"), "file")
library.dynam.unload("nlme", libpath = sub("/Meta.*", '', pd.file))
tail(library.dynam(), 2)# 'nlme' is gone now
unloadNamespace("nlme") # now gives warning
```
r None
`browser` Environment Browser
------------------------------
### Description
Interrupt the execution of an expression and allow the inspection of the environment where `browser` was called from.
### Usage
```
browser(text = "", condition = NULL, expr = TRUE, skipCalls = 0L)
```
### Arguments
| | |
| --- | --- |
| `text` | a text string that can be retrieved once the browser is invoked. |
| `condition` | a condition that can be retrieved once the browser is invoked. |
| `expr` | an expression; if it evaluates to `TRUE` the debugger is invoked, otherwise control is returned directly. |
| `skipCalls` | how many previous calls to skip when reporting the calling context. |
### Details
A call to `browser` can be included in the body of a function. When reached, this causes a pause in the execution of the current expression and allows access to the **R** interpreter.
The purpose of the `text` and `condition` arguments are to allow helper programs (e.g., external debuggers) to insert specific values here, so that the specific call to browser (perhaps its location in a source file) can be identified and special processing can be achieved. The values can be retrieved by calling `[browserText](browsertext)` and `[browserCondition](browsertext)`.
The purpose of the `expr` argument is to allow for the illusion of conditional debugging. It is an illusion, because execution is always paused at the call to browser, but control is only passed to the evaluator described below if `expr` evaluates to `TRUE`. In most cases it is going to be more efficient to use an `if` statement in the calling program, but in some cases using this argument will be simpler.
The `skipCalls` argument should be used when the `browser()` call is nested within another debugging function: it will look further up the call stack to report its location.
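A sketch of the `expr` argument for conditional debugging; when `expr` evaluates to `FALSE`, control is returned immediately and execution continues:

```
f <- function(x) {
  browser(expr = any(x < 0))  # drop into the browser only on negative input
  sum(x)
}
f(c(1, 2, 3))  # no pause: expr is FALSE
```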
At the browser prompt the user can enter commands or **R** expressions, followed by a newline. The commands are
`c`
exit the browser and continue execution at the next statement.
`cont`
synonym for `c`.
`f`
finish execution of the current loop or function
`help`
print this list of commands
`n`
evaluate the next statement, stepping over function calls. For byte compiled functions interrupted by `browser` calls, `n` is equivalent to `c`.
`s`
evaluate the next statement, stepping into function calls. Again, byte compiled functions make `s` equivalent to `c`.
`where`
print a stack trace of all active function calls.
`r`
invoke a `"resume"` restart if one is available; interpreted as an **R** expression otherwise. Typically `"resume"` restarts are established for continuing from user interrupts.
`Q`
exit the browser and the current evaluation and return to the top-level prompt.
Leading and trailing whitespace is ignored, except for an empty line. Handling of empty lines depends on the `"browserNLdisabled"` [option](options); if it is `TRUE`, empty lines are ignored. If not, an empty line is the same as `n` (or `s`, if it was used most recently).
Anything else entered at the browser prompt is interpreted as an **R** expression to be evaluated in the calling environment: in particular typing an object name will cause the object to be printed, and `ls()` lists the objects in the calling frame. (If you want to look at an object with a name such as `n`, print it explicitly, or use autoprint via `(n)`.)
The number of lines printed for the deparsed call can be limited by setting `<options>(deparse.max.lines)`.
The browser prompt is of the form `Browse[n]>`: here `n` indicates the ‘browser level’. The browser can be called when browsing (and often is when `<debug>` is in use), and each recursive call increases the number. (The actual number is the number of ‘contexts’ on the context stack: this is usually `2` for the outer level of browsing and `1` when examining dumps in `[debugger](../../utils/html/debugger)`.)
This is a primitive function but does argument matching in the standard way.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer.
### See Also
`<debug>`, and `<traceback>` for the stack on error. `[browserText](browsertext)` for how to retrieve the text and condition.
r None
`Extract.factor` Extract or Replace Parts of a Factor
------------------------------------------------------
### Description
Extract or replace subsets of factors.
### Usage
```
## S3 method for class 'factor'
x[..., drop = FALSE]
## S3 method for class 'factor'
x[[...]]
## S3 replacement method for class 'factor'
x[...] <- value
## S3 replacement method for class 'factor'
x[[...]] <- value
```
### Arguments
| | |
| --- | --- |
| `x` | a factor |
| `...` | a specification of indices – see `[Extract](extract)`. |
| `drop` | logical. If true, unused levels are dropped. |
| `value` | character: a set of levels. Factor values are coerced to character. |
### Details
When unused levels are dropped the ordering of the remaining levels is preserved.
If `value` is not in `levels(x)`, a missing value is assigned with a warning.
Any `[contrasts](../../stats/html/contrasts)` assigned to the factor are preserved unless `drop = TRUE`.
The `[[` method supports argument `exact`.
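A small sketch of the rules above; the replacement warns that the invalid level generates `NA`:

```
f <- factor(c("a", "b", "a"))
f[1] <- "c"        # "c" is not a level: NA assigned, with a warning
is.na(f[1])        # TRUE
f[2, drop = TRUE]  # unused levels dropped, ordering preserved
```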
### Value
A factor with the same set of levels as `x` unless `drop = TRUE`.
### See Also
`<factor>`, `[Extract](extract)`.
### Examples
```
## following example(factor)
(ff <- factor(substring("statistics", 1:10, 1:10), levels = letters))
ff[, drop = TRUE]
factor(letters[7:10])[2:3, drop = TRUE]
```
r None
`any` Are Some Values True?
----------------------------
### Description
Given a set of logical vectors, is at least one of the values true?
### Usage
```
any(..., na.rm = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | zero or more logical vectors. Other objects of zero length are ignored, and the rest are coerced to logical ignoring any class. |
| `na.rm` | logical. If true `NA` values are removed before the result is computed. |
### Details
This is a generic function: methods can be defined for it directly or via the `[Summary](groupgeneric)` group generic. For this to work properly, the arguments `...` should be unnamed, and dispatch is on the first argument.
Coercion of types other than integer (raw, double, complex, character, list) gives a warning as this is often unintentional.
This is a <primitive> function.
### Value
The value is a logical vector of length one.
Let `x` denote the concatenation of all the logical vectors in `...` (after coercion), after removing `NA`s if requested by `na.rm = TRUE`.
The value returned is `TRUE` if at least one of the values in `x` is `TRUE`, and `FALSE` if all of the values in `x` are `FALSE` (including if there are no values). Otherwise the value is `NA` (which can only occur if `na.rm = FALSE` and `...` contains no `TRUE` values and at least one `NA` value).
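The value rules above can be sketched directly:

```
stopifnot(
  any(c(FALSE, TRUE)),              # at least one TRUE
  !any(logical(0)),                 # no values at all -> FALSE
  is.na(any(c(FALSE, NA))),         # no TRUE, at least one NA -> NA
  !any(c(FALSE, NA), na.rm = TRUE)  # NA removed first -> FALSE
)
```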
### S4 methods
This is part of the S4 `[Summary](../../methods/html/s4groupgeneric)` group generic. Methods for it must use the signature `x, ..., na.rm`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<all>`, the ‘complement’ of `any`.
### Examples
```
range(x <- sort(round(stats::rnorm(10) - 1.2, 1)))
if(any(x < 0)) cat("x contains negative values\n")
```
r None
`numeric` Numeric Vectors
--------------------------
### Description
Creates or coerces objects of type `"numeric"`. `is.numeric` is a more general test of an object being interpretable as numbers.
### Usage
```
numeric(length = 0)
as.numeric(x, ...)
is.numeric(x)
```
### Arguments
| | |
| --- | --- |
| `length` | A non-negative integer specifying the desired length. Double values will be coerced to integer: supplying an argument of length other than one is an error. |
| `x` | object to be coerced or tested. |
| `...` | further arguments passed to or from other methods. |
### Details
`numeric` is identical to `<double>` (and `real`). It creates a double-precision vector of the specified length with each element equal to `0`.
`as.numeric` is a generic function, but S3 methods must be written for `[as.double](double)`. It is identical to `as.double`.
`is.numeric` is an [internal generic](internalmethods) `primitive` function: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). It is **not** the same as `[is.double](double)`. Factors are handled by the default method, and there are methods for classes `"[Date](dates)"`, `"[POSIXt](datetimeclasses)"` and `"<difftime>"` (all of which return false). Methods for `is.numeric` should only return true if the base type of the class is `double` or `integer` *and* values can reasonably be regarded as numeric (e.g., arithmetic on them makes sense, and comparison should be done via the base type).
### Value
for `numeric` and `as.numeric` see `<double>`.
The default method for `is.numeric` returns `TRUE` if its argument is of <mode> `"numeric"` ([type](typeof) `"double"` or type `"integer"`) and not a factor, and `FALSE` otherwise. That is, `is.integer(x) || is.double(x)`, or `(mode(x) == "numeric") && !is.factor(x)`.
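The mode-based test can be sketched as:

```
stopifnot(
  is.numeric(1L), is.numeric(1.5),       # integer and double are numeric
  !is.numeric("1"), !is.numeric(1+0i),   # character and complex are not
  !is.numeric(factor("1"))               # factors are excluded
)
```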
### Warning
If `x` is a `<factor>`, `as.numeric` will return the underlying numeric (integer) representation, which is often meaningless as it may not correspond to the `factor` `<levels>`, see the ‘Warning’ section in `<factor>` (and the 2nd example below).
### S4 methods
`as.numeric` and `is.numeric` are internally S4 generic and so methods can be set for them *via* `setMethod`.
To ensure that `as.numeric` and `as.double` remain identical, S4 methods can only be set for `as.numeric`.
### Note on names
It is a historical anomaly that **R** has two names for its floating-point vectors, `<double>` and `<numeric>` (and formerly had `real`).
`double` is the name of the [type](typeof). `numeric` is the name of the <mode> and also of the implicit <class>. As an S4 formal class, use `"numeric"`.
The potential confusion is that **R** has used *<mode>* `"numeric"` to mean ‘double or integer’, which conflicts with the S4 usage. Thus `is.numeric` tests the mode, not the class, but `as.numeric` (which is identical to `as.double`) coerces to the class.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<double>`, `<integer>`, `[storage.mode](mode)`.
### Examples
```
## Conversion does trim whitespace; non-numeric strings give NA + warning
as.numeric(c("-.1"," 2.7 ","B"))
## Numeric values are sometimes accidentally converted to factors.
## Converting them back to numeric is trickier than you'd expect.
f <- factor(5:10)
as.numeric(f) # not what you might expect, probably not what you want
## what you typically meant and want:
as.numeric(as.character(f))
## the same, considerably more efficient (for long vectors):
as.numeric(levels(f))[f]
```
r None
`regex` Regular Expressions as used in R
-----------------------------------------
### Description
This help page documents the regular expression patterns supported by `<grep>` and related functions `grepl`, `regexpr`, `gregexpr`, `sub` and `gsub`, as well as by `<strsplit>` and optionally by `<agrep>` and `[agrepl](agrep)`.
### Details
A ‘regular expression’ is a pattern that describes a set of strings. Two types of regular expressions are used in **R**, *extended* regular expressions (the default) and *Perl-like* regular expressions used by `perl = TRUE`. There is also `fixed = TRUE` which can be considered to use a *literal* regular expression.
Other functions which use regular expressions (often via the use of `grep`) include `apropos`, `browseEnv`, `help.search`, `list.files` and `ls`. These will all use *extended* regular expressions.
Patterns are described here as they would be printed by `cat`: (*do remember that backslashes need to be doubled when entering **R** character strings*, e.g. from the keyboard).
Long regular expression patterns may or may not be accepted: the POSIX standard only requires up to 256 *bytes*.
### Extended Regular Expressions
This section covers the regular expressions allowed in the default mode of `grep`, `grepl`, `regexpr`, `gregexpr`, `sub`, `gsub`, `regexec` and `strsplit`. They use an implementation of the POSIX 1003.2 standard: that allows some scope for interpretation and the interpretations here are those currently used by **R**. The implementation supports some extensions to the standard.
Regular expressions are constructed analogously to arithmetic expressions, by using various operators to combine smaller expressions. The whole expression matches zero or more characters (read ‘character’ as ‘byte’ if `useBytes = TRUE`).
The fundamental building blocks are the regular expressions that match a single character. Most characters, including all letters and digits, are regular expressions that match themselves. Any metacharacter with special meaning may be quoted by preceding it with a backslash. The metacharacters in extended regular expressions are . \ | ( ) [ { ^ $ \* + ?, but note that whether these have a special meaning depends on the context.
Escaping non-metacharacters with a backslash is implementation-dependent. The current implementation interprets \a as BEL, \e as ESC, \f as FF, \n as LF, \r as CR and \t as TAB. (Note that these will be interpreted by **R**'s parser in literal character strings.)
A *character class* is a list of characters enclosed between [ and ] which matches any single character in that list; unless the first character of the list is the caret ^, when it matches any character *not* in the list. For example, the regular expression [0123456789] matches any single digit, and [^abc] matches anything except the characters a, b or c. A range of characters may be specified by giving the first and last characters, separated by a hyphen. (Because their interpretation is locale- and implementation-dependent, character ranges are best avoided. Some but not all implementations include both cases in ranges when doing caseless matching.) The only portable way to specify all ASCII letters is to list them all as the character class
[ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz].
(The current implementation uses numerical order of the encoding, normally a single-byte encoding or Unicode points.)
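A short sketch of character classes and negation with `grepl()`:

```
grepl("[0-9]", c("a1", "bc"))    # TRUE FALSE: contains a digit?
grepl("[^abc]", c("ab", "ad"))   # FALSE TRUE: any char outside {a,b,c}?
grepl("[a-c]", "d")              # FALSE: range a-c (use ranges with care)
```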
Certain named classes of characters are predefined. Their interpretation depends on the *locale* (see <locales>); the interpretation below is that of the POSIX locale.
[:alnum:]
Alphanumeric characters: [:alpha:] and [:digit:].
[:alpha:]
Alphabetic characters: [:lower:] and [:upper:].
[:blank:]
Blank characters: space and tab, and possibly other locale-dependent characters such as non-breaking space.
[:cntrl:]
Control characters. In ASCII, these characters have octal codes 000 through 037, and 177 (`DEL`). In another character set, these are the equivalent characters, if any.
[:digit:]
Digits: 0 1 2 3 4 5 6 7 8 9.
[:graph:]
Graphical characters: [:alnum:] and [:punct:].
[:lower:]
Lower-case letters in the current locale.
[:print:]
Printable characters: [:alnum:], [:punct:] and space.
[:punct:]
Punctuation characters:
! " # $ % & ' ( ) \* + , - . / : ; < = > ? @ [ \ ] ^ \_ ` { | } ~.
[:space:]
Space characters: tab, newline, vertical tab, form feed, carriage return, space and possibly other locale-dependent characters.
[:upper:]
Upper-case letters in the current locale.
[:xdigit:]
Hexadecimal digits:
0 1 2 3 4 5 6 7 8 9 A B C D E F a b c d e f.
For example, [[:alnum:]] means [0-9A-Za-z], except the latter depends upon the locale and the character encoding, whereas the former is independent of locale and character set. (Note that the brackets in these class names are part of the symbolic names, and must be included in addition to the brackets delimiting the bracket list.) Most metacharacters lose their special meaning inside a character class. To include a literal ], place it first in the list. Similarly, to include a literal ^, place it anywhere but first. Finally, to include a literal -, place it first or last (or, for `perl = TRUE` only, precede it by a backslash). (Only ^ - \ ] are special inside character classes.)
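The locale-independent named classes can be sketched as:

```
stopifnot(
   grepl("^[[:alnum:]]+$", "abc123"),  # letters and digits only
  !grepl("^[[:digit:]]+$", "12a"),     # "a" is not a digit
   grepl("[[:punct:]]", "no?")         # "?" is punctuation
)
```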
The period . matches any single character. The symbol \w matches a ‘word’ character (a synonym for [[:alnum:]\_], an extension) and \W is its negation ([^[:alnum:]\_]). Symbols \d, \s, \D and \S denote the digit and space classes and their negations (these are all extensions).
The caret ^ and the dollar sign $ are metacharacters that respectively match the empty string at the beginning and end of a line. The symbols \< and \> match the empty string at the beginning and end of a word. The symbol \b matches the empty string at either edge of a word, and \B matches the empty string provided it is not at an edge of a word. (The interpretation of ‘word’ depends on the locale and implementation: these are all extensions.)
A regular expression may be followed by one of several repetition quantifiers:
?
The preceding item is optional and will be matched at most once.
\*
The preceding item will be matched zero or more times.
+
The preceding item will be matched one or more times.
{n}
The preceding item is matched exactly `n` times.
{n,}
The preceding item is matched `n` or more times.
{n,m}
The preceding item is matched at least `n` times, but not more than `m` times.
By default repetition is greedy, so the maximal possible number of repeats is used. This can be changed to ‘minimal’ by appending `?` to the quantifier. (There are further quantifiers that allow approximate matching: see the TRE documentation.)
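Greedy versus minimal repetition, as described above (a sketch; both forms work with the default TRE engine):

```
x <- "<a><b>"
sub("<.*>",  "", x)  # greedy:  matches "<a><b>" entirely -> ""
sub("<.*?>", "", x)  # minimal: matches only "<a>"        -> "<b>"
```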
Regular expressions may be concatenated; the resulting regular expression matches any string formed by concatenating the substrings that match the concatenated subexpressions.
Two regular expressions may be joined by the infix operator |; the resulting regular expression matches any string matching either subexpression. For example, abba|cde matches either the string `abba` or the string `cde`. Note that alternation does not work inside character classes, where | has its literal meaning.
Repetition takes precedence over concatenation, which in turn takes precedence over alternation. A whole subexpression may be enclosed in parentheses to override these precedence rules.
The backreference \N, where N = 1 ... 9, matches the substring previously matched by the Nth parenthesized subexpression of the regular expression. (This is an extension for extended regular expressions: POSIX defines them only for basic ones.)
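A sketch of backreferences (recall that the backslash must be doubled in an **R** string):

```
grepl("(.)\\1", "book")  # TRUE: "oo" is a doubled character
grepl("(.)\\1", "back")  # FALSE: no doubled character
sub("(\\w+) \\1", "\\1", "hello hello world")  # collapse the repeat
```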
### Perl-like Regular Expressions
The `perl = TRUE` argument to `grep`, `regexpr`, `gregexpr`, `sub`, `gsub` and `strsplit` switches to the PCRE library that implements regular expression pattern matching using the same syntax and semantics as Perl 5.x, with just a few differences.
For complete details please consult the man pages for PCRE, especially `man pcrepattern` and `man pcreapi`, on your system or from the sources at <https://www.pcre.org>. (The version in use can be found by calling `[extSoftVersion](extsoftversion)`. It need not be the version described in the system's man page. PCRE1 (reported as version < 10.00 by `[extSoftVersion](extsoftversion)`) has been feature-frozen for some time (essentially 2012), the man pages at <https://www.pcre.org/original/doc/html/> should be a good match. PCRE2 (PCRE version >= 10.00) has man pages at <https://www.pcre.org/current/doc/html/>).
Perl regular expressions can be computed byte-by-byte or (UTF-8) character-by-character: the latter is used in all multibyte locales and if any of the inputs are marked as UTF-8 (see `[Encoding](encoding)`), or as Latin-1 except in a Latin-1 locale.
All the regular expressions described for extended regular expressions are accepted except \< and \>: in Perl all backslashed metacharacters are alphanumeric and backslashed symbols always are interpreted as a literal character. { is not special if it would be the start of an invalid interval specification. There can be more than 9 backreferences (but the replacement in `[sub](grep)` can only refer to the first 9).
Character ranges are interpreted in the numerical order of the characters, either as bytes in a single-byte locale or as Unicode code points in UTF-8 mode. So in either case [A-Za-z] specifies the set of ASCII letters.
In UTF-8 mode the named character classes only match ASCII characters: see \p below for an alternative.
The construct (?...) is used for Perl extensions in a variety of ways depending on what immediately follows the ?.
Perl-like matching can work in several modes, set by the options (?i) (caseless, equivalent to Perl's /i), (?m) (multiline, equivalent to Perl's /m), (?s) (single line, so a dot matches all characters, even new lines: equivalent to Perl's /s) and (?x) (extended, whitespace data characters are ignored unless escaped and comments are allowed: equivalent to Perl's /x). These can be concatenated, so for example, (?im) sets caseless multiline matching. It is also possible to unset these options by preceding the letter with a hyphen, and to combine setting and unsetting such as (?im-sx). These settings can be applied within patterns, and then apply to the remainder of the pattern. Additional options not in Perl include (?U) to set ‘ungreedy’ mode (so matching is minimal unless ? is used as part of the repetition quantifier, when it is greedy). Initially none of these options are set.
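Inline option setting with `perl = TRUE` can be sketched as:

```
grepl("(?i)abc", "ABC", perl = TRUE)   # caseless: TRUE
grepl("(?s)a.c", "a\nc", perl = TRUE)  # dot matches newline: TRUE
grepl("a.c", "a\nc", perl = TRUE)      # without (?s): FALSE
```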
If you want to remove the special meaning from a sequence of characters, you can do so by putting them between \Q and \E. This is different from Perl in that $ and @ are handled as literals in \Q...\E sequences in PCRE, whereas in Perl, $ and @ cause variable interpolation.
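For example (remembering that backslashes must be doubled in R string literals), \Q...\E turns metacharacters into literals:

```r
## \Q...\E removes the special meaning of enclosed characters
grepl("a.b",       "axb", perl = TRUE)  # TRUE:  '.' matches any character
grepl("\\Qa.b\\E", "axb", perl = TRUE)  # FALSE: '.' is literal inside \Q...\E
grepl("\\Qa.b\\E", "a.b", perl = TRUE)  # TRUE
```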
The escape sequences \d, \s and \w represent any decimal digit, space character and ‘word’ character (letter, digit or underscore in the current locale: in UTF-8 mode only ASCII letters and digits are considered) respectively, and their upper-case versions represent their negation. Vertical tab was not regarded as a space character in a `C` locale before PCRE 8.34. Sequences \h, \v, \H and \V match horizontal and vertical space or the negation. (In UTF-8 mode, these do match non-ASCII Unicode code points.)
There are additional escape sequences: \cx is cntrl-x for any x, \ddd is the octal character (for up to three digits unless interpretable as a backreference, as \1 to \7 always are), and \xhh specifies a character by two hex digits. In a UTF-8 locale, \x{h...} specifies a Unicode code point by one or more hex digits. (Note that some of these will be interpreted by **R**'s parser in literal character strings.)
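A few of these escapes in use (backslashes doubled so that PCRE, not **R**'s parser, interprets them; the \x{...} line assumes UTF-8 input):

```r
## character escapes, written as R string literals
grepl("\\x41", "A", perl = TRUE)           # TRUE: hex 41 is 'A'
grepl("\\cJ", "a\nb", perl = TRUE)         # TRUE: cntrl-J is a newline
grepl("\\x{20AC}", "\u20AC", perl = TRUE)  # TRUE: Euro sign, via UTF-8 mode
```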
Outside a character class, \A matches at the start of a subject (even in multiline mode, unlike ^), \Z matches at the end of a subject or before a newline at the end, \z matches only at the end of a subject, and \G matches at the first matching position in a subject (which is subtly different from Perl's end of the previous match). \C matches a single byte, including a newline, but its use is warned against. In UTF-8 mode, \R matches any Unicode newline character (not just CR), and \X matches any number of Unicode characters that form an extended Unicode sequence. \X, \R and \B cannot be used inside a character class (with PCRE1, they are treated as characters X, R and B; with PCRE2 they cause an error).
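The difference between \A and ^ shows up in multiline mode: ^ also matches after each internal newline, whereas \A only matches at the very start of the subject. A small illustration:

```r
x <- "first\nsecond"
## (?m) makes ^ match after each newline: two matches
regmatches(x, gregexpr("(?m)^\\w+", x, perl = TRUE))[[1]]  # "first" "second"
## \A matches only at the start of the subject: one match
regmatches(x, gregexpr("\\A\\w+", x, perl = TRUE))[[1]]    # "first"
```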
A hyphen (minus) inside a character class is treated as a range, unless it is first or last character in the class definition. It can be quoted to represent the hyphen literal (\-). PCRE1 allows an unquoted hyphen at some other locations inside a character class where it cannot represent a valid range, but PCRE2 reports an error in such cases.
In UTF-8 mode, some Unicode properties may be supported via \p{xx} and \P{xx} which match characters with and without property xx respectively. For a list of supported properties see the PCRE documentation, but for example Lu is ‘upper case letter’ and Sc is ‘currency symbol’. (This support depends on the PCRE library being compiled with ‘Unicode property support’ which can be checked *via* `<pcre_config>`. PCRE2 when compiled with Unicode support always supports also Unicode properties.)
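For instance (assuming the PCRE library in use was built with Unicode property support, which `pcre_config()` reports):

```r
## \p{Lu} = upper-case letter; \P{L} = anything that is not a letter
grepl("\\p{Lu}", "abC", perl = TRUE)       # TRUE: 'C' is an upper-case letter
gsub("\\P{L}", "", "a1b2c3", perl = TRUE)  # "abc": drop all non-letters
```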
The sequence (?# marks the start of a comment which continues up to the next closing parenthesis. Nested parentheses are not permitted. The characters that make up a comment play no part at all in the pattern matching.
If the extended option is set, an unescaped # character outside a character class introduces a comment that continues up to the next newline character in the pattern.
The pattern (?:...) groups characters just as parentheses do but does not make a backreference.
Patterns (?=...) and (?!...) are zero-width positive and negative lookahead *assertions*: they match if an attempt to match the `...` forward from the current position would succeed (or not), but use up no characters in the string being processed. Patterns (?<=...) and (?<!...) are the lookbehind equivalents: they do not allow repetition quantifiers nor \C in `...`.
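A short illustration of lookahead and lookbehind consuming no characters (example strings are illustrative):

```r
x <- "foo123bar"
## lookbehind: digits preceded by "foo"; "foo" is not part of the match
regmatches(x, regexpr("(?<=foo)\\d+", x, perl = TRUE))  # "123"
## lookahead: digits followed by "bar" (or not)
grepl("\\d+(?=bar)", x, perl = TRUE)                    # TRUE
grepl("\\d+(?=baz)", x, perl = TRUE)                    # FALSE
```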
`regexpr` and `gregexpr` support ‘named capture’. If groups are named, e.g., `"(?<first>[A-Z][a-z]+)"` then the positions of the matches are also returned by name. (Named backreferences are not supported by `sub`.)
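With named groups, `regexpr(*, perl = TRUE)` attaches `capture.start`, `capture.length` and `capture.names` attributes to its result, from which the named pieces can be extracted. A small sketch (the subject string is illustrative):

```r
m <- regexpr("(?<first>[A-Z][a-z]+) (?<last>[A-Z][a-z]+)",
             "Ada Lovelace", perl = TRUE)
attr(m, "capture.names")                     # "first" "last"
st  <- attr(m, "capture.start")
len <- attr(m, "capture.length")
substring("Ada Lovelace", st, st + len - 1)  # "Ada" "Lovelace"
```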
Atomic grouping, possessive qualifiers and conditional and recursive patterns are not covered here.
### Author(s)
This help page is based on the TRE documentation and the POSIX standard, and the `pcre2pattern` man page from PCRE2 10.35.
### See Also
`<grep>`, `[apropos](../../utils/html/apropos)`, `[browseEnv](../../utils/html/browseenv)`, `[glob2rx](../../utils/html/glob2rx)`, `[help.search](../../utils/html/help.search)`, `<list.files>`, `<ls>`, `<strsplit>` and `<agrep>`.
The [TRE regexp syntax](https://htmlpreview.github.io/?https://raw.githubusercontent.com/laurikari/tre/master/doc/tre-syntax.html).
The POSIX 1003.2 standard at <https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html>.
The `pcre2pattern` or `pcrepattern` `man` page (found as part of <https://www.pcre.org/original/pcre.txt>), and details of Perl's own implementation at <https://perldoc.perl.org/perlre>.
r None
`conflicts` Search for Masked Objects on the Search Path
---------------------------------------------------------
### Description
`conflicts` reports on objects that exist with the same name in two or more places on the `<search>` path, usually because an object in the user's workspace or a package is masking a system object of the same name. This helps discover unintentional masking.
### Usage
```
conflicts(where = search(), detail = FALSE)
```
### Arguments
| | |
| --- | --- |
| `where` | A subset of the search path, by default the whole search path. |
| `detail` | If `TRUE`, give the masked or masking functions for all members of the search path. |
### Value
If `detail = FALSE`, a character vector of masked objects. If `detail = TRUE`, a list of character vectors giving the masked or masking objects in that member of the search path. Empty vectors are omitted.
### Examples
```
lm <- 1:3
conflicts(, TRUE)
## gives something like
# $.GlobalEnv
# [1] "lm"
#
# $package:base
# [1] "lm"
## Remove things from your "workspace" that mask others:
remove(list = conflicts(detail = TRUE)$.GlobalEnv)
```
r None
`polyroot` Find Zeros of a Real or Complex Polynomial
------------------------------------------------------
### Description
Find zeros of a real or complex polynomial.
### Usage
```
polyroot(z)
```
### Arguments
| | |
| --- | --- |
| `z` | the vector of polynomial coefficients in increasing order. |
### Details
A polynomial of degree *n - 1*,
*p(x) = z[1] + z[2] \* x + … + z[n] \* x^(n-1)*
is given by its coefficient vector `z[1:n]`. `polyroot` returns the *n-1* complex zeros of *p(x)* using the Jenkins-Traub algorithm.
If the coefficient vector `z` has zeroes for the highest powers, these are discarded.
There is no maximum degree, but numerical stability may be an issue for all but low-degree polynomials.
### Value
A complex vector of length *n - 1*, where *n* is the position of the largest non-zero element of `z`.
### Source
C translation by Ross Ihaka of Fortran code in the reference, with modifications by the R Core Team.
### References
Jenkins, M. A. and Traub, J. F. (1972). Algorithm 419: zeros of a complex polynomial. *Communications of the ACM*, **15**(2), 97–99. doi: [10.1145/361254.361262](https://doi.org/10.1145/361254.361262).
### See Also
`[uniroot](../../stats/html/uniroot)` for numerical root finding of arbitrary functions; `<complex>` and the `zero` example in the demos directory.
### Examples
```
polyroot(c(1, 2, 1))
round(polyroot(choose(8, 0:8)), 11) # guess what!
for (n1 in 1:4) print(polyroot(1:n1), digits = 4)
polyroot(c(1, 2, 1, 0, 0)) # same as the first
```
r None
`which.min` Where is the Min() or Max() or first TRUE or FALSE ?
-----------------------------------------------------------------
### Description
Determines the location, i.e., index of the (first) minimum or maximum of a numeric (or logical) vector.
### Usage
```
which.min(x)
which.max(x)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric (logical, integer or double) vector, or an **R** object for which the internal coercion to `<double>` works and whose `[min](extremes)` or `[max](extremes)` is searched for. |
### Value
Missing and `NaN` values are discarded.
an `<integer>` or, on 64-bit platforms when `<length>(x) =: n` is *>= 2^31*, an integer-valued `<double>`, of length 1 or 0 (iff `x` has no non-`NA`s), giving the index of the *first* minimum or maximum respectively of `x`.
If this extremum is unique (or empty), the results are the same as (but more efficient than) `which(x == min(x, na.rm = TRUE))` or `which(x == max(x, na.rm = TRUE))` respectively.
### Logical `x` – First `TRUE` or `FALSE`
For a `<logical>` vector `x` with both `FALSE` and `TRUE` values, `which.min(x)` and `which.max(x)` return the index of the first `FALSE` or `TRUE`, respectively, as `FALSE < TRUE`. However, `match(FALSE, x)` or `match(TRUE, x)` are typically *preferred*, as they do indicate mismatches.
### Author(s)
Martin Maechler
### See Also
`<which>`, `[max.col](maxcol)`, `[max](extremes)`, etc.
Use `[arrayInd](which)()`, if you need array/matrix indices instead of 1D vector ones.
`[which.is.max](../../nnet/html/which.is.max)` in package [nnet](https://CRAN.R-project.org/package=nnet) differs in breaking ties at random (and having a ‘fuzz’ in the definition of ties).
### Examples
```
x <- c(1:4, 0:5, 11)
which.min(x)
which.max(x)
## it *does* work with NA's present, by discarding them:
presidents[1:30]
range(presidents, na.rm = TRUE)
which.min(presidents) # 28
which.max(presidents) # 2
## Find the first occurrence, i.e. the first TRUE, if there is at least one:
x <- rpois(10000, lambda = 10); x[sample.int(50, 20)] <- NA
## where is the first value >= 20 ?
which.max(x >= 20)
## Also works for lists (which can be coerced to numeric vectors):
which.min(list(A = 7, pi = pi)) ## -> c(pi = 2L)
```
| programming_docs |
r None
`withVisible` Return both a Value and its Visibility
-----------------------------------------------------
### Description
This function evaluates an expression, returning a two-element list containing its value and a flag showing whether the value would automatically print.
### Usage
```
withVisible(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an expression to be evaluated. |
### Details
The argument (*not* an `<expression>` object, but rather an unevaluated function `<call>`) is evaluated in the caller's context.
This is a <primitive> function.
### Value
| | |
| --- | --- |
| `value` | The value of `x` after evaluation. |
| `visible` | logical; whether the value would auto-print. |
### See Also
`<invisible>`, `<eval>`; `[withAutoprint](source)()` calls `<source>()` which itself uses `withVisible()` in order to correctly “auto print”.
### Examples
```
x <- 1
withVisible(x <- 1) # *$visible is FALSE
x
withVisible(x) # *$visible is TRUE
# Wrap the call in evalq() for special handling
df <- data.frame(a = 1:5, b = 1:5)
evalq(withVisible(a + b), envir = df)
```
r None
`groupGeneric` S3 Group Generic Functions
------------------------------------------
### Description
Group generic methods can be defined for four pre-specified groups of functions, `Math`, `Ops`, `Summary` and `Complex`. (There are no objects of these names in base **R**, but there are in the methods package.)
A method defined for an individual member of the group takes precedence over a method defined for the group as a whole.
### Usage
```
## S3 methods for group generics have prototypes:
Math(x, ...)
Ops(e1, e2)
Complex(z)
Summary(..., na.rm = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x, z, e1, e2` | objects. |
| `...` | further arguments passed to methods. |
| `na.rm` | logical: should missing values be removed? |
### Details
There are four *groups* for which S3 methods can be written, namely the `"Math"`, `"Ops"`, `"Summary"` and `"Complex"` groups. These are not **R** objects in base **R**, but methods can be supplied for them and base **R** contains `<factor>`, `<data.frame>` and `<difftime>` methods for the first three groups. (There is also a `[ordered](factor)` method for `Ops`, `[POSIXt](datetimeclasses)` and `[Date](dates)` methods for `Math` and `Ops`, `[package\_version](numeric_version)` methods for `Ops` and `Summary`, as well as a `[ts](../../stats/html/ts)` method for `Ops` in package stats.)
1. Group `"Math"`:
* `abs`, `sign`, `sqrt`,
`floor`, `ceiling`, `trunc`,
`round`, `signif`
* `exp`, `log`, `expm1`, `log1p`,
`cos`, `sin`, `tan`,
`cospi`, `sinpi`, `tanpi`,
`acos`, `asin`, `atan`,
`cosh`, `sinh`, `tanh`,
`acosh`, `asinh`, `atanh`
* `lgamma`, `gamma`, `digamma`, `trigamma`
* `cumsum`, `cumprod`, `cummax`, `cummin`

Members of this group dispatch on `x`. Most members accept only one argument, but members `log`, `round` and `signif` accept one or two arguments, and `trunc` accepts one or more.
2. Group `"Ops"`:
* `"+"`, `"-"`, `"*"`, `"/"`, `"^"`, `"%%"`, `"%/%"`
* `"&"`, `"|"`, `"!"`
* `"=="`, `"!="`, `"<"`, `"<="`, `">="`, `">"`

This group contains both binary and unary operators (`+`, `-` and `!`): when a unary operator is encountered the `Ops` method is called with one argument and `e2` is missing.
The classes of both arguments are considered in dispatching any member of this group. For each argument its vector of classes is examined to see if there is a matching specific (preferred) or `Ops` method. If a method is found for just one argument or the same method is found for both, it is used. If different methods are found, there is a warning about ‘incompatible methods’: in that case or if no method is found for either argument the internal method is used.
Note that the `<data.frame>` methods for the comparison (`"Compare"`: `==`, `<`, ...) and logic (`"Logic"`: `&` `|` and `!`) operators return a logical `<matrix>` instead of a data frame, for convenience and back compatibility.
If the members of this group are called as functions, any argument names are removed to ensure that positional matching is always used.
3. Group `"Summary"`:
* `all`, `any`
* `sum`, `prod`
* `min`, `max`
* `range`

Members of this group dispatch on the first argument supplied.
Note that the `<data.frame>` methods for the `"Summary"` and `"Math"` groups require “numeric-alike” columns `x`, i.e., fulfilling
```
is.numeric(x) || is.logical(x) || is.complex(x)
```
4. Group `"Complex"`:
* `Arg`, `Conj`, `Im`, `Mod`, `Re`

Members of this group dispatch on `z`.
Note that a method will be used for one of these groups or one of its members *only* if it corresponds to a `"class"` attribute, as the internal code dispatches on `[oldClass](class)` and not on `<class>`. This is for efficiency: having to dispatch on, say, `Ops.integer` would be too slow.
The number of arguments supplied for primitive members of the `"Math"` group generic methods is not checked prior to dispatch.
There is no lazy evaluation of arguments for group-generic functions.
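The dispatch mechanism described above can be sketched with a single `Ops` method for a hypothetical `"metre"` class (the class name and behaviour are invented for illustration; only binary operators are handled, and `.Generic` is the dispatch variable discussed below):

```r
## one Ops group method covering all binary operators for class "metre"
Ops.metre <- function(e1, e2) {
  v <- get(.Generic)(unclass(e1), unclass(e2))  # compute on plain numbers
  if (.Generic %in% c("+", "-"))                # sums/differences keep the class
    class(v) <- "metre"
  v                                             # comparisons return plain results
}

x <- structure(5, class = "metre")
y <- structure(3, class = "metre")
class(x + y)    # "metre": '+' dispatched to Ops.metre
unclass(x + y)  # 8
x > y           # TRUE, a plain logical
```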
### Technical Details
These functions are all primitive and [internal generic](internalmethods).
The details of method dispatch and variables such as `.Generic` are discussed in the help for `[UseMethod](usemethod)`. There are a few small differences:
* For the operators of group `Ops`, the object `.Method` is a length-two character vector with elements the methods selected for the left and right arguments respectively. (If no method was selected, the corresponding element is `""`.)
* Object `.Group` records the group used for dispatch (if a specific method is used this is `""`).
### Note
Package methods does contain objects with these names, which it has re-used in confusingly similar (but different) ways. See the help for that package.
### References
Appendix A, *Classes and Methods* of
Chambers, J. M. and Hastie, T. J. eds (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole.
### See Also
`[methods](../../utils/html/methods)` for methods of non-internal generic functions.
[S4groupGeneric](../../methods/html/s4groupgeneric) for group generics for S4 methods.
### Examples
```
require(utils)
d.fr <- data.frame(x = 1:9, y = stats::rnorm(9))
class(1 + d.fr) == "data.frame" ##-- add to d.f. ...
methods("Math")
methods("Ops")
methods("Summary")
methods("Complex") # none in base R
```
r None
`levels` Levels Attributes
---------------------------
### Description
`levels` provides access to the levels attribute of a variable. The first form returns the value of the levels of its argument and the second sets the attribute.
### Usage
```
levels(x)
levels(x) <- value
```
### Arguments
| | |
| --- | --- |
| `x` | an object, for example a factor. |
| `value` | A valid value for `levels(x)`. For the default method, `NULL` or a character vector. For the `factor` method, a vector of character strings with length at least the number of levels of `x`, or a named list specifying how to rename the levels. |
### Details
Both the extractor and replacement forms are generic and new methods can be written for them. The most important method for the replacement function is that for `<factor>`s.
For the factor replacement method, a `NA` in `value` causes that level to be removed from the levels and the elements formerly with that level to be replaced by `NA`.
Note that for a factor, replacing the levels via `levels(x) <- value` is not the same as (and is preferred to) `attr(x, "levels") <- value`.
The replacement function is <primitive>.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<nlevels>`, `[relevel](../../stats/html/relevel)`, `[reorder](../../stats/html/reorder.factor)`.
### Examples
```
## assign individual levels
x <- gl(2, 4, 8)
levels(x)[1] <- "low"
levels(x)[2] <- "high"
x
## or as a group
y <- gl(2, 4, 8)
levels(y) <- c("low", "high")
y
## combine some levels
z <- gl(3, 2, 12, labels = c("apple", "salad", "orange"))
z
levels(z) <- c("fruit", "veg", "fruit")
z
## same, using a named list
z <- gl(3, 2, 12, labels = c("apple", "salad", "orange"))
z
levels(z) <- list("fruit" = c("apple","orange"),
"veg" = "salad")
z
## we can add levels this way:
f <- factor(c("a","b"))
levels(f) <- c("c", "a", "b")
f
f <- factor(c("a","b"))
levels(f) <- list(C = "C", A = "a", B = "b")
f
```
r None
`tracemem` Trace Copying of Objects
------------------------------------
### Description
This function marks an object so that a message is printed whenever the internal code copies the object. Such copying is a major cause of hard-to-predict memory use in R.
### Usage
```
tracemem(x)
untracemem(x)
retracemem(x, previous = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | An R object, not a function or environment or `NULL`. |
| `previous` | A value as returned by `tracemem` or `retracemem`. |
### Details
This functionality is optional, determined at compilation, because it makes R run a little more slowly even when no objects are being traced. `tracemem` and `untracemem` give errors when R is not compiled with memory profiling; `retracemem` does not (so it can be left in code during development).
It is enabled in the CRAN macOS and Windows builds of **R**.
When an object is traced any copying of the object by the C function `duplicate` produces a message to standard output, as do type coercion and copying when passing arguments to `.C` or `.Fortran`.
The message consists of the string `tracemem`, the identifying strings for the object being copied and the new object being created, and a stack trace showing where the duplication occurred. `retracemem()` is used to indicate that a variable should be considered a copy of a previous variable (e.g., after subscripting).
The messages can be turned off with `[tracingState](trace)`.
It is not possible to trace functions, as this would conflict with `<trace>` and it is not useful to trace `NULL`, environments, promises, weak references, or external pointer objects, as these are not duplicated.
These functions are <primitive>.
### Value
A character string for identifying the object in the trace output (an address in hex enclosed in angle brackets), or `NULL` (invisibly).
### See Also
`<capabilities>("profmem")` to see if this was enabled for this build of **R**.
`<trace>`, `[Rprofmem](../../utils/html/rprofmem)`
<https://developer.r-project.org/memory-profiling.html>
### Examples
```
## Not run:
a <- 1:10
tracemem(a)
## b and a share memory
b <- a
b[1] <- 1
untracemem(a)
## copying in lm: less than R <= 2.15.0
d <- stats::rnorm(10)
tracemem(d)
lm(d ~ a+log(b))
## f is not a copy and is not traced
f <- d[-1]
f+1
## indicate that f should be traced as a copy of d
retracemem(f, retracemem(d))
f+1
## End(Not run)
```
r None
`qraux` Reconstruct the Q, R, or X Matrices from a QR Object
-------------------------------------------------------------
### Description
Returns the original matrix from which the object was constructed or the components of the decomposition.
### Usage
```
qr.X(qr, complete = FALSE, ncol =)
qr.Q(qr, complete = FALSE, Dvec =)
qr.R(qr, complete = FALSE)
```
### Arguments
| | |
| --- | --- |
| `qr` | object representing a QR decomposition. This will typically have come from a previous call to `<qr>` or `[lsfit](../../stats/html/lsfit)`. |
| `complete` | logical expression of length 1. Indicates whether an arbitrary orthogonal completion of the **Q** or **X** matrices is to be made, or whether the **R** matrix is to be completed by binding zero-value rows beneath the square upper triangle. |
| `ncol` | integer in the range `1:nrow(qr$qr)`. The number of columns to be in the reconstructed **X**. The default when `complete` is `FALSE` is the first `min(ncol(X), nrow(X))` columns of the original **X** from which the qr object was constructed. The default when `complete` is `TRUE` is a square matrix with the original **X** in the first `ncol(X)` columns and an arbitrary orthogonal completion (unitary completion in the complex case) in the remaining columns. |
| `Dvec` | vector (not matrix) of diagonal values. Each column of the returned **Q** will be multiplied by the corresponding diagonal value. Defaults to all `1`s. |
### Value
`qr.X` returns **X**, the original matrix from which the qr object was constructed, provided `ncol(X) <= nrow(X)`. If `complete` is `TRUE` or the argument `ncol` is greater than `ncol(X)`, additional columns from an arbitrary orthogonal (unitary) completion of `X` are returned.
`qr.Q` returns part or all of **Q**, the order-nrow(X) orthogonal (unitary) transformation represented by `qr`. If `complete` is `TRUE`, **Q** has `nrow(X)` columns. If `complete` is `FALSE`, **Q** has `ncol(X)` columns. When `Dvec` is specified, each column of **Q** is multiplied by the corresponding value in `Dvec`.
Note that `qr.Q(qr, *)` is a special case of `[qr.qy](qr)(qr, y)` (with a “diagonal” `y`), and `qr.X(qr, *)` is basically `[qr.qy](qr)(qr, R)` (apart from pivoting and `dimnames` setting).
`qr.R` returns **R**. This may be pivoted, e.g., if `a <- qr(x)` then `x[, a$pivot]` = **QR**. The number of rows of **R** is either `nrow(X)` or `ncol(X)` (and may depend on whether `complete` is `TRUE` or `FALSE`).
### See Also
`<qr>`, `[qr.qy](qr)`.
### Examples
```
p <- ncol(x <- LifeCycleSavings[, -1]) # not the 'sr'
qrstr <- qr(x) # dim(x) == c(n,p)
qrstr $ rank # = 4 = p
Q <- qr.Q(qrstr) # dim(Q) == dim(x)
R <- qr.R(qrstr) # dim(R) == ncol(x)
X <- qr.X(qrstr) # X == x
range(X - as.matrix(x)) # ~ < 6e-12
## X == Q %*% R if there has been no pivoting, as here:
all.equal(unname(X),
unname(Q %*% R))
# example of pivoting
x <- cbind(int = 1,
b1 = rep(1:0, each = 3), b2 = rep(0:1, each = 3),
c1 = rep(c(1,0,0), 2), c2 = rep(c(0,1,0), 2), c3 = rep(c(0,0,1),2))
x # is singular, columns "b2" and "c3" are "extra"
a <- qr(x)
zapsmall(qr.R(a)) # columns are int b1 c1 c2 b2 c3
a$pivot
pivI <- sort.list(a$pivot) # the inverse permutation
all.equal (x, qr.Q(a) %*% qr.R(a)) # no, no
stopifnot(
all.equal(x[, a$pivot], qr.Q(a) %*% qr.R(a)), # TRUE
all.equal(x , qr.Q(a) %*% qr.R(a)[, pivI])) # TRUE too!
```
r None
`sets` Set Operations
----------------------
### Description
Performs **set** union, intersection, (asymmetric!) difference, equality and membership on two vectors.
### Usage
```
union(x, y)
intersect(x, y)
setdiff(x, y)
setequal(x, y)
is.element(el, set)
```
### Arguments
| | |
| --- | --- |
| `x, y, el, set` | vectors (of the same mode) containing a sequence of items (conceptually) with no duplicated values. |
### Details
Each of `union`, `intersect`, `setdiff` and `setequal` will discard any duplicated values in the arguments, and they apply `[as.vector](vector)` to their arguments (and so in particular coerce factors to character vectors).
`is.element(x, y)` is identical to `x %in% y`.
### Value
A vector of the same `<mode>` as `x` or `y` for `setdiff` and `intersect`, respectively, and of a common mode for `union`.
A logical scalar for `setequal` and a logical of the same length as `x` for `is.element`.
### See Also
`[%in%](match)`
‘[plotmath](../../grdevices/html/plotmath)’ for the use of `union` and `intersect` in plot annotation.
### Examples
```
(x <- c(sort(sample(1:20, 9)), NA))
(y <- c(sort(sample(3:23, 7)), NA))
union(x, y)
intersect(x, y)
setdiff(x, y)
setdiff(y, x)
setequal(x, y)
## True for all possible x & y :
setequal( union(x, y),
c(setdiff(x, y), intersect(x, y), setdiff(y, x)))
is.element(x, y) # length 10
is.element(y, x) # length 8
```
r None
`order` Ordering Permutation
-----------------------------
### Description
`order` returns a permutation which rearranges its first argument into ascending or descending order, breaking ties by further arguments. `sort.list` does the same, using only one argument.
See the examples for how to use these functions to sort data frames, etc.
### Usage
```
order(..., na.last = TRUE, decreasing = FALSE,
method = c("auto", "shell", "radix"))
sort.list(x, partial = NULL, na.last = TRUE, decreasing = FALSE,
method = c("auto", "shell", "quick", "radix"))
```
### Arguments
| | |
| --- | --- |
| `...` | a sequence of numeric, complex, character or logical vectors, all of the same length, or a classed **R** object. |
| `x` | an atomic vector for `method`s `"shell"` and `"quick"`. When `x` is a non-atomic **R** object, the default `"auto"` and `"radix"` methods may work if `order(x,..)` does. |
| `partial` | vector of indices for partial sorting. (Non-`NULL` values are not implemented.) |
| `decreasing` | logical. Should the sort order be increasing or decreasing? For the `"radix"` method, this can be a vector of length equal to the number of arguments in `...`. For the other methods, it must be length one. |
| `na.last` | for controlling the treatment of `NA`s. If `TRUE`, missing values in the data are put last; if `FALSE`, they are put first; if `NA`, they are removed (see ‘Note’.) |
| `method` | the method to be used: partial matches are allowed. The default (`"auto"`) implies `"radix"` for short numeric vectors, integer vectors, logical vectors and factors. Otherwise, it implies `"shell"`. For details of methods `"shell"`, `"quick"`, and `"radix"`, see the help for `<sort>`. |
### Details
In the case of ties in the first vector, values in the second are used to break the ties. If the values are still tied, values in the later arguments are used to break the tie (see the first example). The sort used is *stable* (except for `method = "quick"`), so any unresolved ties will be left in their original ordering.
Complex values are sorted first by the real part, then the imaginary part.
Except for method `"radix"`, the sort order for character vectors will depend on the collating sequence of the locale in use: see `[Comparison](comparison)`.
The `"shell"` method is generally the safest bet and is the default method, except for short factors, numeric vectors, integer vectors and logical vectors, where `"radix"` is assumed. Method `"radix"` stably sorts logical, numeric and character vectors in linear time. It outperforms the other methods, although there are caveats (see `<sort>`). Method `"quick"` for `sort.list` is only supported for numeric `x` with `na.last = NA`, is not stable, and is slower than `"radix"`.
`partial = NULL` is supported for compatibility with other implementations of S, but no other values are accepted and ordering is always complete.
For a classed **R** object, the sort order is taken from `<xtfrm>`: as its help page notes, this can be slow unless a suitable method has been defined or `[is.numeric](numeric)(x)` is true. For factors, this sorts on the internal codes, which is particularly appropriate for ordered factors.
### Value
An integer vector unless any of the inputs has *2^31* or more elements, when it is a double vector.
### Warning
In programmatic use it is unsafe to name the `...` arguments, as the names could match current or future control arguments such as `decreasing`. A sometimes-encountered unsafe practice is to call `do.call('order', df_obj)` where `df_obj` might be a data frame: copy `df_obj` and remove any names, for example using `<unname>`.
### Note
`sort.list` can get called by mistake as a method for `<sort>` with a list argument: it gives a suitable error message for list `x`.
There is a historical difference in behaviour for `na.last = NA`: `sort.list` removes the `NA`s and then computes the order amongst the remaining elements: `order` computes the order amongst the non-`NA` elements of the original vector. Thus
```
x[order(x, na.last = NA)]
zz <- x[!is.na(x)]; zz[sort.list(x, na.last = NA)]
```
both sort the non-`NA` values of `x`.
Prior to **R** 3.3.0 `method = "radix"` was only supported for integers of range less than 100,000.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Knuth, D. E. (1998) *The Art of Computer Programming, Volume 3: Sorting and Searching.* 2nd ed. Addison-Wesley.
### See Also
`<sort>`, `<rank>`, `<xtfrm>`.
### Examples
```
require(stats)
(ii <- order(x <- c(1,1,3:1,1:4,3), y <- c(9,9:1), z <- c(2,1:9)))
## 6 5 2 1 7 4 10 8 3 9
rbind(x, y, z)[,ii] # shows the reordering (ties via 2nd & 3rd arg)
## Suppose we wanted descending order on y.
## A simple solution for numeric 'y' is
rbind(x, y, z)[, order(x, -y, z)]
## More generally we can make use of xtfrm
cy <- as.character(y)
rbind(x, y, z)[, order(x, -xtfrm(cy), z)]
## The radix sort supports multiple 'decreasing' values:
rbind(x, y, z)[, order(x, cy, z, decreasing = c(FALSE, TRUE, FALSE),
method="radix")]
## Sorting data frames:
dd <- transform(data.frame(x, y, z),
z = factor(z, labels = LETTERS[9:1]))
## Either as above {for factor 'z' : using internal coding}:
dd[ order(x, -y, z), ]
## or along 1st column, ties along 2nd, ... *arbitrary* no.{columns}:
dd[ do.call(order, dd), ]
set.seed(1) # reproducible example:
d4 <- data.frame(x = round( rnorm(100)), y = round(10*runif(100)),
z = round( 8*rnorm(100)), u = round(50*runif(100)))
(d4s <- d4[ do.call(order, d4), ])
(i <- which(diff(d4s[, 3]) == 0))
# in 2 places, needed 3 cols to break ties:
d4s[ rbind(i, i+1), ]
## rearrange matched vectors so that the first is in ascending order
x <- c(5:1, 6:8, 12:9)
y <- (x - 5)^2
o <- order(x)
rbind(x[o], y[o])
## tests of na.last
a <- c(4, 3, 2, NA, 1)
b <- c(4, NA, 2, 7, 1)
z <- cbind(a, b)
(o <- order(a, b)); z[o, ]
(o <- order(a, b, na.last = FALSE)); z[o, ]
(o <- order(a, b, na.last = NA)); z[o, ]
## speed examples on an average laptop for long vectors:
## factor/small-valued integers:
x <- factor(sample(letters, 1e7, replace = TRUE))
system.time(o <- sort.list(x, method = "quick", na.last = NA)) # 0.1 sec
stopifnot(!is.unsorted(x[o]))
system.time(o <- sort.list(x, method = "radix")) # 0.05 sec, 2X faster
stopifnot(!is.unsorted(x[o]))
## large-valued integers:
xx <- sample(1:200000, 1e7, replace = TRUE)
system.time(o <- sort.list(xx, method = "quick", na.last = NA)) # 0.3 sec
system.time(o <- sort.list(xx, method = "radix")) # 0.2 sec
## character vectors:
xx <- sample(state.name, 1e6, replace = TRUE)
system.time(o <- sort.list(xx, method = "shell")) # 2 sec
system.time(o <- sort.list(xx, method = "radix")) # 0.007 sec, 300X faster
## double vectors:
xx <- rnorm(1e6)
system.time(o <- sort.list(xx, method = "shell")) # 0.4 sec
system.time(o <- sort.list(xx, method = "quick", na.last = NA)) # 0.1 sec
system.time(o <- sort.list(xx, method = "radix")) # 0.05 sec, 2X faster
```
r None
`format` Encode in a Common Format
-----------------------------------
### Description
Format an **R** object for pretty printing.
### Usage
```
format(x, ...)
## Default S3 method:
format(x, trim = FALSE, digits = NULL, nsmall = 0L,
justify = c("left", "right", "centre", "none"),
width = NULL, na.encode = TRUE, scientific = NA,
big.mark = "", big.interval = 3L,
small.mark = "", small.interval = 5L,
decimal.mark = getOption("OutDec"),
zero.print = NULL, drop0trailing = FALSE, ...)
## S3 method for class 'data.frame'
format(x, ..., justify = "none")
## S3 method for class 'factor'
format(x, ...)
## S3 method for class 'AsIs'
format(x, width = 12, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | any **R** object (conceptually); typically numeric. |
| `trim` | logical; if `FALSE`, logical, numeric and complex values are right-justified to a common width: if `TRUE` the leading blanks for justification are suppressed. |
| `digits` | how many significant digits are to be used for numeric and complex `x`. The default, `NULL`, uses `[getOption](options)("digits")`. This is a suggestion: enough decimal places will be used so that the smallest (in magnitude) number has this many significant digits, and also to satisfy `nsmall`. (For the interpretation for complex numbers see `[signif](round)`.) |
| `nsmall` | the minimum number of digits to the right of the decimal point in formatting real/complex numbers in non-scientific formats. Allowed values are `0 <= nsmall <= 20`. |
| `justify` | should a *character* vector be left-justified (the default), right-justified, centred or left alone. Can be abbreviated. |
| `width` | `default` method: the *minimum* field width or `NULL` or `0` for no restriction. `AsIs` method: the *maximum* field width for non-character objects. `NULL` corresponds to the default `12`. |
| `na.encode` | logical: should `NA` strings be encoded? Note this only applies to elements of character vectors, not to numerical, complex nor logical `NA`s, which are always encoded as `"NA"`. |
| | |
| --- | --- |
| `scientific` | Either a logical specifying whether elements of a real or complex vector should be encoded in scientific format, or an integer penalty (see `<options>("scipen")`). Missing values correspond to the current default penalty. |
| `...` | further arguments passed to or from other methods. |
| `big.mark, big.interval, small.mark, small.interval, decimal.mark, zero.print, drop0trailing` | used for prettying (longish) numerical and complex sequences. Passed to `[prettyNum](formatc)`: that help page explains the details. |
### Details
`format` is a generic function. Apart from the methods described here there are methods for dates (see `[format.Date](as.date)`), date-times (see `[format.POSIXct](strptime)`) and for other classes such as `format.octmode` and `format.dist`.
`format.data.frame` formats the data frame column by column, applying the appropriate method of `format` for each column. Methods for columns are often similar to `as.character` but offer more control. Matrix and data-frame columns will be converted to separate columns in the result, and character columns (normally all) will be given class `"[AsIs](asis)"`.
`format.factor` converts the factor to a character vector and then calls the default method (and so `justify` applies).
`format.AsIs` deals with columns of complicated objects that have been extracted from a data frame. Character objects and (atomic) matrices are passed to the default method (and so `width` does not apply). Otherwise it calls `[toString](tostring)` to convert the object to character (if a vector or list, element by element) and then right-justifies the result.
Justification for character vectors (and objects converted to character vectors by their methods) is done on display width (see `<nchar>`), taking double-width characters and the rendering of special characters (as escape sequences, including escaping backslash but not double quote: see `<print.default>`) into account. Thus the width is as displayed by `print(quote = FALSE)` and not as displayed by `<cat>`. Character strings are padded with blanks to the display width of the widest. (If `na.encode = FALSE` missing character strings are not included in the width computations and are not encoded.)
Numeric vectors are encoded with the minimum number of decimal places needed to display all the elements to at least the `digits` significant digits. However, if all the elements then have trailing zeroes, the number of decimal places is reduced until `nsmall` is reached or at least one element has a non-zero final digit; see also the argument documentation for `big.*`, `small.*` etc, above. See the note in `<print.default>` about `digits >= 16`.
Raw vectors are converted to their 2-digit hexadecimal representation by `[as.character](character)`.
`format.default(x)` now provides a “minimal” string when `[isS4](iss4)(x)` is true.
The internal code respects the option `[getOption](options)("OutDec")` for the ‘decimal mark’, so if this is set to something other than `"."` then it takes precedence over argument `decimal.mark`.
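A small sketch of that precedence rule (the values here are illustrative): once `"OutDec"` is set to something other than `"."`, it wins over an explicit `decimal.mark` argument.

```r
old <- options(OutDec = ",")            # comma as the decimal mark
f1 <- format(3.14)                      # "3,14" -- OutDec is respected
f2 <- format(3.14, decimal.mark = ".")  # still "3,14": the option takes precedence
options(old)                            # restore the previous setting
c(f1, f2)
```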
### Value
An object of similar structure to `x` containing character representations of the elements of the first argument `x` in a common format, and in the current locale's encoding.
For character, numeric, complex or factor `x`, dims and dimnames are preserved on matrices/arrays and names on vectors: no other attributes are copied.
If `x` is a list, the result is a character vector obtained by applying `format.default(x, ...)` to each element of the list (after `<unlist>`ing elements which are themselves lists), and then collapsing the result for each element with `paste(collapse = ", ")`. The defaults in this case are `trim = TRUE, justify = "none"` since one does not usually want alignment in the collapsed strings.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<format.info>` indicates how an atomic vector would be formatted.
`[formatC](formatc)`, `<paste>`, `[as.character](character)`, `<sprintf>`, `<print>`, `[prettyNum](formatc)`, `[toString](tostring)`, `[encodeString](encodestring)`.
### Examples
```
format(1:10)
format(1:10, trim = TRUE)
zz <- data.frame("(row names)"= c("aaaaa", "b"), check.names = FALSE)
format(zz)
format(zz, justify = "left")
## use of nsmall
format(13.7)
format(13.7, nsmall = 3)
format(c(6.0, 13.1), digits = 2)
format(c(6.0, 13.1), digits = 2, nsmall = 1)
## use of scientific
format(2^31-1)
format(2^31-1, scientific = TRUE)
## a list
z <- list(a = letters[1:3], b = (-pi+0i)^((-2:2)/2), c = c(1,10,100,1000),
d = c("a", "longer", "character", "string"),
q = quote( a + b ), e = expression(1+x))
## can you find the "2" small differences?
(f1 <- format(z, digits = 2))
(f2 <- format(z, digits = 2, justify = "left", trim = FALSE))
f1 == f2 ## 2 FALSE, 4 TRUE
## A "minimal" format() for S4 objects without their own format() method:
cc <- methods::getClassDef("standardGeneric")
format(cc) ## "<S4 class ......>"
```
r None
`tabulate` Tabulation for Vectors
----------------------------------
### Description
`tabulate` takes the integer-valued vector `bin` and counts the number of times each integer occurs in it.
### Usage
```
tabulate(bin, nbins = max(1, bin, na.rm = TRUE))
```
### Arguments
| | |
| --- | --- |
| `bin` | a numeric vector (of positive integers), or a factor. [Long vectors](longvectors) are supported. |
| `nbins` | the number of bins to be used. |
### Details
`tabulate` is the workhorse for the `<table>` function.
If `bin` is a factor, its internal integer representation is tabulated.
If the elements of `bin` are numeric but not integers, they are truncated by `[as.integer](integer)`.
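A short sketch of the truncation and factor behaviour described above:

```r
tabulate(c(1.2, 2.7, 2.9))          # truncated to 1, 2, 2 -> counts 1 2
tabulate(factor(c("b", "b", "d")))  # levels "b","d" coded 1,2 -> counts 2 1
```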
### Value
An integer valued `<integer>` or `<double>` vector (without names). There is a bin for each of the values `1, ..., nbins`; values outside that range and `NA`s are (silently) ignored.
On 64-bit platforms `bin` can have *2^31* or more elements (i.e., `length(bin) > .Machine$integer.max`), and hence a count could exceed the maximum integer. For this reason, the return value is of type double for such long `bin` vectors.
### See Also
`<table>`, `<factor>`.
### Examples
```
tabulate(c(2,3,5))
tabulate(c(2,3,3,5), nbins = 10)
tabulate(c(-2,0,2,3,3,5)) # -2 and 0 are ignored
tabulate(c(-2,0,2,3,3,5), nbins = 3)
tabulate(factor(letters[1:10]))
```
r None
`rawConnection` Raw Connections
--------------------------------
### Description
Input and output raw connections.
### Usage
```
rawConnection(object, open = "r")
rawConnectionValue(con)
```
### Arguments
| | |
| --- | --- |
| `object` | character or raw vector. A description of the connection. For an input this is an **R** raw vector object, and for an output connection the name for the connection. |
| `open` | character. Any of the standard connection open modes. |
| `con` | An output raw connection. |
### Details
An input raw connection is opened and the raw vector is copied at the time the connection object is created, and `close` destroys the copy.
An output raw connection is opened and creates an **R** raw vector internally. The raw vector can be retrieved *via* `rawConnectionValue`.
If a connection is open for both input and output, the initial raw vector supplied is copied when the connection is opened.
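For the input direction, a minimal sketch: the raw vector is snapshotted when the connection object is created, so later changes to the original vector are not seen through the connection.

```r
v <- as.raw(1:4)
con <- rawConnection(v)        # input connection: copies 'v' now
v[1] <- as.raw(255)            # modifying 'v' does not affect the copy
readBin(con, "integer", n = 4, size = 1)  # 1 2 3 4
close(con)
```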
### Value
For `rawConnection`, a connection object of class `"rawConnection"` which inherits from class `"connection"`.
For `rawConnectionValue`, a raw vector.
### Note
As output raw connections keep the internal raw vector up to date call-by-call, they are relatively expensive to use (although over-allocation is used), and it may be better to use an anonymous `[file](connections)()` connection to collect output.
On (rare) platforms where `vsnprintf` does not return the needed length of output there is a 100,000 character limit on the length of line for output connections: longer lines will be truncated with a warning.
### See Also
`<connections>`, `[showConnections](showconnections)`.
### Examples
```
zz <- rawConnection(raw(0), "r+") # start with empty raw vector
writeBin(LETTERS, zz)
seek(zz, 0)
readLines(zz) # raw vector has embedded nuls
seek(zz, 0)
writeBin(letters[1:3], zz)
rawConnectionValue(zz)
close(zz)
```
r None
`sequence` Create A Vector of Sequences
----------------------------------------
### Description
The default method for `sequence` generates the sequence `<seq>(from[i], by = by[i], length.out = nvec[i])` for each element `i` in the parallel (and recycled) vectors `from`, `by` and `nvec`. It then returns the result of concatenating those sequences.
### Usage
```
sequence(nvec, ...)
## Default S3 method:
sequence(nvec, from = 1L, by = 1L, ...)
```
### Arguments
| | |
| --- | --- |
| `nvec` | coerced to a non-negative integer vector each element of which specifies the length of a sequence. |
| `from` | coerced to an integer vector each element of which specifies the first element of a sequence. |
| `by` | coerced to an integer vector each element of which specifies the step size between elements of a sequence. |
| `...` | additional arguments passed to methods. |
### Details
Negative values are supported for `from` and `by`. `sequence(nvec, from, by=0L)` is equivalent to `rep(from, each=nvec)`.
This function was originally implemented in R with fewer features, but it has since become more flexible, and the default method is implemented in C for speed.
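The `by = 0L` identity mentioned above can be sketched as follows (the values are illustrative):

```r
sequence(c(3, 2), from = c(5L, 9L), by = 0L)  # 5 5 5 9 9
## the same repetition expressed with rep():
rep(c(5L, 9L), times = c(3, 2))               # 5 5 5 9 9
```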
### Author(s)
Of the current version, Michael Lawrence based on code from the S4Vectors Bioconductor package
### See Also
`<gl>`, `<seq>`, `<rep>`.
### Examples
```
sequence(c(3, 2)) # the concatenated sequences 1:3 and 1:2.
#> [1] 1 2 3 1 2
sequence(c(3, 2), from=2L)
#> [1] 2 3 4 2 3
sequence(c(3, 2), from=2L, by=2L)
#> [1] 2 4 6 2 4
sequence(c(3, 2), by=c(-1L, 1L))
#> [1] 1 0 -1 1 2
```
r None
`print.dataframe` Printing Data Frames
---------------------------------------
### Description
Print a data frame.
### Usage
```
## S3 method for class 'data.frame'
print(x, ..., digits = NULL,
quote = FALSE, right = TRUE, row.names = TRUE, max = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | object of class `data.frame`. |
| `...` | optional arguments to `print` methods. |
| `digits` | the minimum number of significant digits to be used: see `<print.default>`. |
| `quote` | logical, indicating whether or not entries should be printed with surrounding quotes. |
| `right` | logical, indicating whether or not strings should be right-aligned. The default is right-alignment. |
| `row.names` | logical (or character vector), indicating whether (or what) row names should be printed. |
| `max` | numeric or `NULL`, specifying the maximal number of entries to be printed. By default, when `NULL`, `[getOption](options)("max.print")` is used. |
### Details
This calls `<format>` which formats the data frame column-by-column, then converts to a character matrix and dispatches to the `print` method for matrices.
When `quote = TRUE` only the entries are quoted not the row names nor the column names.
### See Also
`<data.frame>`.
### Examples
```
(dd <- data.frame(x = 1:8, f = gl(2,4), ch = I(letters[1:8])))
# print() with defaults
print(dd, quote = TRUE, row.names = FALSE)
# suppresses row.names and quotes all entries
```
r None
`stop` Stop Function Execution
-------------------------------
### Description
`stop` stops execution of the current expression and executes an error action.
`geterrmessage` gives the last error message.
### Usage
```
stop(..., call. = TRUE, domain = NULL)
geterrmessage()
```
### Arguments
| | |
| --- | --- |
| `...` | zero or more objects which can be coerced to character (and which are pasted together with no separator) or a single condition object. |
| `call.` | logical, indicating if the call should become part of the error message. |
| `domain` | see `<gettext>`. If `NA`, messages will not be translated. |
### Details
The error action is controlled by error handlers established within the executing code and by the current default error handler set by `options(error=)`. The error is first signaled as if using `[signalCondition](conditions)()`. If there are no handlers or if all handlers return, then the error message is printed (if `options("show.error.messages")` is true) and the default error handler is used. The default behaviour (the `NULL` error-handler) in interactive use is to return to the top level prompt or the top level browser, and in non-interactive use to (effectively) call `[q](quit)("no", status = 1, runLast = FALSE)`. The default handler stores the error message in a buffer; it can be retrieved by `geterrmessage()`. It also stores a trace of the call stack that can be retrieved by `<traceback>()`.
Errors will be truncated to `getOption("warning.length")` characters, default 1000.
If a condition object is supplied it should be the only argument, and further arguments will be ignored, with a warning.
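A small sketch of signalling with a condition object and retrieving the last message afterwards:

```r
e <- simpleError("bad input")
msg <- tryCatch(stop(e), error = conditionMessage)  # handler sees the condition
msg                           # "bad input"
try(stop("oops"), silent = TRUE)
geterrmessage()               # contains "oops" and ends in "\n"
```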
### Value
`geterrmessage` gives the last error message, as a character string ending in `"\n"`.
### Note
Use `domain = NA` whenever `...` contain a result from `[gettextf](sprintf)()` as that is translated already.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<warning>`, `<try>` to catch errors and retry, and `<options>` for setting error handlers. `<stopifnot>` for validity testing. `tryCatch` and `withCallingHandlers` can be used to establish custom handlers while executing an expression.
`<gettext>` for the mechanisms for the automated translation of messages.
### Examples
```
iter <- 12
try(if(iter > 10) stop("too many iterations"))
tst1 <- function(...) stop("dummy error")
try(tst1(1:10, long, calling, expression))
tst2 <- function(...) stop("dummy error", call. = FALSE)
try(tst2(1:10, longcalling, expression, but.not.seen.in.Error))
```
r None
`Syntax` Operator Syntax and Precedence
----------------------------------------
### Description
Outlines **R** syntax and gives the precedence of operators.
### Details
The following unary and binary operators are defined. They are listed in precedence groups, from highest to lowest.
| | |
| --- | --- |
| `:: :::` | access variables in a namespace |
| `$ @` | component / slot extraction |
| `[ [[` | indexing |
| `^` | exponentiation (right to left) |
| `- +` | unary minus and plus |
| `:` | sequence operator |
| `%any%` | special operators (including `%%` and `%/%`) |
| `* /` | multiply, divide |
| `+ -` | (binary) add, subtract |
| `< > <= >= == !=` | ordering and comparison |
| `!` | negation |
| `& &&` | and |
| `| ||` | or |
| `~` | as in formulae |
| `-> ->>` | rightwards assignment |
| `<- <<-` | assignment (right to left) |
| `=` | assignment (right to left) |
| `?` | help (unary and binary) |
| |
Within an expression operators of equal precedence are evaluated from left to right except where indicated. (Note that `=` is not necessarily an operator.)
The binary operators `::`, `:::`, `$` and `@` require names or string constants on the right hand side, and the first two also require them on the left.
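Two of the precedence rules above that often surprise: `^` groups right to left, and unary minus binds less tightly than `^` (a small sketch):

```r
2^3^2        # right-to-left: 2^(3^2) = 512, not (2^3)^2 = 64
-2^2         # parsed as -(2^2) = -4
(-2)^2       #  4
1:3 * 2      # ':' binds tighter than '*': (1:3) * 2 = 2 4 6
```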
The links in the **See Also** section cover most other aspects of the basic syntax.
### Note
There are substantial precedence differences between **R** and S. In particular, in S `?` has the same precedence as (binary) `+ -` and `& && | ||` have equal precedence.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[Arithmetic](arithmetic)`, `[Comparison](comparison)`, `[Control](control)`, `[Extract](extract)`, `[Logic](logic)`, `[NumericConstants](numericconstants)`, `[Paren](paren)`, `[Quotes](quotes)`, `[Reserved](reserved)`.
The ‘R Language Definition’ manual.
### Examples
```
## Logical AND ("&&") has higher precedence than OR ("||"):
TRUE || TRUE && FALSE # is the same as
TRUE || (TRUE && FALSE) # and different from
(TRUE || TRUE) && FALSE
## Special operators have higher precedence than "!" (logical NOT).
## You can use this for %in% :
! 1:10 %in% c(2, 3, 5, 7) # same as !(1:10 %in% c(2, 3, 5, 7))
## but we strongly advise to use the "!( ... )" form in this case!
## '=' has lower precedence than '<-' ... so you should not mix them
## (and '<-' is considered better style anyway):
## Consequently, this gives a ("non-catchable") error
x <- y = 5 #-> Error in (x <- y) = 5 : ....
```
r None
`exists` Is an Object Defined?
-------------------------------
### Description
Look for an **R** object of the given name and possibly return it.
### Usage
```
exists(x, where = -1, envir = as.environment(where), frame, mode = "any",
inherits = TRUE)
get0(x, envir = pos.to.env(-1L), mode = "any", inherits = TRUE,
ifnotfound = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | a variable name (given as a character string or a symbol). |
| `where` | where to look for the object (see the details section); if omitted, the function will search as if the name of the object appeared unquoted in an expression. |
| `envir` | an alternative way to specify an environment to look in, but it is usually simpler to just use the `where` argument. |
| `frame` | a frame in the calling list. Equivalent to giving `where` as `sys.frame(frame)`. |
| `mode` | the mode or type of object sought: see the ‘Details’ section. |
| `inherits` | should the enclosing frames of the environment be searched? |
| `ifnotfound` | the return value of `get0(x, *)` when `x` does not exist. |
### Details
The `where` argument can specify the environment in which to look for the object in any of several ways: as an integer (the position in the `<search>` list); as the character string name of an element in the search list; or as an `<environment>` (including using `[sys.frame](sys.parent)` to access the currently active function calls). The `envir` argument is an alternative way to specify an environment, but is primarily there for back compatibility.
This function looks to see if the name `x` has a value bound to it in the specified environment. If `inherits` is `TRUE` and a value is not found for `x` in the specified environment, the enclosing frames of the environment are searched until the name `x` is encountered. See `<environment>` and the ‘R Language Definition’ manual for details about the structure of environments and their enclosures.
**Warning:** `inherits = TRUE` is the default behaviour for **R** but not for S.
If `mode` is specified then only objects of that type are sought. The `mode` may specify one of the collections `"numeric"` and `"function"` (see `<mode>`): any member of the collection will suffice. (This is true even if a member of a collection is specified, so for example `mode = "special"` will seek any type of function.)
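A sketch of the mode collections: `mode = "function"` matches closures, builtins and specials alike, while an object of a different type is not matched.

```r
exists("sum", mode = "function")   # TRUE: 'sum' is a (builtin) function
exists("pi",  mode = "function")   # FALSE: 'pi' exists, but as a numeric
exists("pi",  mode = "numeric")    # TRUE
```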
### Value
`exists():` Logical, true if and only if an object of the correct name and mode is found.
`get0():` The object—as from `<get>(x, *)`— if `exists(x, *)` is true, otherwise `ifnotfound`.
### Note
With `get0()`, instead of the easy to read but somewhat inefficient
```
if (exists(myVarName, envir = myEnvir)) {
r <- get(myVarName, envir = myEnvir)
## ... deal with r ...
}
```
you now can use the more efficient (and slightly harder to read)
```
if (!is.null(r <- get0(myVarName, envir = myEnvir))) {
## ... deal with r ...
}
```
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<get>` and `[hasName](../../utils/html/hasname)`. For quite a different kind of “existence” checking, namely if function arguments were specified, `<missing>`; and for yet a different kind, namely if a file exists, `[file.exists](files)`.
### Examples
```
## Define a substitute function if necessary:
if(!exists("some.fun", mode = "function"))
some.fun <- function(x) { cat("some.fun(x)\n"); x }
search()
exists("ls", 2) # true even though ls is in pos = 3
exists("ls", 2, inherits = FALSE) # false
## These are true (in most circumstances):
identical(ls, get0("ls"))
identical(NULL, get0(".foo.bar.")) # default ifnotfound = NULL (!)
```
r None
`file.info` Extract File Information
-------------------------------------
### Description
Utility function to extract information about files on the user's file systems.
### Usage
```
file.info(..., extra_cols = TRUE)
file.mode(...)
file.mtime(...)
file.size(...)
```
### Arguments
| | |
| --- | --- |
| `...` | character vectors containing file paths. Tilde-expansion is done: see `<path.expand>`. |
| `extra_cols` | Logical: return all cols rather than just the first six. |
### Details
What constitutes a ‘file’ is OS-dependent but includes directories. (However, directory names must not include a trailing backslash or slash on Windows.) See also the section in the help for `[file.exists](files)` on case-insensitive file systems.
The file ‘mode’ follows POSIX conventions, giving three octal digits summarizing the permissions for the file owner, the owner's group and for anyone respectively. Each digit is the logical *or* of read (4), write (2) and execute/search (1) permissions.
See <files> for how file paths with marked encodings are interpreted.
On most systems symbolic links are followed, so information is given about the file to which the link points rather than about the link.
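A minimal sketch of reading the octal mode digits (the exact mode depends on the platform and the current umask, so the values in the comments are only typical):

```r
f <- tempfile()
file.create(f)
m <- file.mode(f)                # class "octmode", printed in octal, e.g. 644
as.character(m)                  # octal string such as "644"
## first digit = owner permissions: logical OR of read (4), write (2), execute (1)
owner <- as.integer(substr(as.character(m), 1, 1))
bitwAnd(owner, 4L) == 4L         # owner-readable on typical systems
unlink(f)
```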
### Value
For `file.info`, data frame with row names the file names and columns
| | |
| --- | --- |
| `size` | double: File size in bytes. |
| `isdir` | logical: Is the file a directory? |
| `mode` | integer of class `"octmode"`. The file permissions, printed in octal, for example `644`. |
| `mtime, ctime, atime` | object of class `"POSIXct"`: file modification, ‘last status change’ and last access times. |
| `uid` | integer: the user ID of the file's owner. |
| `gid` | integer: the group ID of the file's group. |
| `uname` | character: `uid` interpreted as a user name. |
| `grname` | character: `gid` interpreted as a group name. |
Unknown user and group names will be `NA`.
If `extra_cols` is false, only the first six columns are returned: as these can all be found from a single C system call this can be faster. (However, properly configured systems will use a ‘name service cache daemon’ to speed up the name lookups.)
Entries for non-existent or non-readable files will be `NA`. The `uid`, `gid`, `uname` and `grname` columns may not be supplied on a non-POSIX Unix-alike system, and will not be on Windows.
What is meant by the three file times depends on the OS and file system. On Windows native file systems `ctime` is the file creation time (something which is not recorded on most Unix-alike file systems). What is meant by ‘file access’ and hence the ‘last access time’ is system-dependent.
The resolution of the file times depends on both the OS and the type of the file system. Modern file systems typically record times to an accuracy of a microsecond or better: notable exceptions are HFS+ on macOS (recorded in seconds) and modification time on older FAT systems (recorded in increments of 2 seconds). Note that `"POSIXct"` times are by default printed in whole seconds: to change that see `[strftime](strptime)`.
`file.mode`, `file.mtime` and `file.size` are convenience wrappers returning just one of the columns.
### Note
Some (now old) systems allow files of more than 2Gb to be created but not accessed by the `stat` system call. Such files may show up as non-readable (and very likely not be readable by any of **R**'s input functions).
### See Also
`[Sys.readlink](sys.readlink)` to find out about symbolic links, `<files>`, `<file.access>`, `<list.files>`, and `[DateTimeClasses](datetimeclasses)` for the date formats.
`[Sys.chmod](files2)` to change permissions.
### Examples
```
ncol(finf <- file.info(dir())) # at least six
finf # the whole list
## Those that are more than 100 days old :
finf <- file.info(dir(), extra_cols = FALSE)
finf[difftime(Sys.time(), finf[,"mtime"], units = "days") > 100 , 1:4]
file.info("no-such-file-exists")
```
r None
`Arithmetic` Arithmetic Operators
----------------------------------
### Description
These unary and binary operators perform arithmetic on numeric or complex vectors (or objects which can be coerced to them).
### Usage
```
+ x
- x
x + y
x - y
x * y
x / y
x ^ y
x %% y
x %/% y
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric or complex vectors or objects which can be coerced to such, or other objects for which methods have been written. |
### Details
The unary and binary arithmetic operators are generic functions: methods can be written for them individually or via the `[Ops](groupgeneric)` group generic function. (See `[Ops](groupgeneric)` for how dispatch is computed.)
If applied to arrays the result will be an array if this is sensible (for example it will not if the recycling rule has been invoked).
Logical vectors will be coerced to integer or numeric vectors, `FALSE` having value zero and `TRUE` having value one.
`1 ^ y` and `y ^ 0` are `1`, *always*. `x ^ y` should also give the proper limit result when either (numeric) argument is [infinite](is.finite) (one of `Inf` or `-Inf`).
Objects such as arrays or time-series can be operated on this way provided they are conformable.
For double arguments, `%%` can be subject to catastrophic loss of accuracy if `x` is much larger than `y`, and a warning is given if this is detected.
`%%` and `x %/% y` can be used for non-integer `y`, e.g. `1 %/% 0.2`, but the results are subject to representation error and so may be platform-dependent. Because the IEC 60559 representation of `0.2` is a binary fraction slightly larger than `0.2`, the answer to `1 %/% 0.2` should be `4` but most platforms give `5`.
Users are sometimes surprised by the value returned, for example why `(-8)^(1/3)` is `NaN`. For <double> inputs, **R** makes use of IEC 60559 arithmetic on all platforms, together with the C system function pow for the `^` operator. The relevant standards define the result in many corner cases. In particular, the result in the example above is mandated by the C99 standard. On many Unix-alike systems the command `man pow` gives details of the values in a large number of corner cases.
Arithmetic on type <double> in **R** is supposed to be done in ‘round to nearest, ties to even’ mode, but this does depend on the compiler and FPU being set up correctly.
### Value
Unary `+` and unary `-` return a numeric or complex vector. All attributes (including class) are preserved if there is no coercion: logical `x` is coerced to integer and names, dims and dimnames are preserved.
The binary operators return vectors containing the result of the element by element operations. If involving a zero-length vector the result has length zero. Otherwise, the elements of shorter vectors are recycled as necessary (with a `<warning>` when they are recycled only *fractionally*). The operators are `+` for addition, `-` for subtraction, `*` for multiplication, `/` for division and `^` for exponentiation.
`%%` indicates `x mod y` (“x modulo y”) and `%/%` indicates integer division. It is guaranteed that
`x == (x %% y) + y * (x %/% y)`
(up to rounding error)
unless `y == 0` where the result of `%%` is `[NA\_integer\_](na)` or `[NaN](is.finite)` (depending on the `<typeof>` of the arguments) or for some non-[finite](is.finite) arguments, e.g., when the RHS of the identity above amounts to `Inf - Inf`.
If either argument is complex the result will be complex, otherwise if one or both arguments are numeric, the result will be numeric. If both arguments are of type <integer>, the type of the result of `/` and `^` is <numeric> and for the other operators it is integer (with overflow, which occurs at *+/- (2^31 - 1)*, returned as `NA_integer_` with a warning).
The rules for determining the attributes of the result are rather complicated. Most attributes are taken from the longer argument. Names will be copied from the first if it is the same length as the answer, otherwise from the second if that is. If the arguments are the same length, attributes will be copied from both, with those of the first argument taking precedence when the same attribute is present in both arguments. For time series, these operations are allowed only if the series are compatible, when the class and `[tsp](../../stats/html/tsp)` attribute of whichever is a time series (the same, if both are) are used. For arrays (and an array result) the dimensions and dimnames are taken from first argument if it is an array, otherwise the second.
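The `%%`/`%/%` identity quoted above can be checked directly (a sketch; note that the sign of `x %% y` follows `y`):

```r
x <- c(7, -7, 7.5)
y <- c(3,  3,  -2)
x %%  y                                   #  1  2 -0.5  (sign follows y)
x %/% y                                   #  2 -3 -4
all.equal(x, (x %% y) + y * (x %/% y))    # TRUE, up to rounding error
```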
### S4 methods
These operators are members of the S4 `[Arith](../../methods/html/s4groupgeneric)` group generic, and so methods can be written for them individually as well as for the group generic (or the `Ops` group generic), with arguments `c(e1, e2)` (with `e2` missing for a unary operator).
### Implementation limits
**R** is dependent on OS services (and they on FPUs) for floating-point arithmetic. On all current **R** platforms IEC 60559 (also known as IEEE 754) arithmetic is used, but some things in those standards are optional. In particular, the support for *denormal* aka *subnormal* numbers (those outside the range given by `[.Machine](zmachine)`) may differ between platforms and even between calculations on a single platform.
Another potential issue is signed zeroes: on IEC 60559 platforms there are two zeroes with internal representations differing by sign. Where possible **R** treats them as the same, but for example direct output from C code often does not do so and may output -0.0 (and on Windows whether it does so or not depends on the version of Windows). One place in **R** where the difference might be seen is in division by zero: `1/x` is `Inf` or `-Inf` depending on the sign of zero `x`. Another place is `<identical>(0, -0, num.eq = FALSE)`.
### Note
All logical operations involving a zero-length vector have a zero-length result.
The binary operators are sometimes called as functions as e.g. ``&`(x, y)`: see the description of how argument-matching is done in `[Ops](groupgeneric)`.
`**` is translated in the parser to `^`, but this was undocumented for many years. It appears as an index entry in Becker *et al* (1988), pointing to the help for `Deprecated` but is not actually mentioned on that page. Even though it had been deprecated in S for 20 years, it was still accepted in **R** in 2008.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
D. Goldberg (1991). What Every Computer Scientist Should Know about Floating-Point Arithmetic. *ACM Computing Surveys*, **23**(1), 5–48. doi: [10.1145/103162.103163](https://doi.org/10.1145/103162.103163).
Also available at <https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html>.
For the IEC 60559 (aka IEEE 754) standard: <https://www.iso.org/standard/57469.html> and <https://en.wikipedia.org/wiki/IEEE_754>.
### See Also
`[sqrt](mathfun)` for miscellaneous and `[Special](special)` for special mathematical functions.
`[Syntax](syntax)` for operator precedence.
`[%*%](matmult)` for matrix multiplication.
### Examples
```
x <- -1:12
x + 1
2 * x + 3
x %% 2 #-- is periodic
x %/% 5
x %% Inf # now is defined by limit (gave NaN in earlier versions of R)
```
`builtins` Returns the Names of All Built-in Objects
-----------------------------------------------------
### Description
Return the names of all the built-in objects. These are fetched directly from the symbol table of the **R** interpreter.
### Usage
```
builtins(internal = FALSE)
```
### Arguments
| | |
| --- | --- |
| `internal` | a logical indicating whether only ‘internal’ functions (which can be called via `[.Internal](internal)`) should be returned. |
### Details
`builtins()` returns an unsorted list of the objects in the symbol table, that is all the objects in the base environment. These are the built-in objects plus any that have been added subsequently when the base package was loaded. It is less confusing to use `ls(baseenv(), all.names = TRUE)`.
`builtins(TRUE)` returns an unsorted list of the names of internal functions, that is those which can be accessed as `.Internal(foo(args ...))` for foo in the list.
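For example (the exact counts vary between **R** versions, so only relative sizes are checked here):

```
## The base symbol table contains the usual base functions ...
b <- builtins()
"sum" %in% b                         # TRUE
## ... and the .Internal names form a smaller, separate list:
length(builtins(TRUE)) < length(b)   # TRUE
## compare with the (sorted) listing of the base environment:
setdiff(ls(baseenv(), all.names = TRUE), b)
```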
### Value
A character vector.
`integer` Integer Vectors
--------------------------
### Description
Creates or tests for objects of type `"integer"`.
### Usage
```
integer(length = 0)
as.integer(x, ...)
is.integer(x)
```
### Arguments
| | |
| --- | --- |
| `length` | A non-negative integer specifying the desired length. Double values will be coerced to integer: supplying an argument of length other than one is an error. |
| `x` | object to be coerced or tested. |
| `...` | further arguments passed to or from other methods. |
### Details
Integer vectors exist so that data can be passed to C or Fortran code which expects them, and so that (small) integer data can be represented exactly and compactly.
Note that current implementations of **R** use 32-bit integers for integer vectors, so the range of representable integers is restricted to about *±2×10^9*: `<double>`s can hold much larger integers exactly.
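The limit and the behaviour on overflow can be seen directly:

```
## The 32-bit integer range, and coercion beyond it:
.Machine$integer.max              # 2147483647, i.e. 2^31 - 1
as.integer(.Machine$integer.max)  # representable
as.integer(2^31)                  # NA, with a warning: out of integer range
2^31                              # a double holds this exactly
```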
### Value
`integer` creates an integer vector of the specified length. Each element of the vector is equal to `0`.
`as.integer` attempts to coerce its argument to be of integer type. The answer will be `NA` unless the coercion succeeds. Real values larger in modulus than the largest integer are coerced to `NA` (unlike S which gives the most extreme integer of the same sign). Non-integral numeric values are truncated towards zero (i.e., `as.integer(x)` equals `[trunc](round)(x)` there), and imaginary parts of complex numbers are discarded (with a warning). Character strings containing optional whitespace followed by either a decimal representation or a hexadecimal representation (starting with `0x` or `0X`) can be converted, as well as any allowed by the platform for real numbers. Like `[as.vector](vector)` it strips attributes including names. (To ensure that an object `x` is of integer type without stripping attributes, use `[storage.mode](mode)(x) <- "integer"`.)
`is.integer` returns `TRUE` or `FALSE` depending on whether its argument is of integer [type](typeof) or not, unless it is a factor when it returns `FALSE`.
### Note
`is.integer(x)` does **not** test if `x` contains integer numbers! For that, use `<round>`, as in the function `is.wholenumber(x)` in the examples.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<numeric>`, `[storage.mode](mode)`.
`<round>` (and `ceiling` and `floor` on that help page) to convert to integral values.
### Examples
```
## as.integer() truncates:
x <- pi * c(-1:1, 10)
as.integer(x)
is.integer(1) # is FALSE !
is.wholenumber <-
function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
is.wholenumber(1) # is TRUE
(x <- seq(1, 5, by = 0.5) )
is.wholenumber( x ) #--> TRUE FALSE TRUE ...
```
`Vectorize` Vectorize a Scalar Function
----------------------------------------
### Description
`Vectorize` creates a function wrapper that vectorizes the action of its argument `FUN`.
### Usage
```
Vectorize(FUN, vectorize.args = arg.names, SIMPLIFY = TRUE,
USE.NAMES = TRUE)
```
### Arguments
| | |
| --- | --- |
| `FUN` | function to apply, found via `<match.fun>`. |
| `vectorize.args` | a character vector of arguments which should be vectorized. Defaults to all arguments of `FUN`. |
| `SIMPLIFY` | logical or character string; attempt to reduce the result to a vector, matrix or higher dimensional array; see the `simplify` argument of `[sapply](lapply)`. |
| `USE.NAMES` | logical; use names if the first ... argument has names, or if it is a character vector, use that character vector as the names. |
### Details
The arguments named in the `vectorize.args` argument to `Vectorize` are the arguments passed in the `...` list to `<mapply>`. Only those that are actually passed will be vectorized; default values will not. See the examples.
`Vectorize` cannot be used with primitive functions as they do not have a value for `<formals>`.
It also cannot be used with functions that have arguments named `FUN`, `vectorize.args`, `SIMPLIFY` or `USE.NAMES`, as they will interfere with the `Vectorize` arguments. See the `combn` example below for a workaround.
### Value
A function with the same arguments as `FUN`, wrapping a call to `<mapply>`.
### Examples
```
# We use rep.int as rep is primitive
vrep <- Vectorize(rep.int)
vrep(1:4, 4:1)
vrep(times = 1:4, x = 4:1)
vrep <- Vectorize(rep.int, "times")
vrep(times = 1:4, x = 42)
f <- function(x = 1:3, y) c(x, y)
vf <- Vectorize(f, SIMPLIFY = FALSE)
f(1:3, 1:3)
vf(1:3, 1:3)
vf(y = 1:3) # Only vectorizes y, not x
# Nonlinear regression contour plot, based on nls() example
require(graphics)
SS <- function(Vm, K, resp, conc) {
pred <- (Vm * conc)/(K + conc)
sum((resp - pred)^2 / pred)
}
vSS <- Vectorize(SS, c("Vm", "K"))
Treated <- subset(Puromycin, state == "treated")
Vm <- seq(140, 310, length.out = 50)
K <- seq(0, 0.15, length.out = 40)
SSvals <- outer(Vm, K, vSS, Treated$rate, Treated$conc)
contour(Vm, K, SSvals, levels = (1:10)^2, xlab = "Vm", ylab = "K")
# combn() has an argument named FUN
combnV <- Vectorize(function(x, m, FUNV = NULL) combn(x, m, FUN = FUNV),
vectorize.args = c("x", "m"))
combnV(4, 1:4)
combnV(4, 1:4, sum)
```
`memlimits` Query and Set Heap Size Limits
-------------------------------------------
### Description
Query and set the maximal size of the vector heap and the maximal number of heap nodes for the current **R** process.
### Usage
```
mem.maxVSize(vsize = 0)
mem.maxNSize(nsize = 0)
```
### Arguments
| | |
| --- | --- |
| `vsize` | numeric; new size limit in Mb. |
| `nsize` | numeric; new maximal node number. |
### Details
New limits lower than current usage are ignored. Specifying a size of `Inf` sets the limit to the maximal possible value for the platform.
The default maximal values are unlimited on most platforms, but can be adjusted using environment variables as described in `[Memory](memory)`. On macOS a lower default vector heap limit is used to protect against the **R** process being killed when macOS over-commits memory.
Adjusting the maximal number of nodes is rarely necessary. Adjusting the vector heap size limit can be useful on macOS in particular but should be done with caution.
### Value
The current or new value, in Mb for `mem.maxVSize`. `Inf` is returned if the current value is unlimited.
### See Also
`[Memory](memory)`.
`environment` Environment Access
---------------------------------
### Description
Get, set, test for and create environments.
### Usage
```
environment(fun = NULL)
environment(fun) <- value
is.environment(x)
.GlobalEnv
globalenv()
.BaseNamespaceEnv
emptyenv()
baseenv()
new.env(hash = TRUE, parent = parent.frame(), size = 29L)
parent.env(env)
parent.env(env) <- value
environmentName(env)
env.profile(env)
```
### Arguments
| | |
| --- | --- |
| `fun` | a `<function>`, a `[formula](../../stats/html/formula)`, or `NULL`, which is the default. |
| `value` | an environment to associate with the function |
| `x` | an arbitrary **R** object. |
| `hash` | a logical, if `TRUE` the environment will use a hash table. |
| `parent` | an environment to be used as the enclosure of the environment created. |
| `env` | an environment |
| `size` | an integer specifying the initial size for a hashed environment. An internal default value will be used if `size` is `NA` or zero. This argument is ignored if `hash` is `FALSE`. |
### Details
Environments consist of a *frame*, or collection of named objects, and a pointer to an *enclosing environment*. The most common example is the frame of variables local to a function call; its *enclosure* is the environment where the function was defined (unless changed subsequently). The enclosing environment is distinguished from the *parent frame*: the latter (returned by `[parent.frame](sys.parent)`) refers to the environment of the caller of a function. Since confusion is so easy, it is best never to use ‘parent’ in connection with an environment (despite the presence of the function `parent.env`).
When `<get>` or `<exists>` search an environment with the default `inherits = TRUE`, they look for the variable in the frame, then in the enclosing frame, and so on.
The global environment `.GlobalEnv`, more often known as the user's workspace, is the first item on the search path. It can also be accessed by `globalenv()`. On the search path, each item's enclosure is the next item.
The object `.BaseNamespaceEnv` is the namespace environment for the base package. The environment of the base package itself is available as `baseenv()`.
If one follows the chain of enclosures found by repeatedly calling `parent.env` from any environment, eventually one reaches the empty environment `emptyenv()`, into which nothing may be assigned.
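This chain can be walked explicitly; starting from the global environment it passes through the attached packages and the base package before reaching the empty environment:

```
## Follow enclosures from the global environment to emptyenv()
e <- globalenv()
while (!identical(e, emptyenv())) {
  cat(environmentName(e), "\n")
  e <- parent.env(e)
}
```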
The replacement function `parent.env<-` is extremely dangerous as it can be used to destructively change environments in ways that violate assumptions made by the internal C code. It may be removed in the near future.
The replacement form of `environment` and the functions `is.environment`, `baseenv`, `emptyenv` and `globalenv` are <primitive> functions.
System environments, such as the base, global and empty environments, have names as do the package and namespace environments and those generated by `attach()`. Other environments can be named by giving a `"name"` attribute, but this needs to be done with care as environments have unusual copying semantics.
### Value
If `fun` is a function or a formula then `environment(fun)` returns the environment associated with that function or formula. If `fun` is `NULL` then the current evaluation environment is returned.
The replacement form sets the environment of the function or formula `fun` to the `value` given.
`is.environment(obj)` returns `TRUE` if and only if `obj` is an `environment`.
`new.env` returns a new (empty) environment with (by default) enclosure the parent frame.
`parent.env` returns the enclosing environment of its argument.
`parent.env<-` sets the enclosing environment of its first argument.
`environmentName` returns a character string, the name given when the environment is printed, or `""` if it is not a named environment.
`env.profile` returns a list with the following components: `size` the number of chains that can be stored in the hash table, `nchains` the number of non-empty chains in the table (as reported by `HASHPRI`), and `counts` an integer vector giving the length of each chain (zero for empty chains). This function is intended to assess the performance of hashed environments. When `env` is a non-hashed environment, `NULL` is returned.
### See Also
For the performance implications of hashing or not, see <https://en.wikipedia.org/wiki/Hash_table>.
The `envir` argument of `<eval>`, `<get>`, and `<exists>`.
`<ls>` may be used to view the objects in an environment, and hence `[ls.str](../../utils/html/ls_str)` may be useful for an overview.
`<sys.source>` can be used to populate an environment.
### Examples
```
f <- function() "top level function"
##-- all three give the same:
environment()
environment(f)
.GlobalEnv
ls(envir = environment(stats::approxfun(1:2, 1:2, method = "const")))
is.environment(.GlobalEnv) # TRUE
e1 <- new.env(parent = baseenv()) # this one has enclosure package:base.
e2 <- new.env(parent = e1)
assign("a", 3, envir = e1)
ls(e1)
ls(e2)
exists("a", envir = e2) # this succeeds by inheritance
exists("a", envir = e2, inherits = FALSE)
exists("+", envir = e2) # this succeeds by inheritance
eh <- new.env(hash = TRUE, size = NA)
with(env.profile(eh), stopifnot(size == length(counts)))
```
`by` Apply a Function to a Data Frame Split by Factors
-------------------------------------------------------
### Description
Function `by` is an object-oriented wrapper for `<tapply>` applied to data frames.
### Usage
```
by(data, INDICES, FUN, ..., simplify = TRUE)
```
### Arguments
| | |
| --- | --- |
| `data` | an **R** object, normally a data frame, possibly a matrix. |
| `INDICES` | a factor or a list of factors, each of length `nrow(data)`. |
| `FUN` | a function to be applied to (usually data-frame) subsets of `data`. |
| `...` | further arguments to `FUN`. |
| `simplify` | logical: see `<tapply>`. |
### Details
A data frame is split by row into data frames subsetted by the values of one or more factors, and function `FUN` is applied to each subset in turn.
For the default method, an object with dimensions (e.g., a matrix) is coerced to a data frame and the data frame method applied. Other objects are also coerced to a data frame, but `FUN` is applied separately to (subsets of) each column of the data frame.
### Value
An object of class `"by"`, giving the results for each subset. This is always a list if `simplify` is false, otherwise a list or array (see `<tapply>`).
### See Also
`<tapply>`, `[simplify2array](lapply)`. `[ave](../../stats/html/ave)` also applies a function block-wise.
### Examples
```
require(stats)
by(warpbreaks[, 1:2], warpbreaks[,"tension"], summary)
by(warpbreaks[, 1], warpbreaks[, -1], summary)
by(warpbreaks, warpbreaks[,"tension"],
function(x) lm(breaks ~ wool, data = x))
## now suppose we want to extract the coefficients by group
tmp <- with(warpbreaks,
by(warpbreaks, tension,
function(x) lm(breaks ~ wool, data = x)))
sapply(tmp, coef)
```
`NumericConstants` Numeric Constants
-------------------------------------
### Description
How **R** parses numeric constants.
### Details
**R** parses numeric constants in its input in a very similar way to C99 floating-point constants.
`[Inf](is.finite)` and `[NaN](is.finite)` are numeric constants (with `<typeof>(.) "double"`). In text input (e.g., in `<scan>` and `[as.double](double)`), these are recognized ignoring case as is `infinity` as an alternative to `Inf`. `[NA\_real\_](na)` and `[NA\_integer\_](na)` are constants of types `"double"` and `"integer"` representing missing values. All other numeric constants start with a digit or period and are either a decimal or hexadecimal constant optionally followed by `L`.
Hexadecimal constants start with `0x` or `0X` followed by a nonempty sequence from `0-9 a-f A-F .` which is interpreted as a hexadecimal number, optionally followed by a binary exponent. A binary exponent consists of a `P` or `p` followed by an optional plus or minus sign followed by a non-empty sequence of (decimal) digits, and indicates multiplication by a power of two. Thus `0x123p456` is *291 \* 2^456*.
Decimal constants consist of a nonempty sequence of digits possibly containing a period (the decimal point), optionally followed by a decimal exponent. A decimal exponent consists of an `E` or `e` followed by an optional plus or minus sign followed by a non-empty sequence of digits, and indicates multiplication by a power of ten.
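The two kinds of exponent scale by different bases, which the following constants illustrate:

```
## Decimal exponents multiply by powers of ten,
## binary exponents (on hexadecimal constants) by powers of two:
1.5e3 == 1500          # TRUE
0x1p-2 == 0.25         # TRUE: 1 * 2^-2
0x123p4 == 291 * 2^4   # TRUE: hex 123 is decimal 291
```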
Values which are too large or too small to be representable will overflow to `Inf` or underflow to `0.0`.
A numeric constant immediately followed by `i` is regarded as an imaginary <complex> number.
A numeric constant immediately followed by `L` is regarded as an `<integer>` number when possible (and with a warning if it contains a `"."`).
Only the ASCII digits 0–9 are recognized as digits, even in languages which have other representations of digits. The ‘decimal separator’ is always a period and never a comma.
Note that a leading plus or minus is not regarded by the parser as part of a numeric constant but as a unary operator applied to the constant.
### Note
When a string is parsed to input a numeric constant, the number may or may not be representable exactly in the C double type used. If not, one of the nearest representable numbers will be returned.
**R**'s own C code is used to convert constants to binary numbers, so the effect can be expected to be the same on all platforms implementing full IEC 60559 arithmetic (the most likely area of difference being the handling of numbers less than `[.Machine](zmachine)$double.xmin`). The same code is used by `<scan>`.
### See Also
`[Syntax](syntax)`. For complex numbers, see `<complex>`. `[Quotes](quotes)` for the parsing of character constants, `[Reserved](reserved)` for the “reserved words” in **R**.
### Examples
```
## You can create numbers using fixed or scientific formatting.
2.1
2.1e10
-2.1E-10
## The resulting objects have class numeric and type double.
class(2.1)
typeof(2.1)
## This holds even if what you typed looked like an integer.
class(2)
typeof(2)
## If you actually wanted integers, use an "L" suffix.
class(2L)
typeof(2L)
## These are equal but not identical
2 == 2L
identical(2, 2L)
## You can write numbers between 0 and 1 without a leading "0"
## (but typically this makes code harder to read)
.1234
sqrt(1i) # remember elementary math?
utils::str(0xA0)
identical(1L, as.integer(1))
## You can combine the "0x" prefix with the "L" suffix :
identical(0xFL, as.integer(15))
```
`eval` Evaluate an (Unevaluated) Expression
--------------------------------------------
### Description
Evaluate an **R** expression in a specified environment.
### Usage
```
eval(expr, envir = parent.frame(),
enclos = if(is.list(envir) || is.pairlist(envir))
parent.frame() else baseenv())
evalq(expr, envir, enclos)
eval.parent(expr, n = 1)
local(expr, envir = new.env())
```
### Arguments
| | |
| --- | --- |
| `expr` | an object to be evaluated. See ‘Details’. |
| `envir` | the `<environment>` in which `expr` is to be evaluated. May also be `NULL`, a list, a data frame, a pairlist or an integer as specified to `[sys.call](sys.parent)`. |
| `enclos` | Relevant when `envir` is a (pair)list or a data frame. Specifies the enclosure, i.e., where **R** looks for objects not found in `envir`. This can be `NULL` (interpreted as the base package environment, `[baseenv](environment)()`) or an environment. |
| `n` | number of parent generations to go back |
### Details
`eval` evaluates the `expr` argument in the environment specified by `envir` and returns the computed value. If `envir` is not specified, then the default is `[parent.frame](sys.parent)()` (the environment where the call to `eval` was made).
Objects to be evaluated can be of types `<call>` or `<expression>` or <name> (when the name is looked up in the current scope and its binding is evaluated), a [promise](delayedassign) or any of the basic types such as vectors, functions and environments (which are returned unchanged).
The `evalq` form is equivalent to `eval(quote(expr), ...)`. `eval` evaluates its first argument in the current scope before passing it to the evaluator: `evalq` avoids this.
`eval.parent(expr, n)` is a shorthand for `eval(expr, parent.frame(n))`.
If `envir` is a list (such as a data frame) or pairlist, it is copied into a temporary environment (with enclosure `enclos`), and the temporary environment is used for evaluation. So if `expr` changes any of the components named in the (pair)list, the changes are lost.
If `envir` is `NULL` it is interpreted as an empty list so no values could be found in `envir` and look-up goes directly to `enclos`.
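Because a (pair)list `envir` is only copied, assignments during evaluation are lost, whereas an environment passed as `envir` is modified in place:

```
## A list is copied into a temporary environment ...
d <- list(x = 1)
eval(quote(x <- x + 1), d)  # assignment happens in the temporary copy
d$x                         # still 1
## ... but an environment is used (and mutated) directly:
e <- new.env()
eval(quote(x <- 1), e)
get("x", envir = e)         # 1
```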
`local` evaluates an expression in a local environment. It is equivalent to `evalq` except that its default argument creates a new, empty environment. This is useful to create anonymous recursive functions and as a kind of limited namespace feature since variables defined in the environment are not visible from the outside.
### Value
The result of evaluating the object: for an expression vector this is the result of evaluating the last element.
### Note
Due to the difference in scoping rules, there are some differences between **R** and S in this area. In particular, the default enclosure in S is the global environment.
When evaluating expressions in a data frame that has been passed as an argument to a function, the relevant enclosure is often the caller's environment, i.e., one needs `eval(x, data, parent.frame())`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (`eval` only.)
### See Also
`<expression>`, `[quote](substitute)`, `[sys.frame](sys.parent)`, `[parent.frame](sys.parent)`, `<environment>`.
Further, `<force>` to *force* evaluation, typically of function arguments.
### Examples
```
eval(2 ^ 2 ^ 3)
mEx <- expression(2^2^3); mEx; 1 + eval(mEx)
eval({ xx <- pi; xx^2}) ; xx
a <- 3 ; aa <- 4 ; evalq(evalq(a+b+aa, list(a = 1)), list(b = 5)) # == 10
a <- 3 ; aa <- 4 ; evalq(evalq(a+b+aa, -1), list(b = 5)) # == 12
ev <- function() {
e1 <- parent.frame()
## Evaluate a in e1
aa <- eval(expression(a), e1)
## evaluate the expression bound to a in e1
a <- expression(x+y)
list(aa = aa, eval = eval(a, e1))
}
tst.ev <- function(a = 7) { x <- pi; y <- 1; ev() }
tst.ev() #-> aa : 7, eval : 4.14
a <- list(a = 3, b = 4)
with(a, a <- 5) # alters the copy of a from the list, discarded.
##
## Example of evalq()
##
N <- 3
env <- new.env()
assign("N", 27, envir = env)
## this version changes the visible copy of N only, since the argument
## passed to eval is '4'.
eval(N <- 4, env)
N
get("N", envir = env)
## this version does the assignment in env, and changes N only there.
evalq(N <- 5, env)
N
get("N", envir = env)
##
## Uses of local()
##
# Mutually recursive.
# gg gets value of last assignment, an anonymous version of f.
gg <- local({
k <- function(y)f(y)
f <- function(x) if(x) x*k(x-1) else 1
})
gg(10)
sapply(1:5, gg)
# Nesting locals: a is private storage accessible to k
gg <- local({
k <- local({
a <- 1
function(y){print(a <<- a+1);f(y)}
})
f <- function(x) if(x) x*k(x-1) else 1
})
sapply(1:5, gg)
ls(envir = environment(gg))
ls(envir = environment(get("k", envir = environment(gg))))
```
`drop` Drop Redundant Extent Information
-----------------------------------------
### Description
Delete the dimensions of an array which have only one level.
### Usage
```
drop(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an array (including a matrix). |
### Value
If `x` is an object with a `dim` attribute (e.g., a matrix or `<array>`), then `drop` returns an object like `x`, but with any extents of length one removed. Any accompanying `dimnames` attribute is adjusted and returned with `x`: if the result is a vector the `names` are taken from the `dimnames` (if any). If the result is a length-one vector, the names are taken from the first dimension with a dimname.
Array subsetting (`[[](extract)`) performs this reduction unless used with `drop = FALSE`, but sometimes it is useful to invoke `drop` directly.
### See Also
`[drop1](../../stats/html/add1)` which is used for dropping terms in models.
### Examples
```
dim(drop(array(1:12, dim = c(1,3,1,1,2,1,2)))) # = 3 2 2
drop(1:3 %*% 2:4) # scalar product
```
`isR` Are we using R, rather than S?
-------------------------------------
### Description
Test if running under **R**.
### Usage
```
is.R()
```
### Details
The function has been written such as to correctly run in all versions of **R**, S and S-PLUS. In order for code to be runnable in both **R** and S dialects previous to S-PLUS 8.0, your code must either define `is.R` or use it as
```
if (exists("is.R") && is.function(is.R) && is.R()) {
  ## R-specific code
} else {
  ## S-version of code
}
```
### Value
`is.R` returns `TRUE` if we are using **R** and `FALSE` otherwise.
### See Also
`[R.version](version)`, `<system>`.
### Examples
```
x <- stats::runif(20); small <- x < 0.4
## In the early years of R, 'which()' only existed in R:
if(is.R()) which(small) else seq(along = small)[small]
```
`Extract.data.frame` Extract or Replace Parts of a Data Frame
--------------------------------------------------------------
### Description
Extract or replace subsets of data frames.
### Usage
```
## S3 method for class 'data.frame'
x[i, j, drop = ]
## S3 replacement method for class 'data.frame'
x[i, j] <- value
## S3 method for class 'data.frame'
x[[..., exact = TRUE]]
## S3 replacement method for class 'data.frame'
x[[i, j]] <- value
## S3 replacement method for class 'data.frame'
x$name <- value
```
### Arguments
| | |
| --- | --- |
| `x` | data frame. |
| `i, j, ...` | elements to extract or replace. For `[` and `[[`, these are `numeric` or `character` or, for `[` only, empty or `logical`. Numeric values are coerced to integer as if by `[as.integer](integer)`. For replacement by `[`, a logical matrix is allowed. |
| `name` | A literal character string or a <name> (possibly [backtick](quotes) quoted). |
| `drop` | logical. If `TRUE` the result is coerced to the lowest possible dimension. The default is to drop if only one column is left, but **not** to drop if only one row is left. |
| `value` | A suitable replacement value: it will be repeated a whole number of times if necessary and it may be coerced: see the Coercion section. If `NULL`, deletes the column if a single column is selected. |
| `exact` | logical: see `[[](extract)`, and applies to column names. |
### Details
Data frames can be indexed in several modes. When `[` and `[[` are used with a single vector index (`x[i]` or `x[[i]]`), they index the data frame as if it were a list. In this usage a `drop` argument is ignored, with a warning.
There is no `data.frame` method for `$`, so `x$name` uses the default method which treats `x` as a list (with partial matching of column names if the match is unique, see `[Extract](extract)`). The replacement method (for `$`) checks `value` for the correct number of rows, and replicates it if necessary.
When `[` and `[[` are used with two indices (`x[i, j]` and `x[[i, j]]`) they act like indexing a matrix: `[[` can only be used to select one element. Note that for each selected column, `xj` say, typically (if it is not matrix-like), the resulting column will be `xj[i]`, and hence rely on the corresponding `[` method, see the examples section.
If `[` returns a data frame it will have unique (and non-missing) row names, if necessary transforming the row names using `<make.unique>`. Similarly, if columns are selected column names will be transformed to be unique if necessary (e.g., if columns are selected more than once, or if more than one column of a given name is selected if the data frame has duplicate column names).
When `drop = TRUE`, this is applied to the subsetting of any matrices contained in the data frame as well as to the data frame itself.
The replacement methods can be used to add whole column(s) by specifying non-existent column(s), in which case the column(s) are added at the right-hand edge of the data frame and numerical indices must be contiguous to existing indices. On the other hand, rows can be added at any row after the current last row, and the columns will be in-filled with missing values. Missing values in the indices are not allowed for replacement.
For `[` the replacement value can be a list: each element of the list is used to replace (part of) one column, recycling the list as necessary. If columns specified by number are created, the names (if any) of the corresponding list elements are used to name the columns. If the replacement is not selecting rows, list values can contain `NULL` elements which will cause the corresponding columns to be deleted. (See the Examples.)
Matrix indexing (`x[i]` with a logical or a 2-column integer matrix `i`) using `[` is not recommended. For extraction, `x` is first coerced to a matrix. For replacement, logical matrix indices must be of the same dimension as `x`. Replacements are done one column at a time, with multiple type coercions possibly taking place.
Both `[` and `[[` extraction methods partially match row names. By default neither partially matches column names, but `[[` will if `exact = FALSE` (and with a warning if `exact = NA`). If you want exact matching on row names use `<match>`, as in the examples.
### Value
For `[` a data frame, list or a single column (the latter two only when dimensions have been dropped). If matrix indexing is used for extraction a vector results. If the result would be a data frame an error results if undefined columns are selected (as there is no general concept of a 'missing' column in a data frame). Otherwise if a single column is selected and this is undefined the result is `NULL`.
For `[[` a column of the data frame or `NULL` (extraction with one index) or a length-one vector (extraction with two indices).
For `$`, a column of the data frame (or `NULL`).
For `[<-`, `[[<-` and `$<-`, a data frame.
### Coercion
The story over when replacement values are coerced is a complicated one, and one that has changed during **R**'s development. This section is a guide only.
When `[` and `[[` are used to add or replace a whole column, no coercion takes place but `value` will be replicated (by calling the generic function `<rep>`) to the right length if an exact number of repeats can be used.
When `[` is used with a logical matrix, each value is coerced to the type of the column into which it is to be placed.
When `[` and `[[` are used with two indices, the column will be coerced as necessary to accommodate the value.
Note that when the replacement value is an array (including a matrix) it is *not* treated as a series of columns (as `<data.frame>` and `<as.data.frame>` do) but inserted as a single column.
### Warning
The default behaviour when only one *row* is left is equivalent to specifying `drop = FALSE`. To drop from a data frame to a list, `drop = TRUE` has to be specified explicitly.
Arguments other than `drop` and `exact` should not be named: there is a warning if they are and the behaviour differs from the description here.
### See Also
`<subset>` which is often easier for extraction, `<data.frame>`, `[Extract](extract)`.
### Examples
```
sw <- swiss[1:5, 1:4] # select a manageable subset
sw[1:3] # select columns
sw[, 1:3] # same
sw[4:5, 1:3] # select rows and columns
sw[1] # a one-column data frame
sw[, 1, drop = FALSE] # the same
sw[, 1] # a (unnamed) vector
sw[[1]] # the same
sw$Fert # the same (possibly w/ warning, see ?Extract)
sw[1,] # a one-row data frame
sw[1,, drop = TRUE] # a list
sw["C", ] # partially matches
sw[match("C", row.names(sw)), ] # no exact match
try(sw[, "Ferti"]) # column names must match exactly
sw[sw$Fertility > 90,] # logical indexing, see also ?subset
sw[c(1, 1:2), ] # duplicate row, unique row names are created
sw[sw <= 6] <- 6 # logical matrix indexing
sw
## adding a column
sw["new1"] <- LETTERS[1:5] # adds a character column
sw[["new2"]] <- letters[1:5] # ditto
sw[, "new3"] <- LETTERS[1:5] # ditto
sw$new4 <- 1:5
sapply(sw, class)
sw$new # -> NULL: no unique partial match
sw$new4 <- NULL # delete the column
sw
sw[6:8] <- list(letters[10:14], NULL, aa = 1:5)
# update col. 6, delete 7, append
sw
## matrices in a data frame
A <- data.frame(x = 1:3, y = I(matrix(4:9, 3, 2)),
z = I(matrix(letters[1:9], 3, 3)))
A[1:3, "y"] # a matrix
A[1:3, "z"] # a matrix
A[, "y"] # a matrix
stopifnot(identical(colnames(A), c("x", "y", "z")), ncol(A) == 3L,
          identical(A[, "y"], A[1:3, "y"]),
          inherits(A[, "y"], "AsIs"))
## keeping special attributes: use a class with a
## "as.data.frame" and "[" method;
## "avector" := vector that keeps attributes. Could provide a constructor
## avector <- function(x) { class(x) <- c("avector", class(x)); x }
as.data.frame.avector <- as.data.frame.vector
`[.avector` <- function(x,i,...) {
r <- NextMethod("[")
mostattributes(r) <- attributes(x)
r
}
d <- data.frame(i = 0:7, f = gl(2,4),
u = structure(11:18, unit = "kg", class = "avector"))
str(d[2:4, -1]) # 'u' keeps its "unit"
```
r None
`copyright` Copyrights of Files Used to Build R
------------------------------------------------
### Description
**R** is released under the ‘GNU Public License’: see `<license>` for details. The license describes your right to use **R**. Copyright is concerned with ownership of intellectual rights, and some of the software used has conditions requiring that the copyright be explicitly stated: see the ‘Details’ section. We are grateful to these people and other contributors (see `<contributors>`) for the ability to use their work.
### Details
The file ‘[R\_HOME](rhome)/COPYRIGHTS’ lists the copyrights in full detail.
r None
`agrep` Approximate String Matching (Fuzzy Matching)
-----------------------------------------------------
### Description
Searches for approximate matches to `pattern` (the first argument) within each element of the character vector `x` (the second argument) using the generalized Levenshtein edit distance (the minimal possibly weighted number of insertions, deletions and substitutions needed to transform one string into another).
### Usage
```
agrep(pattern, x, max.distance = 0.1, costs = NULL,
ignore.case = FALSE, value = FALSE, fixed = TRUE,
useBytes = FALSE)
agrepl(pattern, x, max.distance = 0.1, costs = NULL,
ignore.case = FALSE, fixed = TRUE, useBytes = FALSE)
```
### Arguments
| | |
| --- | --- |
| `pattern` | a non-empty character string to be matched. For `fixed = FALSE` this should contain an extended [regular expression](regex). Coerced by `[as.character](character)` to a string if possible. |
| `x` | character vector where matches are sought. Coerced by `[as.character](character)` to a character vector if possible. |
| `max.distance` | Maximum distance allowed for a match. Expressed either as integer, or as a fraction of the *pattern* length times the maximal transformation cost (will be replaced by the smallest integer not less than the corresponding fraction), or a list with possible components
`cost`:
maximum number/fraction of match cost (generalized Levenshtein distance)
`all`:
maximal number/fraction of *all* transformations (insertions, deletions and substitutions)
`insertions`:
maximum number/fraction of insertions
`deletions`:
maximum number/fraction of deletions
`substitutions`:
maximum number/fraction of substitutions. If `cost` is not given, `all` defaults to 10%, and the other transformation number bounds default to `all`. The component names can be abbreviated. |
| `costs` | a numeric vector or list with names partially matching insertions, deletions and substitutions giving the respective costs for computing the generalized Levenshtein distance, or `NULL` (default) indicating using unit cost for all three possible transformations. Coerced to integer via `[as.integer](integer)` if possible. |
| `ignore.case` | if `FALSE`, the pattern matching is *case sensitive* and if `TRUE`, case is ignored during matching. |
| `value` | if `FALSE`, a vector containing the (integer) indices of the matches determined is returned and if `TRUE`, a vector containing the matching elements themselves is returned. |
| `fixed` | logical. If `TRUE` (default), the pattern is matched literally (as is). Otherwise, it is matched as a regular expression. |
| `useBytes` | logical. In a multibyte locale, should the comparison be character-by-character (the default) or byte-by-byte. |
### Details
The Levenshtein edit distance is used as measure of approximateness: it is the (possibly cost-weighted) total number of insertions, deletions and substitutions required to transform one string into another.
This uses the `tre` code by Ville Laurikari (<https://github.com/laurikari/tre>), which supports MBCS character matching.
The main effect of `useBytes` is to avoid errors/warnings about invalid inputs and spurious matches in multibyte locales. It inhibits the conversion of inputs with marked encodings, and is forced if any input is found which is marked as `"bytes"` (see `[Encoding](encoding)`).
### Value
`agrep` returns a vector giving the indices of the elements that yielded a match, or, if `value` is `TRUE`, the matched elements (after coercion, preserving names but no other attributes).
`agrepl` returns a logical vector.
### Note
Since someone who read the description carelessly even filed a bug report on it, do note that this matches substrings of each element of `x` (just as `<grep>` does) and **not** whole elements. See also `[adist](../../utils/html/adist)` in package utils, which optionally returns the offsets of the matched substrings.
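A minimal sketch of this substring behaviour:

```r
## The pattern needs only to match somewhere inside each element,
## not the whole element:
agrep("lazy", "the quick brown fox jumps over the lazy dog")  # index 1
agrepl("lasy", c("1 lazy 2", "lousy", "neither"))  # first element matches
```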
### Author(s)
Original version in **R** < 2.10.0 by David Meyer. Current version by Brian Ripley and Kurt Hornik.
### See Also
`<grep>`, `[adist](../../utils/html/adist)`. A different interface to approximate string matching is provided by `[aregexec](../../utils/html/aregexec)()`.
### Examples
```
agrep("lasy", "1 lazy 2")
agrep("lasy", c(" 1 lazy 2", "1 lasy 2"), max.distance = list(sub = 0))
agrep("laysy", c("1 lazy", "1", "1 LAZY"), max.distance = 2)
agrep("laysy", c("1 lazy", "1", "1 LAZY"), max.distance = 2, value = TRUE)
agrep("laysy", c("1 lazy", "1", "1 LAZY"), max.distance = 2, ignore.case = TRUE)
```
r None
`sprintf` Use C-style String Formatting Commands
-------------------------------------------------
### Description
A wrapper for the C function `sprintf`, which returns a character vector containing a formatted combination of text and variable values.
### Usage
```
sprintf(fmt, ...)
gettextf(fmt, ..., domain = NULL)
```
### Arguments
| | |
| --- | --- |
| `fmt` | a character vector of format strings, each of up to 8192 bytes. |
| `...` | values to be passed into `fmt`. Only logical, integer, real and character vectors are supported, but some coercion will be done: see the ‘Details’ section. Up to 100. |
| `domain` | see `<gettext>`. |
### Details
`sprintf` is a wrapper for the system `sprintf` C-library function. Attempts are made to check that the modes of the values passed match the format supplied, and **R**'s special values (`NA`, `Inf`, `-Inf` and `NaN`) are handled correctly.
`gettextf` is a convenience function which provides C-style string formatting with possible translation of the format string.
The arguments (including `fmt`) are recycled if possible a whole number of times to the length of the longest, and then the formatting is done in parallel. Zero-length arguments are allowed and will give a zero-length result. All arguments are evaluated even if unused, and hence some types (e.g., `"symbol"` or `"language"`, see `<typeof>`) are not allowed. Arguments unused by `fmt` result in a warning. (The format `%.0s` can be used to “skip” an argument.)
The following is abstracted from Kernighan and Ritchie (see References): however the actual implementation will follow the C99 standard and fine details (especially the behaviour under user error) may depend on the platform. References to numbered arguments come from POSIX.
The string `fmt` contains normal characters, which are passed through to the output string, and also conversion specifications which operate on the arguments provided through `...`. The allowed conversion specifications start with a `%` and end with one of the letters in the set `aAdifeEgGosxX%`. These letters denote the following types:
`d`, `i`, `o`, `x`, `X`
Integer value, `o` being octal, `x` and `X` being hexadecimal (using the same case for `a-f` as the code). Numeric variables with exactly integer values will be coerced to integer. Formats `d` and `i` can also be used for logical variables, which will be converted to `0`, `1` or `NA`.
`f`
Double precision value, in “**f**ixed point” decimal notation of the form `[-]mmm.ddd`. The number of decimal places (the `ddd` part) is specified by the precision: the default is 6; a precision of 0 suppresses the decimal point. Non-finite values are converted to `NA`, `NaN` or (perhaps a sign followed by) `Inf`.
`e`, `E`
Double precision value, in “**e**xponential” decimal notation of the form `[-]m.ddde[+-]xx` or `[-]m.dddE[+-]xx`.
`g`, `G`
Double precision value, in `%e` or `%E` format if the exponent is less than -4 or greater than or equal to the precision, and `%f` format otherwise. (The precision (default 6) specifies the number of *significant* digits here, whereas in `%f, %e`, it is the number of digits after the decimal point.)
`a`, `A`
Double precision value, in binary notation of the form `[-]0xh.hhhp[+-]d`. This is a binary fraction expressed in hex multiplied by a (decimal) power of 2. The number of hex digits after the decimal point is specified by the precision: the default is enough digits to represent exactly the internal binary representation. Non-finite values are converted to `NA`, `NaN` or (perhaps a sign followed by) `Inf`. Format `%a` uses lower-case for `x`, `p` and the hex values: format `%A` uses upper-case.
This should be supported on all platforms as it is a feature of C99. The format is not uniquely defined: although it would be possible to make the leading `h` always zero or one, this is not always done. Most systems will suppress trailing zeros, but a few do not. On a well-written platform, for normal numbers there will be a leading one before the decimal point plus (by default) 13 hexadecimal digits, hence 53 bits. The treatment of denormalized (aka ‘subnormal’) numbers is very platform-dependent.
`s`
Character string. Character `NA`s are converted to `"NA"`.
`%`
Literal `%` (none of the extra formatting characters given below are permitted in this case).
Conversion by `[as.character](character)` is used for non-character arguments with `s` and by `[as.double](double)` for non-double arguments with `f, e, E, g, G`. NB: the length is determined before conversion, so do not rely on the internal coercion if this would change the length. The coercion is done only once, so if `length(fmt) > 1` then all elements must expect the same types of arguments.
In addition, between the initial `%` and the terminating conversion character there may be, in any order:
`m.n`
Two numbers separated by a period, denoting the field width (`m`) and the precision (`n`).
`-`
Left adjustment of converted argument in its field.
`+`
Always print number with sign: by default only negative numbers are printed with a sign.
a space
Prefix a space if the first character is not a sign.
`0`
For numbers, pad to the field width with leading zeros. For characters, this zero-pads on some platforms and is ignored on others.
`#`
specifies “alternate output” for numbers, its action depending on the type: For `x` or `X`, `0x` or `0X` will be prefixed to a non-zero result. For `e`, `E`, `f`, `g` and `G`, the output will always have a decimal point; for `g` and `G`, trailing zeros will not be removed.
Further, immediately after `%` may come `1$` to `99$` to refer to a numbered argument: this allows arguments to be referenced out of order and is mainly intended for translators of error messages. If this is done it is best if all formats are numbered: if not, the unnumbered ones process the arguments in order. See the examples. This notation allows arguments to be used more than once, in which case they must be used as the same type (integer, double or character).
A field width or precision (but not both) may be indicated by an asterisk `*`: in this case an argument specifies the desired number. A negative field width is taken as a '-' flag followed by a positive field width. A negative precision is treated as if the precision were omitted. The argument should be integer, but a double argument will be coerced to integer.
There is a limit of 8192 bytes on elements of `fmt`, and on strings included from a single `%`*letter* conversion specification.
Field widths and precisions of `%s` conversions are interpreted as bytes, not characters, as described in the C standard.
The C doubles used for **R** numerical vectors have signed zeros, which `sprintf` may output as `-0`, `-0.000` ....
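As a quick sketch of the `#` flag and signed-zero behaviour described above:

```r
sprintf("%g",  2)       # "2"       -- trailing zeros removed
sprintf("%#g", 2)       # "2.00000" -- '#' keeps the decimal point and zeros
sprintf("%#x", 255)     # "0xff"    -- '#' prefixes non-zero hex output
sprintf("%.1f", -0.04)  # "-0.0"    -- a signed zero can appear in output
```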
### Value
A character vector of length that of the longest input. If any element of `fmt` or any character argument is declared as UTF-8, the element of the result will be in UTF-8 and have the encoding declared as UTF-8. Otherwise it will be in the current locale's encoding.
### Warning
The format string is passed down to the OS's `sprintf` function, and incorrect formats can cause the latter to crash the **R** process. **R** does perform sanity checks on the format, but not all possible user errors on all platforms have been tested, and some might be terminal.
The behaviour on inputs not documented here is ‘undefined’, which means it is allowed to differ by platform.
### Author(s)
Original code by Jonathan Rougier.
### References
Kernighan, B. W. and Ritchie, D. M. (1988) *The C Programming Language.* Second edition, Prentice Hall. Describes the format options in table B-1 in the Appendix.
The C Standards, especially ISO/IEC 9899:1999 for ‘C99’. Links can be found at <https://developer.r-project.org/Portability.html>.
<https://pubs.opengroup.org/onlinepubs/9699919799/functions/snprintf.html> for POSIX extensions such as numbered arguments.
`man sprintf` on a Unix-alike system.
### See Also
`[formatC](formatc)` for a way of formatting vectors of numbers in a similar fashion.
`<paste>` for another way of creating a vector combining text and values.
`<gettext>` for the mechanisms for the automated translation of text.
### Examples
```
## be careful with the format: most things in R are floats
## only integer-valued reals get coerced to integer.
sprintf("%s is %f feet tall\n", "Sven", 7.1) # OK
try(sprintf("%s is %i feet tall\n", "Sven", 7.1)) # not OK
sprintf("%s is %i feet tall\n", "Sven", 7 ) # OK
## use a literal % :
sprintf("%.0f%% said yes (out of a sample of size %.0f)", 66.666, 3)
## various formats of pi :
sprintf("%f", pi)
sprintf("%.3f", pi)
sprintf("%1.0f", pi)
sprintf("%5.1f", pi)
sprintf("%05.1f", pi)
sprintf("%+f", pi)
sprintf("% f", pi)
sprintf("%-10f", pi) # left justified
sprintf("%e", pi)
sprintf("%E", pi)
sprintf("%g", pi)
sprintf("%g", 1e6 * pi) # -> exponential
sprintf("%.9g", 1e6 * pi) # -> "fixed"
sprintf("%G", 1e-6 * pi)
## no truncation:
sprintf("%1.f", 101)
## re-use one argument three times, show difference between %x and %X
xx <- sprintf("%1$d %1$x %1$X", 0:15)
xx <- matrix(xx, dimnames = list(rep("", 16), "%d%x%X"))
noquote(format(xx, justify = "right"))
## More sophisticated:
sprintf("min 10-char string '%10s'",
c("a", "ABC", "and an even longer one"))
## Platform-dependent bad example from qdapTools 1.0.0:
## may pad with spaces or zeroes.
sprintf("%09s", month.name)
n <- 1:18
sprintf(paste0("e with %2d digits = %.", n, "g"), n, exp(1))
## Using arguments out of order
sprintf("second %2$1.0f, first %1$5.2f, third %3$1.0f", pi, 2, 3)
## Using asterisk for width or precision
sprintf("precision %.*f, width '%*.3f'", 3, pi, 8, pi)
## Asterisk and argument re-use, 'e' example reiterated:
sprintf("e with %1$2d digits = %2$.*1$g", n, exp(1))
## re-cycle arguments
sprintf("%s %d", "test", 1:3)
## binary output showing rounding/representation errors
x <- seq(0, 1.0, 0.1); y <- c(0,.1,.2,.3,.4,.5,.6,.7,.8,.9,1)
cbind(x, sprintf("%a", x), sprintf("%a", y))
```
r None
`merge` Merge Two Data Frames
------------------------------
### Description
Merge two data frames by common columns or row names, or do other versions of database *join* operations.
### Usage
```
merge(x, y, ...)
## Default S3 method:
merge(x, y, ...)
## S3 method for class 'data.frame'
merge(x, y, by = intersect(names(x), names(y)),
by.x = by, by.y = by, all = FALSE, all.x = all, all.y = all,
sort = TRUE, suffixes = c(".x",".y"), no.dups = TRUE,
incomparables = NULL, ...)
```
### Arguments
| | |
| --- | --- |
| `x, y` | data frames, or objects to be coerced to one. |
| `by, by.x, by.y` | specifications of the columns used for merging. See ‘Details’. |
| `all` | logical; `all = L` is shorthand for `all.x = L` and `all.y = L`, where `L` is either `[TRUE](logical)` or `FALSE`. |
| `all.x` | logical; if `TRUE`, then extra rows will be added to the output, one for each row in `x` that has no matching row in `y`. These rows will have `NA`s in those columns that are usually filled with values from `y`. The default is `FALSE`, so that only rows with data from both `x` and `y` are included in the output. |
| `all.y` | logical; analogous to `all.x`. |
| `sort` | logical. Should the result be sorted on the `by` columns? |
| `suffixes` | a character vector of length 2 specifying the suffixes to be used for making unique the names of columns in the result which are not used for merging (appearing in `by` etc). |
| `no.dups` | logical indicating that `suffixes` are appended in more cases to avoid duplicated column names in the result. This was implicitly false before **R** version 3.5.0. |
| `incomparables` | values which cannot be matched. See `<match>`. This is intended to be used for merging on one column, so these are incomparable values of that column. |
| `...` | arguments to be passed to or from methods. |
### Details
`merge` is a generic function whose principal method is for data frames: the default method coerces its arguments to data frames and calls the `"data.frame"` method.
By default the data frames are merged on the columns with names they both have, but separate specifications of the columns can be given by `by.x` and `by.y`. The rows in the two data frames that match on the specified columns are extracted, and joined together. If there is more than one match, all possible matches contribute one row each. For the precise meaning of ‘match’, see `<match>`.
Columns to merge on can be specified by name, number or by a logical vector: the name `"row.names"` or the number `0` specifies the row names. If specified by name it must correspond uniquely to a named column in the input.
If `by` or both `by.x` and `by.y` are of length 0 (a length zero vector or `NULL`), the result, `r`, is the *Cartesian product* of `x` and `y`, i.e., `dim(r) = c(nrow(x)*nrow(y), ncol(x) + ncol(y))`.
If `all.x` is true, all the non-matching cases of `x` are appended to the result as well, with `NA` filled in the corresponding columns of `y`; analogously for `all.y`.
If the columns in the data frames not used in merging have any common names, these have `suffixes` (`".x"` and `".y"` by default) appended to try to make the names of the result unique. If this is not possible, an error is thrown.
If a `by.x` column name matches one of `y`, and if `no.dups` is true (as by default), the y version gets suffixed as well, avoiding duplicate column names in the result.
The complexity of the algorithm used is proportional to the length of the answer.
In SQL database terminology, the default value of `all = FALSE` gives a *natural join*, a special case of an *inner join*. Specifying `all.x = TRUE` gives a *left (outer) join*, `all.y = TRUE` a *right (outer) join*, and both (`all = TRUE`) a *(full) outer join*. DBMSes do not match `NULL` records, equivalent to `incomparables = NA` in **R**.
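A toy illustration of these join types (the tables and column names are made up for the example):

```r
emp <- data.frame(id = 1:3, name = c("a", "b", "c"))
dep <- data.frame(id = 2:4, dept = c("x", "y", "z"))
merge(emp, dep)                # natural/inner join: ids 2 and 3
merge(emp, dep, all.x = TRUE)  # left outer join:  ids 1:3, NA dept for 1
merge(emp, dep, all.y = TRUE)  # right outer join: ids 2:4, NA name for 4
merge(emp, dep, all  = TRUE)   # full outer join:  ids 1:4
```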
### Value
A data frame. The rows are by default lexicographically sorted on the common columns, but for `sort = FALSE` are in an unspecified order. The columns are the common columns followed by the remaining columns in `x` and then those in `y`. If the matching involved row names, an extra character column called `Row.names` is added at the left, and in all cases the result has ‘automatic’ row names.
### Note
This is intended to work with data frames with vector-like columns: some aspects work with data frames containing matrices, but not all.
Currently long vectors are not accepted for inputs, which are thus restricted to less than 2^31 rows. That restriction also applies to the result for 32-bit platforms.
### See Also
`<data.frame>`, `<by>`, `<cbind>`.
`[dendrogram](../../stats/html/dendrogram)` for a class which has a `merge` method.
### Examples
```
authors <- data.frame(
## I(*) : use character columns of names to get sensible sort order
surname = I(c("Tukey", "Venables", "Tierney", "Ripley", "McNeil")),
nationality = c("US", "Australia", "US", "UK", "Australia"),
deceased = c("yes", rep("no", 4)))
authorN <- within(authors, { name <- surname; rm(surname) })
books <- data.frame(
name = I(c("Tukey", "Venables", "Tierney",
"Ripley", "Ripley", "McNeil", "R Core")),
title = c("Exploratory Data Analysis",
"Modern Applied Statistics ...",
"LISP-STAT",
"Spatial Statistics", "Stochastic Simulation",
"Interactive Data Analysis",
"An Introduction to R"),
other.author = c(NA, "Ripley", NA, NA, NA, NA,
"Venables & Smith"))
(m0 <- merge(authorN, books))
(m1 <- merge(authors, books, by.x = "surname", by.y = "name"))
m2 <- merge(books, authors, by.x = "name", by.y = "surname")
stopifnot(exprs = {
identical(m0, m2[, names(m0)])
as.character(m1[, 1]) == as.character(m2[, 1])
all.equal(m1[, -1], m2[, -1][ names(m1)[-1] ])
identical(dim(merge(m1, m2, by = NULL)),
c(nrow(m1)*nrow(m2), ncol(m1)+ncol(m2)))
})
## "R core" is missing from authors and appears only here :
merge(authors, books, by.x = "surname", by.y = "name", all = TRUE)
## example of using 'incomparables'
x <- data.frame(k1 = c(NA,NA,3,4,5), k2 = c(1,NA,NA,4,5), data = 1:5)
y <- data.frame(k1 = c(NA,2,NA,4,5), k2 = c(NA,NA,3,4,5), data = 1:5)
merge(x, y, by = c("k1","k2")) # NA's match
merge(x, y, by = "k1") # NA's match, so 6 rows
merge(x, y, by = "k2", incomparables = NA) # 2 rows
```
r None
`Extract` Extract or Replace Parts of an Object
------------------------------------------------
### Description
Operators acting on vectors, matrices, arrays and lists to extract or replace parts.
### Usage
```
x[i]
x[i, j, ... , drop = TRUE]
x[[i, exact = TRUE]]
x[[i, j, ..., exact = TRUE]]
x$name
getElement(object, name)
x[i] <- value
x[i, j, ...] <- value
x[[i]] <- value
x$name <- value
```
### Arguments
| | |
| --- | --- |
| `x, object` | object from which to extract element(s) or in which to replace element(s). |
| `i, j, ...` | indices specifying elements to extract or replace. Indices are `numeric` or `character` vectors or empty (missing) or `NULL`. Numeric values are coerced to integer as by `[as.integer](integer)` (and hence truncated towards zero). Character vectors will be matched to the `<names>` of the object (or for matrices/arrays, the `<dimnames>`): see ‘Character indices’ below for further details. For `[`-indexing only: `i`, `j`, `...` can be logical vectors, indicating elements/slices to select. Such vectors are recycled if necessary to match the corresponding extent. `i`, `j`, `...` can also be negative integers, indicating elements/slices to leave out of the selection. When indexing arrays by `[` a single argument `i` can be a matrix with as many columns as there are dimensions of `x`; the result is then a vector with elements corresponding to the sets of indices in each row of `i`. An index value of `NULL` is treated as if it were `integer(0)`. |
| `name` | A literal character string or a <name> (possibly [backtick](quotes) quoted). For extraction, this is normally (see under ‘Environments’) partially matched to the `<names>` of the object. |
| `drop` | For matrices and arrays. If `TRUE` the result is coerced to the lowest possible dimension (see the examples). This only works for extracting elements, not for the replacement. See `<drop>` for further details. |
| `exact` | Controls possible partial matching of `[[` when extracting by a character vector (for most objects, but see under ‘Environments’). The default is no partial matching. Value `NA` allows partial matching but issues a warning when it occurs. Value `FALSE` allows partial matching without any warning. |
| `value` | typically an array-like **R** object of a similar class as `x`. |
### Details
These operators are generic. You can write methods to handle indexing of specific classes of objects, see [InternalMethods](internalmethods) as well as `[[.data.frame](extract.data.frame)` and `[[.factor](extract.factor)`. The descriptions here apply only to the default methods. Note that separate methods are required for the replacement functions `[<-`, `[[<-` and `$<-` for use when indexing occurs on the assignment side of an expression.
The most important distinction between `[`, `[[` and `$` is that the `[` can select more than one element whereas the other two select a single element.
The default methods work somewhat differently for atomic vectors, matrices/arrays and for recursive (list-like, see `<is.recursive>`) objects. `$` is only valid for recursive objects (and `[NULL](null)`), and is only discussed in the section below on recursive objects.
Subsetting (except by an empty index) will drop all attributes except `names`, `dim` and `dimnames`.
Indexing can occur on the right-hand-side of an expression for extraction, or on the left-hand-side for replacement. When an index expression appears on the left side of an assignment (known as *subassignment*) then that part of `x` is set to the value of the right hand side of the assignment. In this case no partial matching of character indices is done, and the left-hand-side is coerced as needed to accept the values. For vectors, the answer will be of the higher of the types of `x` and `value` in the hierarchy raw < logical < integer < double < complex < character < list < expression. Attributes are preserved (although `names`, `dim` and `dimnames` will be adjusted suitably). Subassignment is done sequentially, so if an index is specified more than once the latest assigned value for an index will result.
It is an error to apply any of these operators to an object which is not subsettable (e.g., a function).
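The type promotion during subassignment described above can be seen directly:

```r
x <- 1:4       # integer
x[2] <- 2.5    # promoted along the hierarchy ...
typeof(x)      # "double"
x[3] <- "n/a"  # ... and again
typeof(x)      # "character"
```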
### Atomic vectors
The usual form of indexing is `[`. `[[` can be used to select a single element *dropping* `<names>`, whereas `[` keeps them, e.g., in `c(abc = 123)[1]`.
The index object `i` can be numeric, logical, character or empty. Indexing by factors is allowed and is equivalent to indexing by the numeric codes (see `<factor>`) and not by the character values which are printed (for which use `[as.character(i)]`).
An empty index selects all values: this is most often used to replace all the entries but keep the `<attributes>`.
### Matrices and arrays
Matrices and arrays are vectors with a dimension attribute and so all the vector forms of indexing can be used with a single index. The result will be an unnamed vector unless `x` is one-dimensional when it will be a one-dimensional array.
The most common form of indexing a *k*-dimensional array is to specify *k* indices to `[`. As for vector indexing, the indices can be numeric, logical, character, empty or even factor. And again, indexing by factors is equivalent to indexing by the numeric codes, see ‘Atomic vectors’ above.
An empty index (a comma separated blank) indicates that all entries in that dimension are selected. The argument `drop` applies to this form of indexing.
A third form of indexing is via a numeric matrix with one column for each dimension: each row of the index matrix then selects a single element of the array, and the result is a vector. Negative indices are not allowed in the index matrix. `NA` and zero values are allowed: rows of an index matrix containing a zero are ignored, whereas rows containing an `NA` produce an `NA` in the result.
Indexing via a character matrix with one column per dimension is also supported if the array has dimension names. As with numeric matrix indexing, each row of the index matrix selects a single element of the array. Indices are matched against the appropriate dimension names. `NA` is allowed and will produce an `NA` in the result. Unmatched indices as well as the empty string (`""`) are not allowed and will result in an error.
A vector obtained by matrix indexing will be unnamed unless `x` is one-dimensional when the row names (if any) will be indexed to provide names for the result.
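Both matrix-index forms can be sketched as:

```r
m <- matrix(1:12, 3, 4, dimnames = list(letters[1:3], LETTERS[1:4]))
i <- rbind(c(1, 2), c(3, 4))  # one row per element wanted
m[i]   # c(m[1, 2], m[3, 4]), i.e. c(4, 12)
ci <- rbind(c("a", "B"), c("c", "D"))
m[ci]  # the same elements, selected via the dimnames
```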
### Recursive (list-like) objects
Indexing by `[` is similar to atomic vectors and selects a list of the specified element(s).
Both `[[` and `$` select a single element of the list. The main difference is that `$` does not allow computed indices, whereas `[[` does. `x$name` is equivalent to `x[["name", exact = FALSE]]`. Also, the partial matching behavior of `[[` can be controlled using the `exact` argument.
`getElement(x, name)` is a version of `x[[name, exact = TRUE]]` which for formally classed (S4) objects returns `[slot](../../methods/html/slot)(x, name)`, hence providing access to even more general list-like objects.
`[` and `[[` are sometimes applied to other recursive objects such as <call>s and <expression>s. Pairlists are coerced to lists for extraction by `[`, but all three operators can be used for replacement.
`[[` can be applied recursively to lists, so that if the single index `i` is a vector of length `p`, `alist[[i]]` is equivalent to `alist[[i1]]...[[ip]]` providing all but the final indexing results in a list.
Note that in all three kinds of replacement, a value of `NULL` deletes the corresponding item of the list. To set entries to `NULL`, you need `x[i] <- list(NULL)`.
When `$<-` is applied to a `NULL` `x`, it first coerces `x` to `list()`. This is what also happens with `[[<-` where in **R** versions less than 4.y.z, a length one value resulted in a length one (atomic) *vector*.
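For example:

```r
l <- list(a = 1, b = 2, c = 3)
l$b <- NULL           # deletes component "b"
names(l)              # "a" "c"
l["c"] <- list(NULL)  # keeps "c", but its value becomes NULL
str(l)
```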
### Environments
Both `$` and `[[` can be applied to environments. Only character indices are allowed and no partial matching is done. The semantics of these operations are those of `get(i, envir = x, inherits = FALSE)`. If no match is found then `NULL` is returned. The replacement versions, `$<-` and `[[<-`, can also be used. Again, only character arguments are allowed. The semantics in this case are those of `assign(i, value, envir = x, inherits = FALSE)`. Such an assignment will either create a new binding or change the existing binding in `x`.
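A brief sketch of these environment semantics:

```r
e <- new.env()
e$x <- 42            # same effect as assign("x", 42, envir = e)
e[["x"]]             # 42
e$xyz                # NULL: no match, and no partial matching of "x"
get("x", envir = e)  # the equivalent get() call
```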
### NAs in indexing
When extracting, a numerical, logical or character `NA` index picks an unknown element and so returns `NA` in the corresponding element of a logical, integer, numeric, complex or character result, and `NULL` for a list. (It returns `00` for a raw result.)
When replacing (that is using indexing on the lhs of an assignment) `NA` does not select any element to be replaced. As there is ambiguity as to whether an element of the rhs should be used or not, this is only allowed if the rhs value is of length one (so the two interpretations would have the same outcome). (The documented behaviour of S was that an `NA` replacement index ‘goes nowhere’ but uses up an element of `value`: Becker *et al* p. 359. However, that has not been true of other implementations.)
### Argument matching
Note that these operations do not match their index arguments in the standard way: argument names are ignored and positional matching only is used. So `m[j = 2, i = 1]` is equivalent to `m[2, 1]` and **not** to `m[1, 2]`.
This may not be true for methods defined for them; for example it is not true for the `data.frame` methods described in `[[.data.frame](extract.data.frame)` which warn if `i` or `j` is named and have undocumented behaviour in that case.
To avoid confusion, do not name index arguments (but `drop` and `exact` must be named).
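A quick demonstration with a matrix:

```r
m <- matrix(1:6, nrow = 2)
m[j = 2, i = 1]                      # names silently ignored: purely positional
identical(m[j = 2, i = 1], m[2, 1])  # TRUE
```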
### S4 methods
These operators are also implicit S4 generics, but as primitives, S4 methods will be dispatched only on S4 objects `x`.
The implicit generics for the `$` and `$<-` operators do not have `name` in their signature because the grammar only allows symbols or string constants for the `name` argument.
### Character indices
Character indices can in some circumstances be partially matched (see `<pmatch>`) to the names or dimnames of the object being subsetted (but never for subassignment). Unlike S (Becker *et al* p. 358), **R** never uses partial matching when extracting by `[`, and partial matching is not by default used by `[[` (see argument `exact`).
Thus the default behaviour is to use partial matching only when extracting from recursive objects (except environments) by `$`. Even in that case, warnings can be switched on by `<options>(warnPartialMatchDollar = TRUE)`.
Neither empty (`""`) nor `NA` indices match any names, not even empty nor missing names. If any object has no names or appropriate dimnames, they are taken as all `""` and so match nothing.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<names>` for details of matching to names, and `<pmatch>` for partial matching.
`<list>`, `<array>`, `<matrix>`.
`[[.data.frame](extract.data.frame)` and `[[.factor](extract.factor)` for the behaviour when applied to data.frame and factors.
`[Syntax](syntax)` for operator precedence, and the ‘R Language Definition’ manual about indexing details.
`[NULL](null)` for details of indexing null objects.
### Examples
```
x <- 1:12
m <- matrix(1:6, nrow = 2, dimnames = list(c("a", "b"), LETTERS[1:3]))
li <- list(pi = pi, e = exp(1))
x[10] # the tenth element of x
x <- x[-1] # delete the 1st element of x
m[1,] # the first row of matrix m
m[1, , drop = FALSE] # is a 1-row matrix
m[,c(TRUE,FALSE,TRUE)]# logical indexing
m[cbind(c(1,2,1),3:1)]# matrix numeric index
ci <- cbind(c("a", "b", "a"), c("A", "C", "B"))
m[ci] # matrix character index
m <- m[,-1] # delete the first column of m
li[[1]] # the first element of list li
y <- list(1, 2, a = 4, 5)
y[c(3, 4)] # a list containing elements 3 and 4 of y
y$a # the element of y named a
## non-integer indices are truncated:
(i <- 3.999999999) # "4" is printed
(1:5)[i] # 3
## named atomic vectors, compare "[" and "[[" :
nx <- c(Abc = 123, pi = pi)
nx[1] ; nx["pi"] # keeps names, whereas "[[" does not:
nx[[1]] ; nx[["pi"]]
## recursive indexing into lists
z <- list(a = list(b = 9, c = "hello"), d = 1:5)
unlist(z)
z[[c(1, 2)]]
z[[c(1, 2, 1)]] # both "hello"
z[[c("a", "b")]] <- "new"
unlist(z)
## check $ and [[ for environments
e1 <- new.env()
e1$a <- 10
e1[["a"]]
e1[["b"]] <- 20
e1$b
ls(e1)
## partial matching - possibly with warning :
stopifnot(identical(li$p, pi))
op <- options(warnPartialMatchDollar = TRUE)
stopifnot( identical(li$p, pi), #-- a warning
inherits(tryCatch (li$p, warning = identity), "warning"))
## revert the warning option:
if(is.null(op[[1]])) op[[1]] <- FALSE; options(op)
```
`prod` Product of Vector Elements
----------------------------------
### Description
`prod` returns the product of all the values present in its arguments.
### Usage
```
prod(..., na.rm = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | numeric or complex or logical vectors. |
| `na.rm` | logical. Should missing values be removed? |
### Details
If `na.rm` is `FALSE` an `NA` value in any of the arguments will cause a value of `NA` to be returned, otherwise `NA` values are ignored.
This is a generic function: methods can be defined for it directly or via the `[Summary](groupgeneric)` group generic. For this to work properly, the arguments `...` should be unnamed, and dispatch is on the first argument.
Logical true values are regarded as one, false values as zero. For historical reasons, `NULL` is accepted and treated as if it were `numeric(0)`.
### Value
The product, a numeric (of type `"double"`) or complex vector of length one. **NB:** the product of an empty set is one, by definition.
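The edge cases described above can be checked directly:

```r
prod(c(TRUE, FALSE, TRUE))    # 0: logicals count as one and zero
prod(NULL)                    # 1: treated as numeric(0); the empty product
prod(2, 3, NA, na.rm = TRUE)  # 6
typeof(prod(1:5))             # "double", even for integer input
```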
### S4 methods
This is part of the S4 `[Summary](../../methods/html/s4groupgeneric)` group generic. Methods for it must use the signature `x, ..., na.rm`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<sum>`, `[cumprod](cumsum)`, `<cumsum>`.
‘[plotmath](../../grdevices/html/plotmath)’ for the use of `prod` in plot annotation.
### Examples
```
print(prod(1:7)) == print(gamma(8))
```
`Signals` Interrupting Execution of R
--------------------------------------
### Description
On receiving `SIGUSR1` **R** will save the workspace and quit. `SIGUSR2` has the same result except that the `[.Last](quit)` function and `<on.exit>` expressions will not be called.
### Usage
```
kill -USR1 pid
kill -USR2 pid
```
### Arguments
| | |
| --- | --- |
| `pid` | The process ID of the **R** process. |
### Details
The command history will also be saved if it would be at normal termination.
This is not available on Windows, and possibly on other OSes which do not support these signals.
### Warning
It is possible that one or more **R** objects will be undergoing modification at the time the signal is sent. These objects could be saved in a corrupted form.
### See Also
`[Sys.getpid](sys.getpid)` to report the process ID for future use.
`nlevels` The Number of Levels of a Factor
-------------------------------------------
### Description
Return the number of levels which its argument has.
### Usage
```
nlevels(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an object, usually a factor. |
### Details
This is usually applied to a factor, but other objects can have levels.
The actual factor levels (if they exist) can be obtained with the `<levels>` function.
### Value
The length of `<levels>(x)`, which is zero if `x` has no levels.
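For instance:

```r
nlevels(gl(2, 4))  # 2
nlevels(1:10)      # 0: a plain vector has no levels
f <- factor(c("a", "b", "a"))
nlevels(f) == length(levels(f))  # TRUE by definition
```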
### See Also
`<levels>`, `<factor>`.
### Examples
```
nlevels(gl(3, 7)) # = 3
```
`warning` Warning Messages
---------------------------
### Description
Generates a warning message that corresponds to its argument(s) and (optionally) the expression or function from which it was called.
### Usage
```
warning(..., call. = TRUE, immediate. = FALSE, noBreaks. = FALSE,
domain = NULL)
suppressWarnings(expr, classes = "warning")
```
### Arguments
| | |
| --- | --- |
| `...` | zero or more objects which can be coerced to character (and which are pasted together with no separator) or a single condition object. |
| `call.` | logical, indicating if the call should become part of the warning message. |
| `immediate.` | logical, indicating if the call should be output immediately, even if `[getOption](options)("warn") <= 0`. |
| `noBreaks.` | logical, indicating as far as possible the message should be output as a single line when `options(warn = 1)`. |
| `expr` | expression to evaluate. |
| `domain` | see `<gettext>`. If `NA`, messages will not be translated, see also the note in `<stop>`. |
| `classes` | character, indicating which classes of warnings should be suppressed. |
### Details
The result *depends* on the value of `<options>("warn")` and on handlers established in the executing code.
If a condition object is supplied it should be the only argument, and further arguments will be ignored, with a message.
`warning` signals a warning condition by (effectively) calling `signalCondition`. If there are no handlers or if all handlers return, then the value of `warn = [getOption](options)("warn")` is used to determine the appropriate action. If `warn` is negative warnings are ignored; if it is zero they are stored and printed after the top-level function has completed; if it is one they are printed as they occur and if it is 2 (or larger) warnings are turned into errors. Calling `warning(immediate. = TRUE)` turns `warn <= 0` into `warn = 1` for this call only.
If `warn` is zero (the default), a read-only variable `last.warning` is created. It contains the warnings which can be printed via a call to `<warnings>`.
Warnings will be truncated to `[getOption](options)("warning.length")` characters, default 1000, indicated by `[... truncated]`.
While the warning is being processed, a `muffleWarning` restart is available. If this restart is invoked with `invokeRestart`, then `warning` returns immediately.
An attempt is made to coerce other types of inputs to `warning` to character vectors.
`suppressWarnings` evaluates its expression in a context that ignores all warnings.
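The `muffleWarning` restart can be combined with `withCallingHandlers` to intercept a warning, act on it, and then discard it. A sketch using only base R:

```r
res <- withCallingHandlers(
  { warning("heads up"); 42 },
  warning = function(w) {
    message("handled: ", conditionMessage(w))
    invokeRestart("muffleWarning")  # warning() returns immediately
  }
)
res  # 42; no warning is left to report
```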
### Value
The warning message as `<character>` string, invisibly.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<stop>` for fatal errors, `<message>` for diagnostic messages, `<warnings>`, and `<options>` with argument `warn=`.
`<gettext>` for the mechanisms for the automated translation of messages.
### Examples
```
testit <- function() warning("testit")
testit() ## shows call
testit <- function() warning("problem in testit", call. = FALSE)
testit() ## no call
suppressWarnings(warning("testit"))
```
`dots` ..., ..1, etc used in Functions
---------------------------------------
### Description
`...` and `..1`, `..2` etc are used to refer to arguments passed down from a calling function. These (and the following) can only be used *inside* a function which has `...` among its formal arguments.
`...elt(n)` is a functional way to get `..<n>` and basically the same as `eval(str2lang(paste0("..", n)))`, just more elegant and efficient. Note that `switch(n, ...)` is very close, differing by returning `NULL` invisibly instead of an error when `n` is zero or too large.
`...length()` returns the number of expressions in `...`, and `...names()` the `<names>`. These are the same as `length(list(...))` or `names(list(...))` but without evaluating the expressions in `...` (which happens with `list(...)`).
Evaluating elements of `...` with `..1`, `..2`, `...elt(n)`, etc. propagates [visibility](invisible). This is consistent with the evaluation of named arguments which also propagates visibility.
### Usage
```
...length()
...names()
...elt(n)
```
### Arguments
| | |
| --- | --- |
| `n` | a positive integer, not larger than the number of expressions in ..., which is the same as `...length()` which is the same as `length(list(...))`, but the latter evaluates all expressions in `...`. |
### See Also
`...` and `..1`, `..2` are *reserved* words in **R**, see `[Reserved](reserved)`.
For more, see the [Introduction to R](../../../doc/manual/r-intro#The-three-dots-argument) manual for usage of these syntactic elements, and [dotsMethods](../../methods/html/dotsmethods) for their use in formal (S4) methods.
### Examples
```
tst <- function(n, ...) ...elt(n)
tst(1, pi=pi*0:1, 2:4) ## [1] 0.000000 3.141593
tst(2, pi=pi*0:1, 2:4) ## [1] 2 3 4
try(tst(1)) # -> Error about '...' not containing an element.
tst.dl <- function(x, ...) ...length()
tst.dns <- function(x, ...) ...names()
tst.dl(1:10) # 0 (because the first argument is 'x')
tst.dl(4, 5) # 1
tst.dl(4, 5, 6) # 2 namely '5, 6'
tst.dl(4, 5, 6, 7, sin(1:10), "foo"/"bar") # 5. Note: no evaluation!
tst.dns(4, foo=5, 6, bar=7, sini = sin(1:10), "foo"/"bar")
## "foo" NA "bar" "sini" NA
```
`difftime` Time Intervals / Differences
----------------------------------------
### Description
Time intervals creation, printing, and some arithmetic. The `<print>()` method calls these “time differences”.
### Usage
```
time1 - time2
difftime(time1, time2, tz,
units = c("auto", "secs", "mins", "hours",
"days", "weeks"))
as.difftime(tim, format = "%X", units = "auto", tz = "UTC")
## S3 method for class 'difftime'
format(x, ...)
## S3 method for class 'difftime'
units(x)
## S3 replacement method for class 'difftime'
units(x) <- value
## S3 method for class 'difftime'
as.double(x, units = "auto", ...)
## Group methods, notably for round(), signif(), floor(),
## ceiling(), trunc(), abs(); called directly, *not* as Math():
## S3 method for class 'difftime'
Math(x, ...)
```
### Arguments
| | |
| --- | --- |
| `time1, time2` | [date-time](datetimeclasses) or [date](dates) objects. |
| `tz` | an optional [time zone](timezones) specification to be used for the conversion, mainly for `"POSIXlt"` objects. |
| `units` | character string. Units in which the results are desired. Can be abbreviated. |
| `value` | character string. Like `units`, except that abbreviations are not allowed. |
| `tim` | character string or numeric value specifying a time interval. |
| `format` | character specifying the format of `tim`: see `<strptime>`. The default is a locale-specific time format. |
| `x` | an object inheriting from class `"difftime"`. |
| `...` | arguments to be passed to or from other methods. |
### Details
Function `difftime` calculates a difference of two date/time objects and returns an object of class `"difftime"` with an attribute indicating the units. The `[Math](groupgeneric)` group method provides `<round>`, `[signif](round)`, `[floor](round)`, `[ceiling](round)`, `[trunc](round)`, `[abs](mathfun)`, and `<sign>` methods for objects of this class, and there are methods for the group-generic (see `[Ops](groupgeneric)`) logical and arithmetic operations.
If `units = "auto"`, a suitable set of units is chosen, the largest possible (excluding `"weeks"`) in which all the absolute differences are greater than one.
Subtraction of date-time objects gives an object of this class, by calling `difftime` with `units = "auto"`. Alternatively, `as.difftime()` works on character-coded or numeric time intervals; in the latter case, units must be specified, and `format` has no effect.
Limited arithmetic is available on `"difftime"` objects: they can be added or subtracted, and multiplied or divided by a numeric vector. In addition, adding or subtracting a numeric vector by a `"difftime"` object implicitly converts the numeric vector to a `"difftime"` object with the same units as the `"difftime"` object. There are methods for `<mean>` and `<sum>` (via the `[Summary](groupgeneric)` group generic), and `<diff>` via `[diff.default](diff)` building on the `"difftime"` method for arithmetic, notably `-`.
The units of a `"difftime"` object can be extracted by the `units` function, which also has a replacement form. If the units are changed, the numerical value is scaled accordingly. The replacement version keeps attributes such as names and dimensions.
Note that `units = "days"` means a period of 24 hours, hence takes no account of Daylight Savings Time. Differences in objects of class `"[Date](dates)"` are computed as if in the UTC time zone.
The `as.double` method returns the numeric value expressed in the specified units. Using `units = "auto"` means the units of the object.
The `format` method simply formats the numeric value and appends the units as a text string.
### Note
Units such as `"months"` are not possible as they are not of constant length. To create intervals of months, quarters or years use `[seq.Date](seq.date)` or `[seq.POSIXt](seq.posixt)`.
### See Also
`[DateTimeClasses](datetimeclasses)`.
### Examples
```
(z <- Sys.time() - 3600)
Sys.time() - z # just over 3600 seconds.
## time interval between release days of R 1.2.2 and 1.2.3.
ISOdate(2001, 4, 26) - ISOdate(2001, 2, 26)
as.difftime(c("0:3:20", "11:23:15"))
as.difftime(c("3:20", "23:15", "2:"), format = "%H:%M") # 3rd gives NA
(z <- as.difftime(c(0,30,60), units = "mins"))
as.numeric(z, units = "secs")
as.numeric(z, units = "hours")
format(z)
```
`gctorture` Torture Garbage Collector
--------------------------------------
### Description
Provokes garbage collection on (nearly) every memory allocation. Intended to ferret out memory protection bugs. Also makes **R** run *very* slowly, unfortunately.
### Usage
```
gctorture(on = TRUE)
gctorture2(step, wait = step, inhibit_release = FALSE)
```
### Arguments
| | |
| --- | --- |
| `on` | logical; turning it on/off. |
| `step` | integer; run GC every `step` allocations; `step = 0` turns the GC torture off. |
| `wait` | integer; number of allocations to wait before starting GC torture. |
| `inhibit_release` | logical; do not release free objects for re-use: use with caution. |
### Details
Calling `gctorture(TRUE)` instructs the memory manager to force a full GC on every allocation. `gctorture2` provides a more refined interface that allows the start of the GC torture to be deferred and also gives the option of running a GC only every `step` allocations.
The third argument to `gctorture2` is only used if R has been configured with a strict write barrier enabled. When this is the case all garbage collections are full collections, and the memory manager marks free nodes and enables checks in many situations that signal an error when a free node is used. This can help greatly in isolating unprotected values in C code. It does not detect the case where a node becomes free and is reallocated. The `inhibit_release` argument can be used to prevent such reallocation. This will cause memory to grow and should be used with caution and in conjunction with operating system facilities to monitor and limit process memory use.
`gctorture2` can also be invoked via environment variables at the start of the **R** session. R\_GCTORTURE corresponds to the `step` argument, R\_GCTORTURE\_WAIT to `wait`, and R\_GCTORTURE\_INHIBIT\_RELEASE to `inhibit_release`.
### Value
Previous value of first argument.
### Author(s)
Peter Dalgaard and Luke Tierney
`scan` Read Data Values
------------------------
### Description
Read data into a vector or list from the console or file.
### Usage
```
scan(file = "", what = double(), nmax = -1, n = -1, sep = "",
quote = if(identical(sep, "\n")) "" else "'\"", dec = ".",
skip = 0, nlines = 0, na.strings = "NA",
flush = FALSE, fill = FALSE, strip.white = FALSE,
quiet = FALSE, blank.lines.skip = TRUE, multi.line = TRUE,
comment.char = "", allowEscapes = FALSE,
fileEncoding = "", encoding = "unknown", text, skipNul = FALSE)
```
### Arguments
| | |
| --- | --- |
| `file` | the name of a file to read data values from. If the specified file is `""`, then input is taken from the keyboard (or whatever `[stdin](showconnections)()` reads if input is redirected or **R** is embedded). (In this case input can be terminated by a blank line or an EOF signal, Ctrl-D on Unix and Ctrl-Z on Windows.) Otherwise, the file name is interpreted *relative* to the current working directory (given by `<getwd>()`), unless it specifies an *absolute* path. Tilde-expansion is performed where supported. When running **R** from a script, `file = "stdin"` can be used to refer to the process's `stdin` file stream. This can be a compressed file (see `[file](connections)`). Alternatively, `file` can be a `[connection](connections)`, which will be opened if necessary, and if so closed at the end of the function call. Whatever mode the connection is opened in, any of LF, CRLF or CR will be accepted as the EOL marker for a line and so will match `sep = "\n"`. `file` can also be a complete URL. (For the supported URL schemes, see the ‘URLs’ section of the help for `[url](connections)`.) To read a data file not in the current encoding (for example a Latin-1 file in a UTF-8 locale or conversely) use a `[file](connections)` connection setting its `encoding` argument (or `scan`'s `fileEncoding` argument). |
| `what` | the [type](typeof) of `what` gives the type of data to be read. (Here ‘type’ is used in the sense of `<typeof>`.) The supported types are `logical`, `integer`, `numeric`, `complex`, `character`, `raw` and `<list>`. If `what` is a list, it is assumed that the lines of the data file are records each containing `length(what)` items (‘fields’) and the list components should have elements which are one of the first six ([atomic](vector)) types listed or `NULL`, see section ‘Details’ below. |
| `nmax` | the maximum number of data values to be read, or if `what` is a list, the maximum number of records to be read. If omitted or not positive or an invalid value for an integer (and `nlines` is not set to a positive value), `scan` will read to the end of `file`. |
| `n` | integer: the maximum number of data values to be read, defaulting to no limit. Invalid values will be ignored. |
| `sep` | by default, scan expects to read ‘white-space’ delimited input fields. Alternatively, `sep` can be used to specify a character which delimits fields. A field is always delimited by an end-of-line marker unless it is quoted. If specified this should be the empty character string (the default) or `NULL` or a character string containing just one single-byte character. |
| `quote` | the set of quoting characters as a single character string or `NULL`. In a multibyte locale the quoting characters must be ASCII (single-byte). |
| `dec` | decimal point character. This should be a character string containing just one single-byte character. (`NULL` and a zero-length character vector are also accepted, and taken as the default.) |
| `skip` | the number of lines of the input file to skip before beginning to read data values. |
| `nlines` | if positive, the maximum number of lines of data to be read. |
| `na.strings` | character vector. Elements of this vector are to be interpreted as missing (`[NA](na)`) values. Blank fields are also considered to be missing values in logical, integer, numeric and complex fields. Note that the test happens *after* white space is stripped from the input, so `na.strings` values may need their own white space stripped in advance. |
| `flush` | logical: if `TRUE`, `scan` will flush to the end of the line after reading the last of the fields requested. This allows putting comments after the last field, but precludes putting more than one record on a line. |
| `fill` | logical: if `TRUE`, `scan` will implicitly add empty fields to any lines with fewer fields than implied by `what`. |
| `strip.white` | vector of logical value(s) corresponding to items in the `what` argument. It is used only when `sep` has been specified, and allows the stripping of leading and trailing ‘white space’ from `character` fields (`numeric` fields are always stripped). Note: white space inside quoted strings is not stripped. If `strip.white` is of length 1, it applies to all fields; otherwise, if `strip.white[i]` is `TRUE` *and* the `i`-th field is of mode character (because `what[i]` is) then the leading and trailing unquoted white space from field `i` is stripped. |
| `quiet` | logical: if `FALSE` (default), scan() will print a line, saying how many items have been read. |
| `blank.lines.skip` | logical: if `TRUE` blank lines in the input are ignored, except when counting `skip` and `nlines`. |
| `multi.line` | logical. Only used if `what` is a list. If `FALSE`, all of a record must appear on one line (but more than one record can appear on a single line). Note that using `fill = TRUE` implies that a record will be terminated at the end of a line. |
| `comment.char` | character: a character vector of length one containing a single character or an empty string. Use `""` to turn off the interpretation of comments altogether (the default). |
| `allowEscapes` | logical. Should C-style escapes such as \n be processed or read verbatim (the default)? Note that if not within quotes these could be interpreted as a delimiter (but not as a comment character). The escapes which are interpreted are the control characters \a, \b, \f, \n, \r, \t, \v and octal and hexadecimal representations like \040 and \0x2A. Any other escaped character is treated as itself, including backslash. Note that Unicode escapes (starting \u or \U: see [Quotes](quotes)) are never processed. |
| `fileEncoding` | character string: if non-empty declares the encoding used on a file (not a connection nor the keyboard) so the character data can be re-encoded. See the ‘Encoding’ section of the help for `[file](connections)`, and the ‘R Data Import/Export Manual’. |
| `encoding` | encoding to be assumed for input strings. If the value is `"latin1"` or `"UTF-8"` it is used to mark character strings as known to be in Latin-1 or UTF-8: it is not used to re-encode the input (see `fileEncoding`). See also ‘Details’. |
| `text` | character string: if `file` is not supplied and this is, then data are read from the value of `text` via a text connection. |
| `skipNul` | logical: should nuls be skipped when reading character fields? |
### Details
The value of `what` can be a list of types, in which case `scan` returns a list of vectors with the types given by the types of the elements in `what`. This provides a way of reading columnar data. If any of the types is `NULL`, the corresponding field is skipped (but a `NULL` component appears in the result).
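A minimal columnar read, using a temporary file so the sketch is self-contained:

```r
tf <- tempfile()
writeLines(c("alice 30 TRUE", "bob 25 FALSE"), tf)
rec <- scan(tf, quiet = TRUE,
            what = list(name = character(), age = integer(), ok = logical()))
rec$name  # "alice" "bob"
rec$age   # 30 25
unlink(tf)
```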
The type of `what` or its components can be one of the six atomic vector types or `NULL` (see `[is.atomic](is.recursive)`).
‘White space’ is defined for the purposes of this function as one or more contiguous characters from the set space, horizontal tab, carriage return and line feed. It does not include form feed nor vertical tab, but in Latin-1 and Windows 8-bit locales (but not UTF-8) 'space' includes the non-breaking space "\xa0".
Empty numeric fields are always regarded as missing values. Empty character fields are scanned as empty character vectors, unless `na.strings` contains `""` when they are regarded as missing values.
The allowed input for a numeric field is optional whitespace followed by either `NA` or an optional sign followed by a decimal or hexadecimal constant (see [NumericConstants](numericconstants)), or `NaN`, `Inf` or `infinity` (ignoring case). Out-of-range values are recorded as `Inf`, `-Inf` or `0`.
For an integer field the allowed input is optional whitespace, followed by either `NA` or an optional sign and one or more digits (0-9): all out-of-range values are converted to `NA_integer_`.
If `sep` is the default (`""`), the character \ in a quoted string escapes the following character, so quotes may be included in the string by escaping them.
If `sep` is non-default, the fields may be quoted in the style of ‘.csv’ files where separators inside quotes (`''` or `""`) are ignored and quotes may be put inside strings by doubling them. However, if `sep = "\n"` it is assumed by default that one wants to read entire lines verbatim.
Quoting is only interpreted in character fields and in `NULL` fields (which might be skipping character fields).
Note that since `sep` is a separator and not a terminator, reading a file by `scan("foo", sep = "\n", blank.lines.skip = FALSE)` will give an empty final line if the file ends in a linefeed and not if it does not. This might not be what you expected; see also `[readLines](readlines)`.
If `comment.char` occurs (except inside a quoted character field), it signals that the rest of the line should be regarded as a comment and be discarded. Lines beginning with a comment character (possibly after white space with the default separator) are treated as blank lines.
There is a line-length limit of 4095 bytes when reading from the console (which may impose a lower limit: see ‘An Introduction to R’).
There is a check for a user interrupt every 1000 lines if `what` is a list, otherwise every 10000 items.
If `file` is a character string and `fileEncoding` is non-default, or if it is a not-already-open [connection](connections) with a non-default `encoding` argument, the text is converted to UTF-8 and declared as such (and the `encoding` argument to `scan` is ignored). See the examples of `[readLines](readlines)`.
Embedded nuls in the input stream will terminate the field currently being read, with a warning once per call to `scan`. Setting `skipNul = TRUE` causes them to be ignored.
### Value
if `what` is a list, a list of the same length and same names (as any) as `what`.
Otherwise, a vector of the type of `what`.
Character strings in the result will have a declared encoding if `encoding` is `"latin1"` or `"UTF-8"`.
### Note
The default for `multi.line` differs from S. To read one record per line, use `flush = TRUE` and `multi.line = FALSE`. (Note that quoted character strings can still include embedded newlines.)
If number of items is not specified, the internal mechanism re-allocates memory in powers of two and so could use up to three times as much memory as needed. (It needs both old and new copies.) If you can, specify either `n` or `nmax` whenever inputting a large vector, and `nmax` or `nlines` when inputting a large list.
Using `scan` on an open connection to read partial lines can lose chars: use an explicit separator to avoid this.
Having `nul` bytes in fields (including \0 if `allowEscapes = TRUE`) may lead to interpretation of the field being terminated at the `nul`. They are not normally present in text files – see `[readBin](readbin)`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[read.table](../../utils/html/read.table)` for more user-friendly reading of data matrices; `[readLines](readlines)` to read a file a line at a time. `<write>`.
`Quotes` for the details of C-style escape sequences.
`[readChar](readchar)` and `[readBin](readbin)` to read fixed or variable length character strings or binary representations of numbers a few at a time from a connection.
### Examples
```
cat("TITLE extra line", "2 3 5 7", "11 13 17", file = "ex.data", sep = "\n")
pp <- scan("ex.data", skip = 1, quiet = TRUE)
scan("ex.data", skip = 1)
scan("ex.data", skip = 1, nlines = 1) # only 1 line after the skipped one
scan("ex.data", what = list("","","")) # flush is F -> read "7"
scan("ex.data", what = list("","",""), flush = TRUE)
unlink("ex.data") # tidy up
## "inline" usage
scan(text = "1 2 3")
```
`S3method` Register S3 Methods
-------------------------------
### Description
Register S3 methods in R scripts.
### Usage
```
.S3method(generic, class, method)
```
### Arguments
| | |
| --- | --- |
| `generic` | a character string naming an S3 generic function. |
| `class` | a character string naming an S3 class. |
| `method` | a character string or function giving the S3 method to be registered. If not given, the function named `generic.class` is used. |
### Details
This function should only be used in R scripts: for package code, one should use the corresponding S3method ‘NAMESPACE’ directive.
### Examples
```
## Create a generic function and register a method for objects
## inheriting from class 'cls':
gen <- function(x) UseMethod("gen")
met <- function(x) writeLines("Hello world.")
.S3method("gen", "cls", met)
## Create an object inheriting from class 'cls', and call the
## generic on it:
x <- structure(123, class = "cls")
gen(x)
```
`basename` Manipulate File Paths
---------------------------------
### Description
`basename` removes all of the path up to and including the last path separator (if any).
`dirname` returns the part of the `path` up to but excluding the last path separator, or `"."` if there is no path separator.
### Usage
```
basename(path)
dirname(path)
```
### Arguments
| | |
| --- | --- |
| `path` | character vector, containing path names. |
### Details
[tilde expansion](path.expand) of the path is done except on Windows.
Trailing path separators are removed before dissecting the path, and for `dirname` any trailing file separators are removed from the result.
### Value
A character vector of the same length as `path`. A zero-length input will give a zero-length output with no error.
Paths not containing any separators are taken to be in the current directory, so `dirname` returns `"."`.
If an element of `path` is `[NA](na)`, so is the result.
`""` is not a valid pathname, but is returned unchanged.
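A few illustrative calls, consistent with the rules above (the results, shown as comments, assume a Unix-alike):

```r
basename("/usr/local/bin/R")  # "R"
dirname("/usr/local/bin/R")   # "/usr/local/bin"
dirname("R")                  # "." : no separator present
dirname("/usr/local/")        # "/usr" : trailing separator removed first
basename(c(NA, ""))           # NA "" : NA propagates, "" is unchanged
```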
### Behaviour on Windows
On Windows this will accept either `\` or `/` as the path separator, but `dirname` will return a path using `/` (except if on a network share, when the leading `\\` will be preserved). Expect these only to be able to handle complete paths, and not for example just a network share or a drive.
UTF-8-encoded path names not valid in the current locale can be used.
### Note
These are not wrappers for the POSIX system functions of the same names: in particular they do **not** have the special handling of the path `"/"` and of returning `"."` for empty strings.
### See Also
`<file.path>`, `<path.expand>`.
### Examples
```
basename(file.path("","p1","p2","p3", c("file1", "file2")))
dirname (file.path("","p1","p2","p3", "filename"))
```
r None
`socketSelect` Wait on Socket Connections
------------------------------------------
### Description
Waits for the first of several socket connections and server sockets to become available.
### Usage
```
socketSelect(socklist, write = FALSE, timeout = NULL)
```
### Arguments
| | |
| --- | --- |
| `socklist` | list of open socket connections and server sockets. |
| `write` | logical. If `TRUE` wait for corresponding socket to become available for writing; otherwise wait for it to become available for reading or for accepting an incoming connection (server sockets). |
| `timeout` | numeric or `NULL`. Time in seconds to wait for a socket to become available; `NULL` means wait indefinitely. |
### Details
The values in `write` are recycled if necessary to make up a logical vector the same length as `socklist`. Socket connections can appear more than once in `socklist`; this can be useful if you want to determine whether a socket is available for reading or writing.
### Value
Logical the same length as `socklist` indicating whether the corresponding socket connection is available for output or input, depending on the corresponding value of `write`. Server sockets can only become available for input.
### Examples
```
## Not run:
## test whether socket connection s is available for writing or reading
socketSelect(list(s, s), c(TRUE, FALSE), timeout = 0)
## End(Not run)
```
r None
`load` Reload Saved Datasets
-----------------------------
### Description
Reload datasets written with the function `save`.
### Usage
```
load(file, envir = parent.frame(), verbose = FALSE)
```
### Arguments
| | |
| --- | --- |
| `file` | a (readable binary-mode) [connection](connections) or a character string giving the name of the file to load (when [tilde expansion](path.expand) is done). |
| `envir` | the environment where the data should be loaded. |
| `verbose` | should item names be printed during loading? |
### Details
`load` can load **R** objects saved in the current or any earlier format. It can read a compressed file (see `<save>`) directly from a file or from a suitable connection (including a call to `[url](connections)`).
A not-open connection will be opened in mode `"rb"` and closed after use. Any connection other than a `[gzfile](connections)` or `<gzcon>` connection will be wrapped in `<gzcon>` to allow compressed saves to be handled: note that this leaves the connection in an altered state (in particular, binary-only), and that it needs to be closed explicitly (it will not be garbage-collected).
Only **R** objects saved in the current format (used since **R** 1.4.0) can be read from a connection. If no input is available on a connection a warning will be given, but any input not in the current format will result in an error.
Loading from an earlier version will give a warning about the ‘magic number’: magic numbers `1971:1977` are from **R** < 0.99.0, and `RD[ABX]1` from **R** 0.99.0 to **R** 1.3.1. These are all obsolete, and you are strongly recommended to re-save such files in a current format.
The `verbose` argument is mainly intended for debugging. If it is `TRUE`, then as objects from the file are loaded, their names will be printed to the console. If `verbose` is set to an integer value greater than one, additional names corresponding to attributes and other parts of individual objects will also be printed. Larger values will print names to a greater depth.
Objects can be saved with references to namespaces, usually as part of the environment of a function or formula. Such objects can be loaded even if the namespace is not available: it is replaced by a reference to the global environment with a warning. The warning identifies the first object with such a reference (but there may be more than one).
### Value
A character vector of the names of objects created, invisibly.
### Warning
Saved **R** objects are binary files, even those saved with `ascii = TRUE`, so ensure that they are transferred without conversion of end of line markers. `load` tries to detect such a conversion and gives an informative error message.
`load(<file>)` replaces all existing objects with the same names in the current environment (typically your workspace, `[.GlobalEnv](environment)`) and hence potentially overwrites important data. It is considerably safer to use `envir =` to load into a different environment, or to `<attach>(file)` which `load()`s into a new entry in the `<search>` path.
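A minimal sketch of the safer approach, loading into a scratch environment (the file and object names here are arbitrary):

```r
tmp <- tempfile(fileext = ".rda")
x <- 1:5
save(x, file = tmp)
rm(x)
e <- new.env()
loaded <- load(tmp, envir = e)  # returns the object names, invisibly
loaded   # "x"
e$x      # 1:5, without touching the global workspace
unlink(tmp)
```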
### See Also
`<save>`, `[download.file](../../utils/html/download.file)`; further `<attach>` as wrapper for `load()`.
For other interfaces to the underlying serialization format, see `[unserialize](serialize)` and `[readRDS](readrds)`.
### Examples
```
## save all data
xx <- pi # to ensure there is some data
save(list = ls(all.names = TRUE), file= "all.rda")
rm(xx)
## restore the saved values to the current environment
local({
load("all.rda")
ls()
})
xx <- exp(1:3)
## restore the saved values to the user's workspace
load("all.rda") ## which is here *equivalent* to
## load("all.rda", .GlobalEnv)
## This however annihilates all objects in .GlobalEnv with the same names !
xx # no longer exp(1:3)
rm(xx)
attach("all.rda") # safer and will warn about masked objects w/ same name in .GlobalEnv
ls(pos = 2)
## also typically need to cleanup the search path:
detach("file:all.rda")
## clean up (the example):
unlink("all.rda")
## Not run:
con <- url("http://some.where.net/R/data/example.rda")
## print the value to see what objects were created.
print(load(con))
close(con) # url() always opens the connection
## End(Not run)
```
r None
`Control` Control Flow
-----------------------
### Description
These are the basic control-flow constructs of the **R** language. They function in much the same way as control statements in any Algol-like language. They are all <reserved> words.
### Usage
```
if(cond) expr
if(cond) cons.expr else alt.expr
for(var in seq) expr
while(cond) expr
repeat expr
break
next
```
### Arguments
| | |
| --- | --- |
| `cond` | A length-one logical vector that is not `NA`. Conditions of length greater than one are currently accepted with a warning, but only the first element is used. An error is signalled instead when the environment variable \_R\_CHECK\_LENGTH\_1\_CONDITION\_ is set to true. Other types are coerced to logical if possible, ignoring any class. |
| `var` | A syntactical name for a variable. |
| `seq` | An expression evaluating to a vector (including a list and an <expression>) or to a [pairlist](list) or `NULL`. A factor value will be coerced to a character vector. As from **R** 4.0.0 this can be a long vector. |
| `expr, cons.expr, alt.expr` | An *expression* in a formal sense. This is either a simple expression or a so-called *compound expression*, usually of the form `{ expr1 ; expr2 }`. |
### Details
`break` breaks out of a `for`, `while` or `repeat` loop; control is transferred to the first statement outside the inner-most loop. `next` halts the processing of the current iteration and advances the looping index. Both `break` and `next` apply only to the innermost of nested loops.
Note that it is a common mistake to forget to put braces (`{ .. }`) around your statements, e.g., after `if(..)` or `for(....)`. In particular, you should not have a newline between `}` and `else` to avoid a syntax error in entering an `if ... else` construct at the keyboard or via `source`. For that reason, one (somewhat extreme) attitude of defensive programming is to always use braces, e.g., for `if` clauses.
The `seq` in a `for` loop is evaluated at the start of the loop; changing it subsequently does not affect the loop. If `seq` has length zero the body of the loop is skipped. Otherwise the variable `var` is assigned in turn the value of each element of `seq`. You can assign to `var` within the body of the loop, but this will not affect the next iteration. When the loop terminates, `var` remains as a variable containing its latest value.
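These evaluation rules can be checked directly; a small sketch:

```r
sq <- 1:3
for (i in sq) {
  sq <- 10:20   # re-assigning the sequence does not change the iterations
  i <- i * 100  # nor does assigning to 'var' affect the next iteration
}
i   # 300 : 'var' keeps the value it last had inside the loop
sq  # 10:20 as assigned, but the loop still ran over 1:3
```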
### Value
`if` returns the value of the expression evaluated, or `NULL` invisibly if none was (which may happen if there is no `else`).
`for`, `while` and `repeat` return `NULL` invisibly. `for` sets `var` to the last used element of `seq`, or to `NULL` if it was of length zero.
`break` and `next` do not return a value as they transfer control within the loop.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[Syntax](syntax)` for the basic **R** syntax and operators, `[Paren](paren)` for parentheses and braces.
`<ifelse>`, `<switch>` for other ways to control flow.
### Examples
```
for(i in 1:5) print(1:i)
for(n in c(2,5,10,20,50)) {
x <- stats::rnorm(n)
cat(n, ": ", sum(x^2), "\n", sep = "")
}
f <- factor(sample(letters[1:5], 10, replace = TRUE))
for(i in unique(f)) print(i)
```
r None
`transform` Transform an Object, for Example a Data Frame
----------------------------------------------------------
### Description
`transform` is a generic function, which—at least currently—only does anything useful with data frames. `transform.default` converts its first argument to a data frame if possible and calls `transform.data.frame`.
### Usage
```
transform(`_data`, ...)
```
### Arguments
| | |
| --- | --- |
| `_data` | The object to be transformed |
| `...` | Further arguments of the form `tag=value` |
### Details
The `...` arguments to `transform.data.frame` are tagged vector expressions, which are evaluated in the data frame `_data`. The tags are matched against `names(_data)`: for those that match, the values replace the corresponding variables in `_data`, and the others are appended to `_data`.
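For example (all tagged expressions see the *original* columns of `_data`, so `z` below is computed from the unnegated `x`):

```r
d <- data.frame(x = 1:3, y = 4:6)
transform(d, x = -x, z = x + y)
# 'x' matches a name in d and is replaced; 'z' does not and is appended.
# z is 5 7 9, computed from the original x, not from -x.
```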
### Value
The modified value of `_data`.
### Warning
This is a convenience function intended for use interactively. For programming it is better to use the standard subsetting arithmetic functions, and in particular the non-standard evaluation of argument `transform` can have unanticipated consequences.
### Note
If some of the values are not vectors of the appropriate length, you deserve whatever you get!
### Author(s)
Peter Dalgaard
### See Also
`[within](with)` for a more flexible approach, `<subset>`, `<list>`, `<data.frame>`
### Examples
```
transform(airquality, Ozone = -Ozone)
transform(airquality, new = -Ozone, Temp = (Temp-32)/1.8)
attach(airquality)
transform(Ozone, logOzone = log(Ozone)) # marginally interesting ...
detach(airquality)
```
r None
`hexmode` Display Numbers in Hexadecimal
-----------------------------------------
### Description
Convert or print integers in hexadecimal format, with as many digits as are needed to display the largest, using leading zeroes as necessary.
### Usage
```
as.hexmode(x)
## S3 method for class 'hexmode'
as.character(x, ...)
## S3 method for class 'hexmode'
format(x, width = NULL, upper.case = FALSE, ...)
## S3 method for class 'hexmode'
print(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | An object, for the methods inheriting from class `"hexmode"`. |
| `width` | `NULL` or a positive integer specifying the minimum field width to be used, with padding by leading zeroes. |
| `upper.case` | a logical indicating whether to use upper-case letters or lower-case letters (default). |
| `...` | further arguments passed to or from other methods. |
### Details
Class `"hexmode"` consists of integer vectors with that class attribute, used merely to ensure that they are printed in hex.
If `width = NULL` (the default), the output is padded with leading zeroes to the smallest width needed for all the non-missing elements.
`as.hexmode` can convert integers (of [type](typeof) `"integer"` or `"double"`) and character vectors whose elements contain only `0-9`, `a-f`, `A-F` (or are `NA`) to class `"hexmode"`.
There is a `[!](logic)` method and methods for `[|](logic)` and `[&](logic)`: these recycle their arguments to the length of the longer and then apply the operators bitwise to each element.
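A brief sketch of the formatting arguments and the bitwise operators:

```r
format(as.hexmode(c(15, 255)), width = 4)   # "000f" "00ff"
format(as.hexmode(255), upper.case = TRUE)  # "FF"
as.hexmode(12) & as.hexmode(10)  # bitwise AND: 1100 & 1010 -> "8"
as.hexmode(12) | as.hexmode(10)  # bitwise OR : 1100 | 1010 -> "e"
```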
### See Also
`<octmode>`, `<sprintf>` for other options in converting integers to hex, `<strtoi>` to convert hex strings to integers.
### Examples
```
i <- as.hexmode("7fffffff")
i; class(i)
identical(as.integer(i), .Machine$integer.max)
hm <- as.hexmode(c(NA, 1)); hm
as.integer(hm)
```
r None
`gzcon` (De)compress I/O Through Connections
---------------------------------------------
### Description
`gzcon` provides a modified connection that wraps an existing connection, and decompresses reads or compresses writes through that connection. Standard `gzip` headers are assumed.
### Usage
```
gzcon(con, level = 6, allowNonCompressed = TRUE, text = FALSE)
```
### Arguments
| | |
| --- | --- |
| `con` | a connection. |
| `level` | integer between 0 and 9, the compression level when writing. |
| `allowNonCompressed` | logical. When reading, should non-compressed input be allowed? |
| `text` | logical. Should the connection be text-oriented? This is distinct from the mode of the connection (must always be binary). If `TRUE`, `[pushBack](pushback)` works on the connection, otherwise `[readBin](readbin)` and friends apply. |
### Details
If `con` is open then the modified connection is opened. Closing the wrapper connection will also close the underlying connection.
Reading from a connection which does not supply a `gzip` magic header is equivalent to reading from the original connection if `allowNonCompressed` is true, otherwise an error.
Compressed output will contain embedded NUL bytes, and so `con` is not permitted to be a `[textConnection](textconnections)` opened with `open = "w"`. Use a writable `[rawConnection](rawconnection)` to compress data into a variable.
The original connection becomes unusable: any object pointing to it will now refer to the modified connection. For this reason, the new connection needs to be closed explicitly.
### Value
An object inheriting from class `"connection"`. This is the same connection *number* as supplied, but with a modified internal structure. It has binary mode.
### See Also
`[gzfile](connections)`
### Examples
```
## Uncompress a data file from a URL
z <- gzcon(url("https://www.stats.ox.ac.uk/pub/datasets/csb/ch12.dat.gz"))
# read.table can only read from a text-mode connection.
raw <- textConnection(readLines(z))
close(z)
dat <- read.table(raw)
close(raw)
dat[1:4, ]
## gzfile and gzcon can inter-work.
## Of course here one would use gzfile, but file() can be replaced by
## any other connection generator.
zzfil <- tempfile(fileext = ".gz")
zz <- gzfile(zzfil, "w")
cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = zz, sep = "\n")
close(zz)
readLines(zz <- gzcon(file(zzfil, "rb")))
close(zz)
unlink(zzfil)
zzfil2 <- tempfile(fileext = ".gz")
zz <- gzcon(file(zzfil2, "wb"))
cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = zz, sep = "\n")
close(zz)
readLines(zz <- gzfile(zzfil2))
close(zz)
unlink(zzfil2)
```
r None
`Round` Rounding of Numbers
----------------------------
### Description
`ceiling` takes a single numeric argument `x` and returns a numeric vector containing the smallest integers not less than the corresponding elements of `x`.
`floor` takes a single numeric argument `x` and returns a numeric vector containing the largest integers not greater than the corresponding elements of `x`.
`trunc` takes a single numeric argument `x` and returns a numeric vector containing the integers formed by truncating the values in `x` toward `0`.
`round` rounds the values in its first argument to the specified number of decimal places (default 0). See ‘Details’ about “round to even” when rounding off a 5.
`signif` rounds the values in its first argument to the specified number of significant digits. Hence, for `numeric` `x`, `signif(x, dig)` is the same as `round(x, dig - ceiling(log10(abs(x))))`. For `<complex>` `x`, this is not the case, see the ‘Details’.
### Usage
```
ceiling(x)
floor(x)
trunc(x, ...)
round(x, digits = 0)
signif(x, digits = 6)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector. Or, for `round` and `signif`, a complex vector. |
| `digits` | integer indicating the number of decimal places (`round`) or significant digits (`signif`) to be used. Negative values are allowed (see ‘Details’). |
| `...` | arguments to be passed to methods. |
### Details
These are generic functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic.
Note that for rounding off a 5, the IEC 60559 standard (see also ‘IEEE 754’) is expected to be used, ‘*go to the even digit*’. Therefore `round(0.5)` is `0` and `round(-1.5)` is `-2`. However, this is dependent on OS services and on representation error (since e.g. `0.15` is not represented exactly, the rounding rule applies to the represented number and not to the printed number, and so `round(0.15, 1)` could be either `0.1` or `0.2`).
Rounding to a negative number of digits means rounding to a power of ten, so for example `round(x, digits = -2)` rounds to the nearest hundred.
For `signif` the recognized values of `digits` are `1...22`, and non-missing values are rounded to the nearest integer in that range. Complex numbers are rounded to retain the specified number of digits in the larger of the components. Each element of the vector is rounded individually, unlike printing.
These are all primitive functions.
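For instance, with negative `digits` and with `signif` (a ties-to-even case is included; the values here are exactly representable, so the rounding rule applies cleanly):

```r
round(123456.789, -3)   # 123000 : round to the nearest thousand
round(55, -1)           # 60 : the tie 5.5 goes to the even digit
signif(123456.789, 3)   # 123000
signif(0.000123456, 2)  # 0.00012
```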
### S4 methods
These are all (internally) S4 generic.
`ceiling`, `floor` and `trunc` are members of the `[Math](../../methods/html/s4groupgeneric)` group generic. As an S4 generic, `trunc` has only one argument.
`round` and `signif` are members of the `[Math2](../../methods/html/s4groupgeneric)` group generic.
### Warning
The realities of computer arithmetic can cause unexpected results, especially with `floor` and `ceiling`. For example, we ‘know’ that `floor(log(x, base = 8))` for `x = 8` is `1`, but `0` has been seen on an **R** platform. It is normally necessary to use a tolerance.
Rounding to decimal digits in binary arithmetic is non-trivial (when `digits != 0`) and may be surprising. Be aware that most decimal fractions are *not* exactly representable in binary double precision. In **R** 4.0.0, the algorithm for `round(x, d)`, for *d > 0*, has been improved to *measure* and round “to nearest even”, contrary to earlier versions of **R** (or also to `<sprintf>()` or `<format>()` based rounding).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
The ISO/IEC/IEEE 60559:2011 standard is available for money from <https://www.iso.org>.
The IEEE 754:2008 standard is more openly documented, e.g, at <https://en.wikipedia.org/wiki/IEEE_754>.
### See Also
`[as.integer](integer)`. Package [round](https://CRAN.R-project.org/package=round)'s `[roundX](../../round/html/roundx)()` for several versions or implementations of rounding, including some previous and the current **R** version (as `version = "3d.C"`).
### Examples
```
round(.5 + -2:4) # IEEE / IEC rounding: -2 0 0 2 2 4 4
## (this is *good* behaviour -- do *NOT* report it as bug !)
( x1 <- seq(-2, 4, by = .5) )
round(x1) #-- IEEE / IEC rounding !
x1[trunc(x1) != floor(x1)]
x1[round(x1) != floor(x1 + .5)]
(non.int <- ceiling(x1) != floor(x1))
x2 <- pi * 100^(-1:3)
round(x2, 3)
signif(x2, 3)
```
r None
`round.POSIXt` Round / Truncate Date-Time Objects
--------------------------------------------------
### Description
Round or truncate date-time objects.
### Usage
```
## S3 method for class 'POSIXt'
round(x,
units = c("secs", "mins", "hours", "days", "months", "years"))
## S3 method for class 'POSIXt'
trunc(x,
units = c("secs", "mins", "hours", "days", "months", "years"),
...)
## S3 method for class 'Date'
round(x, ...)
## S3 method for class 'Date'
trunc(x, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object inheriting from `"POSIXt"` or `"Date"`. |
| `units` | one of the units listed. Can be abbreviated. |
| `...` | arguments to be passed to or from other methods, notably `digits` for `round`. |
### Details
The time is rounded or truncated to the second, minute, hour, day, month or year. Time zones are only relevant to days or more, when midnight in the current [time zone](timezones) is used.
The methods for class `"Date"` are of little use except to remove fractional days.
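A small sketch in UTC (so that the result does not depend on the local time zone):

```r
x <- as.POSIXct("2004-07-14 15:27:48", tz = "UTC")
trunc(x, "mins")   # 2004-07-14 15:27:00 UTC
round(x, "hours")  # 2004-07-14 15:00:00 UTC : 27:48 is below the half hour
round(x, "days")   # 2004-07-15 UTC : 15:27 is past midday
```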
### Value
An object of class `"POSIXlt"` or `"Date"`.
### See Also
`<round>` for the generic function and default methods.
`[DateTimeClasses](datetimeclasses)`, `[Date](dates)`
### Examples
```
round(.leap.seconds + 1000, "hour")
trunc(Sys.time(), "day")
```
r None
`sweep` Sweep out Array Summaries
----------------------------------
### Description
Return an array obtained from an input array by sweeping out a summary statistic.
### Usage
```
sweep(x, MARGIN, STATS, FUN = "-", check.margin = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an array, including a matrix. |
| `MARGIN` | a vector of indices giving the extent(s) of `x` which correspond to `STATS`. Where `x` has named dimnames, it can be a character vector selecting dimension names. |
| `STATS` | the summary statistic which is to be swept out. |
| `FUN` | the function to be used to carry out the sweep. |
| `check.margin` | logical. If `TRUE` (the default), warn if the length or dimensions of `STATS` do not match the specified dimensions of `x`. Set to `FALSE` for a small speed gain when you *know* that dimensions match. |
| `...` | optional arguments to `FUN`. |
### Details
`FUN` is found by a call to `<match.fun>`. As in the default, binary operators can be supplied if quoted or backquoted.
`FUN` should be a function of two arguments: it will be called with arguments `x` and an array of the same dimensions generated from `STATS` by `<aperm>`.
The consistency check among `STATS`, `MARGIN` and `x` is stricter if `STATS` is an array than if it is a vector. In the vector case, some kinds of recycling are allowed without a warning. Use `sweep(x, MARGIN, as.array(STATS))` if `STATS` is a vector and you want to be warned if any recycling occurs.
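For example, dividing each column by its column sum (a quoted binary operator supplied as `FUN`):

```r
m <- matrix(1:6, nrow = 2)
prop <- sweep(m, 2, colSums(m), FUN = "/")
prop           # each column now sums to 1
colSums(prop)
```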
### Value
An array with the same shape as `x`, but with the summary statistics swept out.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<apply>` on which `sweep` used to be based; `<scale>` for centering and scaling.
### Examples
```
require(stats) # for median
med.att <- apply(attitude, 2, median)
sweep(data.matrix(attitude), 2, med.att) # subtract the column medians
## More sweeping:
A <- array(1:24, dim = 4:2)
## no warnings in normal use
sweep(A, 1, 5)
(A.min <- apply(A, 1, min)) # == 1:4
sweep(A, 1, A.min)
sweep(A, 1:2, apply(A, 1:2, median))
## warnings when mismatch
sweep(A, 1, 1:3) # STATS does not recycle
sweep(A, 1, 6:1) # STATS is longer
## exact recycling:
sweep(A, 1, 1:2) # no warning
sweep(A, 1, as.array(1:2)) # warning
## Using named dimnames
dimnames(A) <- list(fee=1:4, fie=1:3, fum=1:2)
mn_fum_fie <- apply(A, c("fum", "fie"), mean)
mn_fum_fie
sweep(A, c("fum", "fie"), mn_fum_fie)
```
r None
`factor` Factors
-----------------
### Description
The function `factor` is used to encode a vector as a factor (the terms ‘category’ and ‘enumerated type’ are also used for factors). If argument `ordered` is `TRUE`, the factor levels are assumed to be ordered. For compatibility with S there is also a function `ordered`.
`is.factor`, `is.ordered`, `as.factor` and `as.ordered` are the membership and coercion functions for these classes.
### Usage
```
factor(x = character(), levels, labels = levels,
exclude = NA, ordered = is.ordered(x), nmax = NA)
ordered(x, ...)
is.factor(x)
is.ordered(x)
as.factor(x)
as.ordered(x)
addNA(x, ifany = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | a vector of data, usually taking a small number of distinct values. |
| `levels` | an optional vector of the unique values (as character strings) that `x` might have taken. The default is the unique set of values taken by `[as.character](character)(x)`, sorted into increasing order *of `x`*. Note that this set can be specified as smaller than `sort(unique(x))`. |
| `labels` | *either* an optional character vector of labels for the levels (in the same order as `levels` after removing those in `exclude`), *or* a character string of length 1. Duplicated values in `labels` can be used to map different values of `x` to the same factor level. |
| `exclude` | a vector of values to be excluded when forming the set of levels. This may be a factor with the same level set as `x`, or a `character` vector. |
| `ordered` | logical flag to determine if the levels should be regarded as ordered (in the order given). |
| `nmax` | an upper bound on the number of levels; see ‘Details’. |
| `...` | (in `ordered(.)`): any of the above, apart from `ordered` itself. |
| `ifany` | only add an `NA` level if it is used, i.e. if `any(is.na(x))`. |
### Details
The type of the vector `x` is not restricted; it only must have an `[as.character](character)` method and be sortable (by `<order>`).
Ordered factors differ from factors only in their class, but methods and the model-fitting functions treat the two classes quite differently.
The encoding of the vector happens as follows. First all the values in `exclude` are removed from `levels`. If `x[i]` equals `levels[j]`, then the `i`-th element of the result is `j`. If no match is found for `x[i]` in `levels` (which will happen for excluded values) then the `i`-th element of the result is set to `[NA](na)`.
Normally the ‘levels’ used as an attribute of the result are the reduced set of levels after removing those in `exclude`, but this can be altered by supplying `labels`. This should either be a set of new labels for the levels, or a character string, in which case the levels are that character string with a sequence number appended.
`factor(x, exclude = NULL)` applied to a factor without `[NA](na)`s is a no-operation unless there are unused levels: in that case, a factor with the reduced level set is returned. If `exclude` is used, since **R** version 3.4.0, excluding non-existing character levels is equivalent to excluding nothing, and when `exclude` is a `<character>` vector, that *is* applied to the levels of `x`. Alternatively, `exclude` can be a factor with the same level set as `x` and will exclude the levels present in `exclude`.
The codes of a factor may contain `[NA](na)`. For a numeric `x`, set `exclude = NULL` to make `[NA](na)` an extra level (prints as `<NA>`); by default, this is the last level.
If `NA` is a level, the way to set a code to be missing (as opposed to the code of the missing level) is to use `[is.na](na)` on the left-hand-side of an assignment (as in `is.na(f)[i] <- TRUE`; indexing inside `is.na` does not work). Under those circumstances missing values are currently printed as `<NA>`, i.e., identical to entries of level `NA`.
`is.factor` is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods).
Where `levels` is not supplied, `<unique>` is called. Since factors typically have quite a small number of levels, for large vectors `x` it is helpful to supply `nmax` as an upper bound on the number of unique values.
Since **R** 4.1.0, when using `<c>` to combine a (possibly ordered) factor with other objects, if all objects are (possibly ordered) factors, the result will be a factor with levels the union of the level sets of the elements, in the order the levels occur in the level sets of the elements (which means that if all the elements have the same level set, that is the level set of the result), equivalent to how `<unlist>` operates on a list of factor objects.
### Value
`factor` returns an object of class `"factor"` which has a set of integer codes the length of `x` with a `"levels"` attribute of mode `<character>` and unique (`!anyDuplicated(.)`) entries. If argument `ordered` is true (or `ordered()` is used) the result has class `c("ordered", "factor")`. Undocumentedly for a long time, `factor(x)` loses all `<attributes>(x)` but `"names"`, and resets `"levels"` and `"class"`.
Applying `factor` to an ordered or unordered factor returns a factor (of the same type) with just the levels which occur: see also `[[.factor](extract.factor)` for a more transparent way to achieve this.
`is.factor` returns `TRUE` or `FALSE` depending on whether its argument is of type factor or not. Correspondingly, `is.ordered` returns `TRUE` when its argument is an ordered factor and `FALSE` otherwise.
`as.factor` coerces its argument to a factor. It is an abbreviated (sometimes faster) form of `factor`.
`as.ordered(x)` returns `x` if this is ordered, and `ordered(x)` otherwise.
`addNA` modifies a factor by turning `NA` into an extra level (so that `NA` values are counted in tables, for instance).
`.valid.factor(object)` checks the validity of a factor, currently only `levels(object)`, and returns `TRUE` if it is valid, otherwise a string describing the validity problem. This function is used for `[validObject](../../methods/html/validobject)(<factor>)`.
### Warning
The interpretation of a factor depends on both the codes and the `"levels"` attribute. Be careful only to compare factors with the same set of levels (in the same order). In particular, `as.numeric` applied to a factor is meaningless, and may happen by implicit coercion. To transform a factor `f` to approximately its original numeric values, `as.numeric(levels(f))[f]` is recommended and slightly more efficient than `as.numeric(as.character(f))`.
The levels of a factor are by default sorted, but the sort order may well depend on the locale at the time of creation, and should not be assumed to be ASCII.
There are some anomalies associated with factors that have `NA` as a level. It is suggested to use them sparingly, e.g., only for tabulation purposes.
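The recommended recovery of numeric values can be sketched as:

```r
f <- factor(c(10, 20, 20, 30))
as.numeric(f)             # 1 2 2 3 : the internal codes, not the data
as.numeric(levels(f))[f]  # 10 20 20 30 : the original values
```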
### Comparison operators and group generic methods
There are `"factor"` and `"ordered"` methods for the [group generic](groupgeneric) `[Ops](groupgeneric)` which provide methods for the [Comparison](comparison) operators, and for the `[min](extremes)`, `[max](extremes)`, and `<range>` generics in `[Summary](groupgeneric)` of `"ordered"`. (The rest of the groups and the `[Math](groupgeneric)` group generate an error as they are not meaningful for factors.)
Only `==` and `!=` can be used for factors: a factor can only be compared to another factor with an identical set of levels (not necessarily in the same ordering) or to a character vector. Ordered factors are compared in the same way, but the general dispatch mechanism precludes comparing ordered and unordered factors.
All the comparison operators are available for ordered factors. Collation is done by the levels of the operands: if both operands are ordered factors they must have the same level set.
### Note
In earlier versions of **R**, storing character data as a factor was more space efficient if there is even a small proportion of repeats. However, identical character strings now share storage, so the difference is small in most cases. (Integer values are stored in 4 bytes whereas each reference to a character string needs a pointer of 4 or 8 bytes.)
### References
Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole.
### See Also
`[[.factor](extract.factor)` for subsetting of factors.
`<gl>` for construction of balanced factors and `[C](../../stats/html/zc)` for factors with specified contrasts. `<levels>` and `<nlevels>` for accessing the levels, and `[unclass](class)` to get integer codes.
### Examples
```
(ff <- factor(substring("statistics", 1:10, 1:10), levels = letters))
as.integer(ff) # the internal codes
(f. <- factor(ff)) # drops the levels that do not occur
ff[, drop = TRUE] # the same, more transparently
factor(letters[1:20], labels = "letter")
class(ordered(4:1)) # "ordered", inheriting from "factor"
z <- factor(LETTERS[3:1], ordered = TRUE)
## and "relational" methods work:
stopifnot(sort(z)[c(1,3)] == range(z), min(z) < max(z))
## suppose you want "NA" as a level, and to allow missing values.
(x <- factor(c(1, 2, NA), exclude = NULL))
is.na(x)[2] <- TRUE
x # [1] 1 <NA> <NA>
is.na(x)
# [1] FALSE TRUE FALSE
## More rational, since R 3.4.0 :
factor(c(1:2, NA), exclude = "" ) # keeps <NA> , as
factor(c(1:2, NA), exclude = NULL) # always did
## exclude = <character>
z # ordered levels 'A < B < C'
factor(z, exclude = "C") # does exclude
factor(z, exclude = "B") # ditto
## Now, labels may be duplicated:
## factor() with duplicated labels allowing to "merge levels"
x <- c("Man", "Male", "Man", "Lady", "Female")
## Map from 4 different values to only two levels:
(xf <- factor(x, levels = c("Male", "Man" , "Lady", "Female"),
labels = c("Male", "Male", "Female", "Female")))
#> [1] Male Male Male Female Female
#> Levels: Male Female
## Using addNA()
Month <- airquality$Month
table(addNA(Month))
table(addNA(Month, ifany = TRUE))
```
`Sys.getpid` Get the Process ID of the R Session
-------------------------------------------------
### Description
Get the process ID of the **R** Session. It is guaranteed by the operating system that two **R** sessions running simultaneously will have different IDs, but it is possible that **R** sessions running at different times will have the same ID.
### Usage
```
Sys.getpid()
```
### Value
An integer, often between 1 and 32767 under Unix-alikes (but for example FreeBSD and macOS use IDs up to 99999) and a positive integer (up to 32767) under Windows.
### Examples
```
Sys.getpid()
## Show files opened from this R process
if(.Platform$OS.type == "unix") ## on Unix-alikes such as Linux, macOS, FreeBSD:
system(paste("lsof -p", Sys.getpid()))
```
`ns-dblcolon` Double Colon and Triple Colon Operators
------------------------------------------------------
### Description
Accessing exported and internal variables, i.e. **R** objects (including lazy loaded data sets) in a namespace.
### Usage
```
pkg::name
pkg:::name
```
### Arguments
| | |
| --- | --- |
| `pkg` | package name: symbol or literal character string. |
| `name` | variable name: symbol or literal character string. |
### Details
For a package pkg, `pkg::name` returns the value of the exported variable `name` in namespace `pkg`, whereas `pkg:::name` returns the value of the internal variable `name`. The package namespace will be loaded if it was not loaded before the call, but the package will not be attached to the search path.
Specifying a variable or package that does not exist is an error.
Note that `pkg::name` does **not** access the objects in the environment `package:pkg` (which does not exist until the package's namespace is attached): the latter may contain objects not exported from the namespace. It can access datasets made available by lazy-loading.
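A short sketch of the distinctions made above (illustrative; `rivers` is a standard lazy-loaded dataset in the datasets package):

```r
## exported variable: the usual, safe form
stats::median(c(1, 3, 5)) # 3
## a lazy-loaded dataset is accessible without attaching the package
head(datasets::rivers)
## internal (non-exported) object -- works, but see the Note section
stats:::coef.default
```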
### Note
It is typically a design mistake to use `:::` in your code since the corresponding object has probably been kept internal for a good reason. Consider contacting the package `[maintainer](../../utils/html/maintainer)` if you feel the need to access the object for anything but mere inspection.
### See Also
`<get>` to access an object masked by another of the same name. `[loadNamespace](ns-load)`, `[asNamespace](ns-internal)` for more about namespaces.
### Examples
```
base::log
base::"+"
## Beware -- use ':::' at your own risk! (see "Details")
stats:::coef.default
```
`zapsmall` Rounding of Numbers: Zapping Small Ones to Zero
-----------------------------------------------------------
### Description
`zapsmall` determines a `digits` argument `dr` for calling `round(x, digits = dr)` such that values close to zero (compared with the maximal absolute value) are ‘zapped’, i.e., replaced by `0`.
### Usage
```
zapsmall(x, digits = getOption("digits"))
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric or complex vector or any **R** number-like object which has a `<round>` method and basic arithmetic methods including `[log10](log)()`. |
| `digits` | integer indicating the precision to be used. |
### References
Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer.
### Examples
```
x2 <- pi * 100^(-1:3)
print(x2 / 1000, digits = 4)
zapsmall(x2 / 1000, digits = 4)
zapsmall(exp(1i*0:4*pi/2))
```
`Sys.readlink` Read File Symbolic Links
----------------------------------------
### Description
Find out if a file path is a symbolic link, and if so what it is linked to, *via* the system call `readlink`.
Symbolic links are a POSIX concept, not implemented on Windows but for most filesystems on Unix-alikes.
### Usage
```
Sys.readlink(paths)
```
### Arguments
| | |
| --- | --- |
| `paths` | character vector of file paths. Tilde expansion is done: see `<path.expand>`. |
### Value
A character vector of the same length as `paths`. The entries are the path of the file linked to, `""` if the path is not a symbolic link, and `NA` if there is an error (e.g., the path does not exist or cannot be converted to the native encoding).
On platforms without the `readlink` system call, all elements are `""`.
### See Also
`[file.symlink](files)` for the creation of symbolic links (and their Windows analogues), `<file.info>`
### Examples
```
##' To check if files (incl. directories) are symbolic links:
is.symlink <- function(paths) isTRUE(nzchar(Sys.readlink(paths), keepNA=TRUE))
## will return all FALSE when the platform has no `readlink` system call.
is.symlink("/foo/bar")
```
`delayedAssign` Delay Evaluation
---------------------------------
### Description
`delayedAssign` creates a *promise* to evaluate the given expression if its value is requested. This provides direct access to the *lazy evaluation* mechanism used by **R** for the evaluation of (interpreted) functions.
### Usage
```
delayedAssign(x, value, eval.env = parent.frame(1),
assign.env = parent.frame(1))
```
### Arguments
| | |
| --- | --- |
| `x` | a variable name (given as a quoted string in the function call) |
| `value` | an expression to be assigned to `x` |
| `eval.env` | an environment in which to evaluate `value` |
| `assign.env` | an environment in which to assign `x` |
### Details
Both `eval.env` and `assign.env` default to the currently active environment.
The expression assigned to a promise by `delayedAssign` will not be evaluated until it is eventually ‘forced’. This happens when the variable is first accessed.
When the promise is eventually forced, it is evaluated within the environment specified by `eval.env` (whose contents may have changed in the meantime). After that, the value is fixed and the expression will not be evaluated again.
### Value
This function is invoked for its side effect, which is assigning a promise to evaluate `value` to the variable `x`.
### See Also
`<substitute>`, to see the expression associated with a promise, if `assign.env` is not the `[.GlobalEnv](environment)`.
### Examples
```
msg <- "old"
delayedAssign("x", msg)
substitute(x) # shows only 'x', as it is in the global env.
msg <- "new!"
x # new!
delayedAssign("x", {
for(i in 1:3)
cat("yippee!\n")
10
})
x^2 #- yippee
x^2 #- simple number
ne <- new.env()
delayedAssign("x", pi + 2, assign.env = ne)
## See the promise {without "forcing" (i.e. evaluating) it}:
substitute(x, ne) # 'pi + 2'
### Promises in an environment [for advanced users]: ---------------------
e <- (function(x, y = 1, z) environment())(cos, "y", {cat(" HO!\n"); pi+2})
## How can we look at all promises in an env (w/o forcing them)?
gete <- function(e_)
lapply(lapply(ls(e_), as.name),
function(n) eval(substitute(substitute(X, e_), list(X=n))))
(exps <- gete(e))
sapply(exps, typeof)
(le <- as.list(e)) # evaluates ("force"s) the promises
stopifnot(identical(unname(le), lapply(exps, eval))) # and another "Ho!"
```
`chol` The Choleski Decomposition
----------------------------------
### Description
Compute the Choleski factorization of a real symmetric positive-definite square matrix.
### Usage
```
chol(x, ...)
## Default S3 method:
chol(x, pivot = FALSE, LINPACK = FALSE, tol = -1, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object for which a method exists. The default method applies to numeric (or logical) symmetric, positive-definite matrices. |
| `...` | arguments to be passed to or from methods. |
| `pivot` | Should pivoting be used? |
| `LINPACK` | logical. Should LINPACK be used (now an error)? |
| `tol` | A numeric tolerance for use with `pivot = TRUE`. |
### Details
`chol` is generic: the description here applies to the default method.
Note that only the upper triangular part of `x` is used, so that *R'R = x* when `x` is symmetric.
If `pivot = FALSE` and `x` is not non-negative definite an error occurs. If `x` is positive semi-definite (i.e., some zero eigenvalues) an error will also occur as a numerical tolerance is used.
If `pivot = TRUE`, then the Choleski decomposition of a positive semi-definite `x` can be computed. The rank of `x` is returned as `attr(Q, "rank")`, subject to numerical errors. The pivot is returned as `attr(Q, "pivot")`. It is no longer the case that `t(Q) %*% Q` equals `x`. However, setting `pivot <- attr(Q, "pivot")` and `oo <- order(pivot)`, it is true that `t(Q[, oo]) %*% Q[, oo]` equals `x`, or, alternatively, `t(Q) %*% Q` equals `x[pivot, pivot]`. See the examples.
The value of `tol` is passed to LAPACK, with negative values selecting the default tolerance of (usually) `nrow(x) * .Machine$double.neg.eps * max(diag(x))`. The algorithm terminates once the pivot is less than `tol`.
Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code.
### Value
The upper triangular factor of the Choleski decomposition, i.e., the matrix *R* such that *R'R = x* (see example).
If pivoting is used, then two additional attributes `"pivot"` and `"rank"` are also returned.
### Warning
The code does not check for symmetry.
If `pivot = TRUE` and `x` is not non-negative definite then there will be a warning message but a meaningless result will occur. So only use `pivot = TRUE` when `x` is non-negative definite by construction.
### Source
This is an interface to the LAPACK routines `DPOTRF` and `DPSTRF`. LAPACK is from <https://www.netlib.org/lapack/> and its guide is listed in the references.
### References
Anderson. E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM.
Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>.
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<chol2inv>` for its *inverse* (without pivoting), `<backsolve>` for solving linear systems with upper triangular left sides.
`<qr>`, `<svd>` for related matrix factorizations.
### Examples
```
( m <- matrix(c(5,1,1,3),2,2) )
( cm <- chol(m) )
t(cm) %*% cm #-- = 'm'
crossprod(cm) #-- = 'm'
# now for something positive semi-definite
x <- matrix(c(1:5, (1:5)^2), 5, 2)
x <- cbind(x, x[, 1] + 3*x[, 2])
colnames(x) <- letters[20:22]
m <- crossprod(x)
qr(m)$rank # is 2, as it should be
# chol() may fail, depending on numerical rounding:
# chol() unlike qr() does not use a tolerance.
try(chol(m))
(Q <- chol(m, pivot = TRUE))
## we can use this by
pivot <- attr(Q, "pivot")
crossprod(Q[, order(pivot)]) # recover m
## now for a non-positive-definite matrix
( m <- matrix(c(5,-5,-5,3), 2, 2) )
try(chol(m)) # fails
(Q <- chol(m, pivot = TRUE)) # warning
crossprod(Q) # not equal to m
```
`c` Combine Values into a Vector or List
-----------------------------------------
### Description
This is a generic function which combines its arguments.
The default method combines its arguments to form a vector. All arguments are coerced to a common type which is the type of the returned value, and all attributes except names are removed.
### Usage
```
## S3 Generic function
c(...)
## Default S3 method:
c(..., recursive = FALSE, use.names = TRUE)
```
### Arguments
| | |
| --- | --- |
| `...` | objects to be concatenated. All `[NULL](null)` entries are dropped before method dispatch unless at the very beginning of the argument list. |
| `recursive` | logical. If `recursive = TRUE`, the function recursively descends through lists (and pairlists) combining all their elements into a vector. |
| `use.names` | logical indicating if `<names>` should be preserved. |
### Details
The output type is determined from the highest type of the components in the hierarchy NULL < raw < logical < integer < double < complex < character < list < expression. Pairlists are treated as lists, whereas non-vector components (such as `<name>`s / `symbol`s and `<call>`s) are treated as one-element `<list>`s which cannot be unlisted even if `recursive = TRUE`.
Note that in **R** < 4.1.0, `<factor>`s were treated only via their internal `<integer>` codes: now there is a `[c.factor](factor)` method which combines factors into a factor.
`c` is sometimes used for its side effect of removing attributes except names, for example to turn an `<array>` into a vector. `as.vector` is a more intuitive way to do this, but also drops names. Note that methods other than the default are not required to do this (and they will almost certainly preserve a class attribute).
This is a <primitive> function.
### Value
`NULL` or an expression or a vector of an appropriate mode. (With no arguments the value is `NULL`.)
### S4 methods
This function is S4 generic, but with argument list `(x, ...)`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<unlist>` and `[as.vector](vector)` to produce attribute-free vectors.
### Examples
```
c(1,7:9)
c(1:5, 10.5, "next")
## uses with a single argument to drop attributes
x <- 1:4
names(x) <- letters[1:4]
x
c(x) # has names
as.vector(x) # no names
dim(x) <- c(2,2)
x
c(x)
as.vector(x)
## append to a list:
ll <- list(A = 1, c = "C")
## do *not* use
c(ll, d = 1:3) # which is == c(ll, as.list(c(d = 1:3)))
## but rather
c(ll, d = list(1:3)) # c() combining two lists
c(list(A = c(B = 1)), recursive = TRUE)
c(options(), recursive = TRUE)
c(list(A = c(B = 1, C = 2), B = c(E = 7)), recursive = TRUE)
```
`paste` Concatenate Strings
----------------------------
### Description
Concatenate vectors after converting to character.
### Usage
```
paste (..., sep = " ", collapse = NULL, recycle0 = FALSE)
paste0(..., collapse = NULL, recycle0 = FALSE)
```
### Arguments
| | |
| --- | --- |
| `...` | one or more **R** objects, to be converted to character vectors. |
| `sep` | a character string to separate the terms. Not `[NA\_character\_](na)`. |
| `collapse` | an optional character string to separate the results. Not `[NA\_character\_](na)`. |
| `recycle0` | `<logical>` indicating if zero-length character arguments should lead to the zero-length `<character>(0)` after the `sep`-phase (which turns into `""` in the `collapse`-phase, i.e., when `collapse` is not `NULL`). |
### Details
`paste` converts its arguments (*via* `[as.character](character)`) to character strings, and concatenates them (separating them by the string given by `sep`). If the arguments are vectors, they are concatenated term-by-term to give a character vector result. Vector arguments are recycled as needed, with zero-length arguments being recycled to `""` only if `recycle0` is not true *or* `collapse` is not `NULL`.
Note that `paste()` coerces `[NA\_character\_](na)`, the character missing value, to `"NA"` which may seem undesirable, e.g., when pasting two character vectors, or very desirable, e.g. in `paste("the value of p is ", p)`.
`paste0(..., collapse)` is equivalent to `paste(..., sep = "", collapse)`, slightly more efficiently.
If a value is specified for `collapse`, the values in the result are then concatenated into a single string, with the elements being separated by the value of `collapse`.
### Value
A character vector of the concatenated values. This will be of length zero if all the objects are, unless `collapse` is non-NULL, in which case it is `""` (a single empty string).
If any input into an element of the result is in UTF-8 (and none are declared with encoding `"bytes"`, see `[Encoding](encoding)`), that element will be in UTF-8, otherwise in the current encoding in which case the encoding of the element is declared if the current locale is either Latin-1 or UTF-8, at least one of the corresponding inputs (including separators) had a declared encoding and all inputs were either ASCII or declared.
If an input into an element is declared with encoding `"bytes"`, no translation will be done of any of the elements and the resulting element will have encoding `"bytes"`. If `collapse` is non-NULL, this applies also to the second, collapsing, phase, but some translation may have been done in pasting object together in the first phase.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[toString](tostring)` typically calls `paste(*, collapse=", ")`. String manipulation with `[as.character](character)`, `<substr>`, `<nchar>`, `<strsplit>`; further, `<cat>` which concatenates and writes to a file, and `<sprintf>` for C like string construction.
‘[plotmath](../../grdevices/html/plotmath)’ for the use of `paste` in plot annotation.
### Examples
```
## When passing a single vector, paste0 and paste work like as.character.
paste0(1:12)
paste(1:12) # same
as.character(1:12) # same
## If you pass several vectors to paste0, they are concatenated in a
## vectorized way.
(nth <- paste0(1:12, c("st", "nd", "rd", rep("th", 9))))
## paste works the same, but separates each input with a space.
## Notice that the recycling rules make every input as long as the longest input.
paste(month.abb, "is the", nth, "month of the year.")
paste(month.abb, letters)
## You can change the separator by passing a sep argument
## which can be multiple characters.
paste(month.abb, "is the", nth, "month of the year.", sep = "_*_")
## To collapse the output into a single string, pass a collapse argument.
paste0(nth, collapse = ", ")
## For inputs of length 1, use the sep argument rather than collapse
paste("1st", "2nd", "3rd", collapse = ", ") # probably not what you wanted
paste("1st", "2nd", "3rd", sep = ", ")
## You can combine the sep and collapse arguments together.
paste(month.abb, nth, sep = ": ", collapse = "; ")
## Using paste() in combination with strwrap() can be useful
## for dealing with long strings.
(title <- paste(strwrap(
"Stopping distance of cars (ft) vs. speed (mph) from Ezekiel (1930)",
width = 30), collapse = "\n"))
plot(dist ~ speed, cars, main = title)
## 'recycle0 = TRUE' allows more vectorized behaviour, i.e. zero-length recycling :
valid <- FALSE
val <- pi
paste("The value is", val[valid], "-- not so good!")
paste("The value is", val[valid], "-- good: empty!", recycle0=TRUE) # -> character(0)
## When 'collapse = <string>', the result is a length-1 string :
paste("foo", {}, "bar", collapse="|") # |--> "foo bar"
paste("foo", {}, "bar", collapse="|", recycle0 = TRUE) # |--> ""
## all empty args
paste( collapse="|") # |--> "" as do all these:
paste( collapse="|", recycle0 = TRUE)
paste({}, collapse="|")
paste({}, collapse="|", recycle0 = TRUE)
```
`Trig` Trigonometric Functions
-------------------------------
### Description
These functions give the obvious trigonometric functions. They respectively compute the cosine, sine, tangent, arc-cosine, arc-sine, arc-tangent, and the two-argument arc-tangent.
`cospi(x)`, `sinpi(x)`, and `tanpi(x)`, compute `cos(pi*x)`, `sin(pi*x)`, and `tan(pi*x)`.
### Usage
```
cos(x)
sin(x)
tan(x)
acos(x)
asin(x)
atan(x)
atan2(y, x)
cospi(x)
sinpi(x)
tanpi(x)
```
### Arguments
| | |
| --- | --- |
| `x, y` | numeric or complex vectors. |
### Details
The arc-tangent of two arguments `atan2(y, x)` returns the angle between the x-axis and the vector from the origin to *(x, y)*, i.e., for positive arguments `atan2(y, x) == atan(y/x)`.
Angles are in radians, not degrees, for the standard versions (i.e., a right angle is *π/2*), and in ‘half-rotations’ for `cospi` etc.
`cospi(x)`, `sinpi(x)`, and `tanpi(x)` are accurate for `x` values which are multiples of a half.
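A quick sketch of this accuracy difference (illustrative, not part of this page's Examples section):

```r
x <- 1e6
sin(pi * x) # tiny but nonzero: pi*x is rounded in floating point
sinpi(x) # exactly 0
cospi(0.5) # exactly 0, whereas cos(pi/2) is about 6.1e-17
tanpi(0.5) # NaN, as documented under 'Value'
```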
All except `atan2` are [internal generic](internalmethods) <primitive> functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic.
These are all wrappers to system calls of the same name (with prefix `c` for complex arguments) where available. (`cospi`, `sinpi`, and `tanpi` are part of a C11 extension and provided by e.g. macOS and Solaris: where not yet available, calls to `cos` *etc.* are used, with special cases for multiples of a half.)
### Value
`tanpi(0.5)` is `[NaN](is.finite)`. Similarly for other inputs with fractional part `0.5`.
### Complex values
For the inverse trigonometric functions, branch cuts are defined as in Abramowitz and Stegun, figure 4.4, page 79.
For `asin` and `acos`, there are two cuts, both along the real axis: *(-Inf, -1]* and *[1, Inf)*.
For `atan` there are two cuts, both along the pure imaginary axis: *(-1i\*Inf, -1i]* and *[1i, 1i\*Inf)*.
The behaviour actually on the cuts follows the C99 standard which requires continuity coming round the endpoint in a counter-clockwise direction.
Complex arguments for `cospi`, `sinpi`, and `tanpi` are not yet implemented, and they are a ‘future direction’ of ISO/IEC TS 18661-4.
### S4 methods
All except `atan2` are S4 generic functions: methods can be defined for them individually or via the `[Math](../../methods/html/s4groupgeneric)` group generic.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Abramowitz, M. and Stegun, I. A. (1972). *Handbook of Mathematical Functions*. New York: Dover.
Chapter 4. Elementary Transcendental Functions: Logarithmic, Exponential, Circular and Hyperbolic Functions
For `cospi`, `sinpi`, and `tanpi` the C11 extension ISO/IEC TS 18661-4:2015 (draft at <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1950.pdf>).
### Examples
```
x <- seq(-3, 7, by = 1/8)
tx <- cbind(x, cos(pi*x), cospi(x), sin(pi*x), sinpi(x),
tan(pi*x), tanpi(x), deparse.level=2)
op <- options(digits = 4, width = 90) # for nice formatting
head(tx)
tx[ (x %% 1) %in% c(0, 0.5) ,]
options(op)
```
`taskCallback` Add or Remove a Top-Level Task Callback
-------------------------------------------------------
### Description
`addTaskCallback` registers an R function that is to be called each time a top-level task is completed.
`removeTaskCallback` un-registers a function that was registered earlier via `addTaskCallback`.
These provide low-level access to the internal/native mechanism for managing task-completion actions. One can use `[taskCallbackManager](taskcallbackmanager)` at the **R**-language level to manage **R** functions that are called at the completion of each task. This is easier and more direct.
### Usage
```
addTaskCallback(f, data = NULL, name = character())
removeTaskCallback(id)
```
### Arguments
| | |
| --- | --- |
| `f` | the function that is to be invoked each time a top-level task is successfully completed. This is called with 5 or 4 arguments depending on whether `data` is specified or not, respectively. The return value should be a logical value indicating whether to keep the callback in the list of active callbacks or discard it. |
| `data` | if specified, this is the 5-th argument in the call to the callback function `f`. |
| `id` | a string or an integer identifying the element in the internal callback list to be removed. Integer indices are 1-based, i.e. the first element is 1. The names of currently registered handlers are available using `[getTaskCallbackNames](taskcallbacknames)` and are also returned in a call to `[addTaskCallback](taskcallback)`. |
| `name` | character: names to be used. |
### Details
Top-level tasks are individual expressions rather than entire lines of input. Thus an input line of the form `expression1 ; expression2` will give rise to 2 top-level tasks.
A top-level task callback is called with the expression for the top-level task, the result of the top-level task, a logical value indicating whether it was successfully completed or not (always TRUE at present), and a logical value indicating whether the result was printed or not. If the `data` argument was specified in the call to `addTaskCallback`, that value is given as the fifth argument.
The callback function should return a logical value. If the value is FALSE, the callback is removed from the task list and will not be called again by this mechanism. If the function returns TRUE, it is kept in the list and will be called on the completion of the next top-level task.
### Value
`addTaskCallback` returns an integer value giving the position in the list of task callbacks that this new callback occupies. This is only the current position of the callback. It can be used to remove the entry as long as no other values are removed from earlier positions in the list first.
`removeTaskCallback` returns a logical value indicating whether the specified element was removed. This can fail (i.e., return `FALSE`) if an incorrect name or index is given that does not correspond to the name or position of an element in the list.
### Note
There is also C-level access to top-level task callbacks to allow C routines rather than R functions be used.
### See Also
`[getTaskCallbackNames](taskcallbacknames)` `[taskCallbackManager](taskcallbackmanager)` <https://developer.r-project.org/TaskHandlers.pdf>
### Examples
```
times <- function(total = 3, str = "Task a") {
ctr <- 0
function(expr, value, ok, visible) {
ctr <<- ctr + 1
cat(str, ctr, "\n")
keep.me <- (ctr < total)
if (!keep.me)
cat("handler removing itself\n")
# return
keep.me
}
}
# add the callback that will work for
# 4 top-level tasks and then remove itself.
n <- addTaskCallback(times(4))
# now remove it, assuming it is still first in the list.
removeTaskCallback(n)
## See how the handler is called every time till "self destruction":
addTaskCallback(times(4)) # counts as once already
sum(1:10) ; mean(1:3) # two more
sinpi(1) # 4th - and "done"
cospi(1)
tanpi(1)
```
`funprog` Common Higher-Order Functions in Functional Programming Languages
----------------------------------------------------------------------------
### Description
`Reduce` uses a binary function to successively combine the elements of a given vector and a possibly given initial value. `Filter` extracts the elements of a vector for which a predicate (logical) function gives true. `Find` and `Position` give the first or last such element and its position in the vector, respectively. `Map` applies a function to the corresponding elements of given vectors. `Negate` creates the negation of a given function.
### Usage
```
Reduce(f, x, init, right = FALSE, accumulate = FALSE)
Filter(f, x)
Find(f, x, right = FALSE, nomatch = NULL)
Map(f, ...)
Negate(f)
Position(f, x, right = FALSE, nomatch = NA_integer_)
```
### Arguments
| | |
| --- | --- |
| `f` | a function of the appropriate arity (binary for `Reduce`, unary for `Filter`, `Find` and `Position`, *k*-ary for `Map` if this is called with *k* arguments). An arbitrary predicate function for `Negate`. |
| `x` | a vector. |
| `init` | an **R** object of the same kind as the elements of `x`. |
| `right` | a logical indicating whether to proceed from left to right (default) or from right to left. |
| `accumulate` | a logical indicating whether the successive reduce combinations should be accumulated. By default, only the final combination is used. |
| `nomatch` | the value to be returned in the case when “no match” (no element satisfying the predicate) is found. |
| `...` | vectors. |
### Details
If `init` is given, `Reduce` logically adds it to the start (when proceeding left to right) or the end of `x`, respectively. If this possibly augmented vector *v* has *n > 1* elements, `Reduce` successively applies *f* to the elements of *v* from left to right or right to left, respectively. I.e., a left reduce computes *l\_1 = f(v\_1, v\_2)*, *l\_2 = f(l\_1, v\_3)*, etc., and returns *l\_{n-1} = f(l\_{n-2}, v\_n)*, and a right reduce does *r\_{n-1} = f(v\_{n-1}, v\_n)*, *r\_{n-2} = f(v\_{n-2}, r\_{n-1})* and returns *r\_1 = f(v\_1, r\_2)*. (E.g., if *v* is the sequence (2, 3, 4) and *f* is division, left and right reduce give *(2 / 3) / 4 = 1/6* and *2 / (3 / 4) = 8/3*, respectively.) If *v* has only a single element, this is returned; if there are no elements, `NULL` is returned. Thus, it is ensured that `f` is always called with 2 arguments.
The current implementation is non-recursive to ensure stability and scalability.
`Reduce` is patterned after Common Lisp's `reduce`. A reduce is also known as a fold (e.g., in Haskell) or an accumulate (e.g., in the C++ Standard Template Library). The accumulative version corresponds to Haskell's scan functions.
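The worked division example from the paragraph above, as runnable code (illustrative, complementing the Examples section):

```r
## v = (2, 3, 4), f = division
Reduce(`/`, c(2, 3, 4)) # left reduce: (2/3)/4 = 1/6
Reduce(`/`, c(2, 3, 4), right = TRUE) # right reduce: 2/(3/4) = 8/3
## accumulate = TRUE keeps the intermediate combinations:
Reduce(`+`, 1:4, accumulate = TRUE) # 1 3 6 10, like cumsum(1:4)
```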
`Filter` applies the unary predicate function `f` to each element of `x`, coercing to logical if necessary, and returns the subset of `x` for which this gives true. Note that possible `NA` values are currently always taken as false; control over `NA` handling may be added in the future. `Filter` corresponds to `filter` in Haskell or `remove-if-not` in Common Lisp.
`Find` and `Position` are patterned after Common Lisp's `find-if` and `position-if`, respectively. If there is an element for which the predicate function gives true, then the first or last such element or its position is returned depending on whether `right` is false (default) or true, respectively. If there is no such element, the value specified by `nomatch` is returned. The current implementation is not optimized for performance.
`Map` is a simple wrapper to `<mapply>` which does not attempt to simplify the result, similar to Common Lisp's `mapcar` (with arguments being recycled, however). Future versions may allow some control of the result type.
`Negate` corresponds to Common Lisp's `complement`. Given a (predicate) function `f`, it creates a function which returns the logical negation of what `f` returns.
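A compact sketch of these helpers on a plain numeric vector (illustrative, complementing the Examples section):

```r
x <- c(6, 1, 8, 3, 10)
big <- function(v) v > 5
Filter(big, x) # 6 8 10
Find(big, x) # 6 (first matching element)
Find(big, x, right = TRUE) # 10 (last matching element)
Position(big, x) # 1
Filter(Negate(big), x) # 1 3
Map(`+`, 1:3, 10 * 1:3) # list(11, 22, 33): no simplification
```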
### See Also
Function `[clusterMap](../../parallel/html/clusterapply)` and `[mcmapply](../../parallel/html/mclapply)` (not Windows) in package parallel provide parallel versions of `Map`.
### Examples
```
## A general-purpose adder:
add <- function(x) Reduce("+", x)
add(list(1, 2, 3))
## Like sum(), but can also used for adding matrices etc., as it will
## use the appropriate '+' method in each reduction step.
## More generally, many generics meant to work on arbitrarily many
## arguments can be defined via reduction:
FOO <- function(...) Reduce(FOO2, list(...))
FOO2 <- function(x, y) UseMethod("FOO2")
## FOO() methods can then be provided via FOO2() methods.
## A general-purpose cumulative adder:
cadd <- function(x) Reduce("+", x, accumulate = TRUE)
cadd(seq_len(7))
## A simple function to compute continued fractions:
cfrac <- function(x) Reduce(function(u, v) u + 1 / v, x, right = TRUE)
## Continued fraction approximation for pi:
cfrac(c(3, 7, 15, 1, 292))
## Continued fraction approximation for Euler's number (e):
cfrac(c(2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8))
## Iterative function application:
Funcall <- function(f, ...) f(...)
## Compute log(exp(acos(cos(0))))
Reduce(Funcall, list(log, exp, acos, cos), 0, right = TRUE)
## n-fold iterate of a function, functional style:
Iterate <- function(f, n = 1)
function(x) Reduce(Funcall, rep.int(list(f), n), x, right = TRUE)
## Continued fraction approximation to the golden ratio:
Iterate(function(x) 1 + 1 / x, 30)(1)
## which is the same as
cfrac(rep.int(1, 31))
## Computing square root approximations for x as fixed points of the
## function t |-> (t + x / t) / 2, as a function of the initial value:
asqrt <- function(x, n) Iterate(function(t) (t + x / t) / 2, n)
asqrt(2, 30)(10) # Starting from a positive value => +sqrt(2)
asqrt(2, 30)(-1) # Starting from a negative value => -sqrt(2)
## A list of all functions in the base environment:
funs <- Filter(is.function, sapply(ls(baseenv()), get, baseenv()))
## Functions in base with more than 10 arguments:
names(Filter(function(f) length(formals(f)) > 10, funs))
## Number of functions in base with a '...' argument:
length(Filter(function(f)
any(names(formals(f)) %in% "..."),
funs))
## Find all objects in the base environment which are *not* functions:
Filter(Negate(is.function), sapply(ls(baseenv()), get, baseenv()))
```
r None
`split` Divide into Groups and Reassemble
------------------------------------------
### Description
`split` divides the data in the vector `x` into the groups defined by `f`. The replacement forms replace values corresponding to such a division. `unsplit` reverses the effect of `split`.
### Usage
```
split(x, f, drop = FALSE, ...)
## Default S3 method:
split(x, f, drop = FALSE, sep = ".", lex.order = FALSE, ...)
split(x, f, drop = FALSE, ...) <- value
unsplit(value, f, drop = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | vector or data frame containing values to be divided into groups. |
| `f` | a ‘factor’ in the sense that `[as.factor](factor)(f)` defines the grouping, or a list of such factors in which case their interaction is used for the grouping. If `x` is a data frame, `f` can also be a formula of the form `~ g` to split by the variable `g`, or more generally of the form `~ g1 +
... + gk` to split by the interaction of the variables `g1`, ..., `gk`, where these variables are evaluated in the data frame `x` using the usual non-standard evaluation rules. |
| `drop` | logical indicating if levels that do not occur should be dropped (if `f` is a `factor` or a list). |
| `value` | a list of vectors or data frames compatible with a splitting of `x`. Recycling applies if the lengths do not match. |
| `sep` | character string, passed to `<interaction>` in the case where `f` is a `<list>`. |
| `lex.order` | logical, passed to `<interaction>` when `f` is a list. |
| `...` | further potential arguments passed to methods. |
### Details
`split` and `split<-` are generic functions with default and `data.frame` methods. The data frame method can also be used to split a matrix into a list of matrices, and the replacement form likewise, provided they are invoked explicitly.
`unsplit` works with lists of vectors or data frames (assumed to have compatible structure, as if created by `split`). It puts elements or rows back in the positions given by `f`. In the data frame case, row names are obtained by unsplitting the row name vectors from the elements of `value`.
`f` is recycled as necessary and if the length of `x` is not a multiple of the length of `f` a warning is printed.
Any missing values in `f` are dropped together with the corresponding values of `x`.
The default method calls `<interaction>` when `f` is a `<list>`. If the levels of the factors contain ., the factors may not be split as expected, unless `sep` is set to a string not present in the factor `<levels>`.
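The `sep` caveat can be seen directly: when a factor level itself contains ., the default separator makes the group names ambiguous (a small sketch):

```r
## levels containing "." make the default sep = "." hard to parse back
f1 <- factor(c("a.1", "a.1", "b"))
f2 <- factor(c("x", "y", "x"))
names(split(1:3, list(f1, f2)))              # "a.1.x" "b.x" "a.1.y" "b.y"
names(split(1:3, list(f1, f2), sep = "|"))   # "a.1|x" "b|x" "a.1|y" "b|y"
```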
### Value
The value returned from `split` is a list of vectors containing the values for the groups. The components of the list are named by the levels of `f` (after converting to a factor, or if already a factor and `drop = TRUE`, dropping unused levels).
The replacement forms return their right hand side. `unsplit` returns a vector or data frame for which `split(x, f)` equals `value`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<cut>` to categorize numeric values.
`<strsplit>` to split strings.
### Examples
```
require(stats); require(graphics)
n <- 10; nn <- 100
g <- factor(round(n * runif(n * nn)))
x <- rnorm(n * nn) + sqrt(as.numeric(g))
xg <- split(x, g)
boxplot(xg, col = "lavender", notch = TRUE, varwidth = TRUE)
sapply(xg, length)
sapply(xg, mean)
### Calculate 'z-scores' by group (standardize to mean zero, variance one)
z <- unsplit(lapply(split(x, g), scale), g)
# or
zz <- x
split(zz, g) <- lapply(split(x, g), scale)
# and check that the within-group std dev is indeed one
tapply(z, g, sd)
tapply(zz, g, sd)
### data frame variation
## Notice that assignment form is not used since a variable is being added
g <- airquality$Month
l <- split(airquality, g)
## Alternative using a formula
identical(l, split(airquality, ~ Month))
l <- lapply(l, transform, Oz.Z = scale(Ozone))
aq2 <- unsplit(l, g)
head(aq2)
with(aq2, tapply(Oz.Z, Month, sd, na.rm = TRUE))
### Split a matrix into a list by columns
ma <- cbind(x = 1:10, y = (-4:5)^2)
split(ma, col(ma))
split(1:10, 1:2)
```
r None
`find.package` Find Packages
-----------------------------
### Description
Find the paths to one or more packages.
### Usage
```
find.package(package, lib.loc = NULL, quiet = FALSE,
verbose = getOption("verbose"))
path.package(package, quiet = FALSE)
packageNotFoundError(package, lib.loc, call = NULL)
```
### Arguments
| | |
| --- | --- |
| `package` | character vector: the names of packages. |
| `lib.loc` | a character vector describing the location of **R** library trees to search through, or `NULL`. The default value of `NULL` corresponds to checking the loaded namespace, then all libraries currently known in `[.libPaths](libpaths)()`. |
| `quiet` | logical. Should this not give warnings or an error if the package is not found? |
| `verbose` | a logical. If `TRUE`, additional diagnostics are printed, notably when a package is found more than once. |
| `call` | call expression. |
### Details
`find.package` returns path to the locations where the given packages are found. If `lib.loc` is `NULL`, then loaded namespaces are searched before the libraries. If a package is found more than once, the first match is used. Unless `quiet =
TRUE` a warning will be given about the named packages which are not found, and an error if none are. If `verbose` is true, warnings about packages found more than once are given. For a package to be returned it must contain either a ‘Meta’ subdirectory or a ‘DESCRIPTION’ file containing a valid `version` field, but it need not be installed (it could be a source package if `lib.loc` was set suitably).
`find.package` is not usually the right tool to find out if a package is available for use: the only way to do that is to use `[require](library)` to try to load it. It need not be installed for the correct platform, it might have a version requirement not met by the running version of **R**, there might be dependencies which are not available, ....
`path.package` returns the paths from which the named packages were loaded, or if none were named, for all currently attached packages. Unless `quiet = TRUE` it will warn if some of the packages named are not attached, and given an error if none are.
`packageNotFoundError` creates an error condition object of class `packageNotFoundError` for signaling errors. The condition object contains the fields `package` and `lib.loc`.
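As a minimal sketch of catching this condition class: in recent versions of **R**, `loadNamespace` and friends signal a `packageNotFoundError` for missing packages (the package name below is assumed not to be installed):

```r
## the handler receives the condition object, with its 'package' field
res <- tryCatch(
  loadNamespace("no.such.package.hopefully"),
  packageNotFoundError = function(e) e$package
)
res   # the name of the package that could not be found
```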
### Value
A character vector of paths of package directories.
### See Also
`<path.expand>` and `[normalizePath](normalizepath)` for path standardization.
### Examples
```
try(find.package("knitr"))
## gives no error; with verbose = TRUE it may warn about *all* locations where it is found:
find.package("kitty", quiet=TRUE, verbose=TRUE)
## Find all .libPaths() entries in which a package is found:
findPkgAll <- function(pkg)
unlist(lapply(.libPaths(), function(lib)
find.package(pkg, lib, quiet=TRUE, verbose=FALSE)))
findPkgAll("MASS")
findPkgAll("knitr")
```
r None
`norm` Compute the Norm of a Matrix
------------------------------------
### Description
Computes a matrix norm of `x` using LAPACK. The norm can be the one (`"O"`) norm, the infinity (`"I"`) norm, the Frobenius (`"F"`) norm, the maximum modulus (`"M"`) among elements of a matrix, or the “spectral” or `"2"`-norm, as determined by the value of `type`.
### Usage
```
norm(x, type = c("O", "I", "F", "M", "2"))
```
### Arguments
| | |
| --- | --- |
| `x` | numeric matrix; note that packages such as [Matrix](https://CRAN.R-project.org/package=Matrix) define more `norm()` methods. |
| `type` | character string, specifying the *type* of matrix norm to be computed:
`"O"`, `"o"` or `"1"`
specifies the **o**ne norm, (maximum absolute column sum);
`"I"` or `"i"`
specifies the **i**nfinity norm (maximum absolute row sum);
`"F"` or `"f"`
specifies the **F**robenius norm (the Euclidean norm of `x` treated as if it were a vector);
`"M"` or `"m"`
specifies the **m**aximum modulus of all the elements in `x`; and `"2"`
specifies the “spectral” or 2-norm, which is the largest singular value (`<svd>`) of `x`. The default is `"O"`. Only the first character of `type[1]` is used. |
### Details
The base method of `norm()` calls the LAPACK function `dlange`.
Note that the 1-, Inf- and `"M"` norms are faster to calculate than the Frobenius norm.
Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code.
### Value
The matrix norm, a non-negative number.
### Source
Except for `norm = "2"`, the LAPACK routine `DLANGE`.
LAPACK is from <https://www.netlib.org/lapack/>.
### References
Anderson, E., *et al* (1994). *LAPACK User's Guide*, 2nd edition, SIAM, Philadelphia.
### See Also
`[rcond](kappa)` for the (reciprocal) condition number.
### Examples
```
(x1 <- cbind(1, 1:10))
norm(x1)
norm(x1, "I")
norm(x1, "M")
stopifnot(all.equal(norm(x1, "F"),
sqrt(sum(x1^2))))
hilbert <- function(n) { i <- 1:n; 1 / outer(i - 1, i, "+") }
h9 <- hilbert(9)
## all 5 types of norm:
(nTyp <- eval(formals(base::norm)$type))
sapply(nTyp, norm, x = h9)
```
r None
`CallExternal` Modern Interfaces to C/C++ code
-----------------------------------------------
### Description
Functions to pass **R** objects to compiled C/C++ code that has been loaded into **R**.
### Usage
```
.Call(.NAME, ..., PACKAGE)
.External(.NAME, ..., PACKAGE)
```
### Arguments
| | |
| --- | --- |
| `.NAME` | a character string giving the name of a C function, or an object of class `"[NativeSymbolInfo](getnativesymbolinfo)"`, `"[RegisteredNativeSymbol](getnativesymbolinfo)"` or `"[NativeSymbol](getnativesymbolinfo)"` referring to such a name. |
| `...` | arguments to be passed to the compiled code. Up to 65 for `.Call`. |
| `PACKAGE` | if supplied, confine the search for a character string `.NAME` to the DLL given by this argument (plus the conventional extension, ‘.so’, ‘.dll’, ...). This argument follows `...` and so its name cannot be abbreviated. This is intended to add safety for packages, which can ensure by using this argument that no other package can override their external symbols, and also speeds up the search (see ‘Note’). |
### Details
The functions are used to call compiled code which makes use of internal **R** objects, passing the arguments to the code as a sequence of **R** objects. They assume C calling conventions, so can usually also be used for C++ code.
For details about how to write code to use with these functions see the chapter on ‘System and foreign language interfaces’ in the ‘Writing R Extensions’ manual. They differ in the way the arguments are passed to the C code: `.External` allows for a variable or unlimited number of arguments.
These functions are <primitive>, and `.NAME` is always matched to the first argument supplied (which should not be named). For clarity, avoid using names in the arguments passed to `...` that match or partially match `.NAME`.
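As an illustrative sketch (not taken from this page), a C function callable via `.Call` might look like the following. The function name `add_one` and the file name are hypothetical, and the code assumes compilation against **R**'s headers (e.g. via `R CMD SHLIB add_one.c`):

```c
/* add_one.c -- a hypothetical .Call-compatible function */
#include <Rinternals.h>

SEXP add_one(SEXP x)
{
    R_xlen_t n = XLENGTH(x);
    SEXP ans = PROTECT(allocVector(REALSXP, n));  /* protect result from GC */
    double *xp = REAL(x), *ap = REAL(ans);
    for (R_xlen_t i = 0; i < n; i++)
        ap[i] = xp[i] + 1.0;
    UNPROTECT(1);
    return ans;
}
```

After `dyn.load`ing the resulting shared library, it could be called from **R** as `.Call("add_one", as.double(1:3))`.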
### Value
An **R** object constructed in the compiled code.
### Header files for external code
Writing code for use with these functions will need to use internal **R** structures defined in ‘Rinternals.h’ and/or the macros in ‘Rdefines.h’.
### Note
If one of these functions is to be used frequently, do specify `PACKAGE` (to confine the search to a single DLL) or pass `.NAME` as one of the native symbol objects. Searching for symbols can take a long time, especially when many namespaces are loaded.
You may see `PACKAGE = "base"` for symbols linked into **R**. Do not use this in your own code: such symbols are not part of the API and may be changed without warning.
`PACKAGE = ""` used to be accepted (but was undocumented): it is now an error.
### References
Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. (`.Call`.)
### See Also
`[dyn.load](dynload)`, `[.C](foreign)`, `[.Fortran](foreign)`.
The ‘Writing R Extensions’ manual.
r None
`rank` Sample Ranks
--------------------
### Description
Returns the sample ranks of the values in a vector. Ties (i.e., equal values) and missing values can be handled in several ways.
### Usage
```
rank(x, na.last = TRUE,
ties.method = c("average", "first", "last", "random", "max", "min"))
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric, complex, character or logical vector. |
| `na.last` | for controlling the treatment of `[NA](na)`s. If `TRUE`, missing values in the data are put last; if `FALSE`, they are put first; if `NA`, they are removed; if `"keep"` they are kept with rank `NA`. |
| `ties.method` | a character string specifying how ties are treated, see ‘Details’; can be abbreviated. |
### Details
If all components are different (and no `NA`s), the ranks are well defined, with values in `seq_along(x)`. With some values equal (called ‘ties’), the argument `ties.method` determines the result at the corresponding indices. The `"first"` method results in a permutation with increasing values at each index set of ties, and analogously `"last"` with decreasing values. The `"random"` method puts these in random order whereas the default, `"average"`, replaces them by their mean, and `"max"` and `"min"` replaces them by their maximum and minimum respectively, the latter being the typical sports ranking.
`NA` values are never considered to be equal: for `na.last =
TRUE` and `na.last = FALSE` they are given distinct ranks in the order in which they occur in `x`.
**NB**: `rank` is not itself generic but `<xtfrm>` is, and `rank(xtfrm(x), ....)` will have the desired result if there is a `xtfrm` method. Otherwise, `rank` will make use of `==`, `>`, `is.na` and extraction methods for classed objects, possibly rather slowly.
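For instance, classed objects such as dates are ranked via their comparison and `xtfrm()` methods (a small sketch):

```r
d <- as.Date(c("2020-03-01", "2020-01-01", "2020-02-01"))
rank(d)          # 3 1 2 -- uses the methods for class "Date"
rank(xtfrm(d))   # same result, computed on the underlying numeric codes
```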
### Value
A numeric vector of the same length as `x` with names copied from `x` (unless `na.last = NA`, when missing values are removed). The vector is of integer type unless `x` is a long vector or `ties.method = "average"` when it is of double type (whether or not there are any ties).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<order>` and `<sort>`; `<xtfrm>`, see above.
### Examples
```
(r1 <- rank(x1 <- c(3, 1, 4, 15, 92)))
x2 <- c(3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5)
names(x2) <- letters[1:11]
(r2 <- rank(x2)) # ties are averaged
## rank() is "idempotent": rank(rank(x)) == rank(x) :
stopifnot(rank(r1) == r1, rank(r2) == r2)
## ranks without averaging
rank(x2, ties.method= "first") # first occurrence wins
rank(x2, ties.method= "last") # last occurrence wins
rank(x2, ties.method= "random") # ties broken at random
rank(x2, ties.method= "random") # and again
## keep ties, no averaging
(rma <- rank(x2, ties.method= "max")) # as used classically
(rmi <- rank(x2, ties.method= "min")) # as in Sports
stopifnot(rma + rmi == round(r2 + r2))
## Comparing all tie.methods:
tMeth <- eval(formals(rank)$ties.method)
rx2 <- sapply(tMeth, function(M) rank(x2, ties.method=M))
cbind(x2, rx2)
## ties.method does not matter when there are no ties:
x <- sample(47)
rx <- sapply(tMeth, function(MM) rank(x, ties.method=MM))
stopifnot(all(rx[,1] == rx))
```
r None
`Internal` Call an Internal Function
-------------------------------------
### Description
`.Internal` performs a call to an internal code which is built in to the **R** interpreter.
Only true **R** wizards should even consider using this function, and only **R** developers can add to the list of internal functions.
### Usage
```
.Internal(call)
```
### Arguments
| | |
| --- | --- |
| `call` | a call expression |
### See Also
`[.Primitive](primitive)`, `[.External](callexternal)` (the nearest equivalent available to users).
r None
`as.data.frame` Coerce to a Data Frame
---------------------------------------
### Description
Functions to check if an object is a data frame, or coerce it if possible.
### Usage
```
as.data.frame(x, row.names = NULL, optional = FALSE, ...)
## S3 method for class 'character'
as.data.frame(x, ...,
stringsAsFactors = FALSE)
## S3 method for class 'list'
as.data.frame(x, row.names = NULL, optional = FALSE, ...,
cut.names = FALSE, col.names = names(x), fix.empty.names = TRUE,
check.names = !optional,
stringsAsFactors = FALSE)
## S3 method for class 'matrix'
as.data.frame(x, row.names = NULL, optional = FALSE,
make.names = TRUE, ...,
stringsAsFactors = FALSE)
is.data.frame(x)
```
### Arguments
| | |
| --- | --- |
| `x` | any **R** object. |
| `row.names` | `NULL` or a character vector giving the row names for the data frame. Missing values are not allowed. |
| `optional` | logical. If `TRUE`, setting row names and converting column names (to syntactic names: see `<make.names>`) is optional. Note that all of **R**'s base package `as.data.frame()` methods use `optional` only for column names treatment, basically with the meaning of `<data.frame>(*, check.names = !optional)`. See also the `make.names` argument of the `matrix` method. |
| `...` | additional arguments to be passed to or from methods. |
| `stringsAsFactors` | logical: should the character vector be converted to a factor? |
| | |
| --- | --- |
| `cut.names` | logical or integer; indicating if column names with more than 256 (or `cut.names` if that is numeric) characters should be shortened (and the last 6 characters replaced by `" ..."`). |
| `col.names` | (optional) character vector of column names. |
| `fix.empty.names` | logical indicating if empty column names, i.e., `""` should be fixed up (in `<data.frame>`) or not. |
| `check.names` | logical; passed to the `<data.frame>()` call. |
| | |
| --- | --- |
| `make.names` | a `<logical>`, i.e., one of `FALSE, NA, TRUE`, indicating what should happen if the row names (of the matrix `x`) are invalid. If they are invalid, the default, `TRUE`, calls `<make.names>(*, unique=TRUE)`; `make.names=NA` will use “automatic” row names and a `FALSE` value will signal an error for invalid row names. |
### Details
`as.data.frame` is a generic function with many methods, and users and packages can supply further methods. For classes that act as vectors, often a copy of `as.data.frame.vector` will work as the method.
If a list is supplied, each element is converted to a column in the data frame. Similarly, each column of a matrix is converted separately. This can be overridden if the object has a class which has a method for `as.data.frame`: two examples are matrices of class `"[model.matrix](../../stats/html/model.matrix)"` (which are included as a single column) and list objects of class `"[POSIXlt](datetimeclasses)"` which are coerced to class `"[POSIXct](datetimeclasses)"`.
Arrays can be converted to data frames. One-dimensional arrays are treated like vectors and two-dimensional arrays like matrices. Arrays with more than two dimensions are converted to matrices by ‘flattening’ all dimensions after the first and creating suitable column labels.
Character variables are converted to factor columns unless protected by `[I](asis)`.
If a data frame is supplied, all classes preceding `"data.frame"` are stripped, and the row names are changed if that argument is supplied.
If `row.names = NULL`, row names are constructed from the names or dimnames of `x`, otherwise are the integer sequence starting at one. Few of the methods check for duplicated row names. Names are removed from vector columns unless `[I](asis)`.
### Value
`as.data.frame` returns a data frame, normally with all row names `""` if `optional = TRUE`.
`is.data.frame` returns `TRUE` if its argument is a data frame (that is, has `"data.frame"` amongst its classes) and `FALSE` otherwise.
### References
Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
### See Also
`<data.frame>`, `[as.data.frame.table](table)` for the `table` method (which has additional arguments if called directly).
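A few common coercions, as a brief sketch of the list and matrix methods described above:

```r
## list method: each element becomes a column
as.data.frame(list(x = 1:3, y = letters[1:3]))
## matrix method: dimnames are carried over as row and column names
m <- matrix(1:6, nrow = 2, dimnames = list(c("r1", "r2"), c("a", "b", "c")))
as.data.frame(m)
is.data.frame(m)                   # FALSE
is.data.frame(as.data.frame(m))    # TRUE
```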
| programming_docs |
r None
`seq` Sequence Generation
--------------------------
### Description
Generate regular sequences. `seq` is a standard generic with a default method. `seq.int` is a primitive which can be much faster but has a few restrictions. `seq_along` and `seq_len` are very fast primitives for two common cases.
### Usage
```
seq(...)
## Default S3 method:
seq(from = 1, to = 1, by = ((to - from)/(length.out - 1)),
length.out = NULL, along.with = NULL, ...)
seq.int(from, to, by, length.out, along.with, ...)
seq_along(along.with)
seq_len(length.out)
```
### Arguments
| | |
| --- | --- |
| `...` | arguments passed to or from methods. |
| `from, to` | the starting and (maximal) end values of the sequence. Of length `1` unless just `from` is supplied as an unnamed argument. |
| `by` | number: increment of the sequence. |
| `length.out` | desired length of the sequence. A non-negative number, which for `seq` and `seq.int` will be rounded up if fractional. |
| `along.with` | take the length from the length of this argument. |
### Details
Numerical inputs should all be [finite](is.finite) (that is, not infinite, `[NaN](is.finite)` or `NA`).
The interpretation of the unnamed arguments of `seq` and `seq.int` is *not* standard, and it is recommended always to name the arguments when programming.
`seq` is generic, and only the default method is described here. Note that it dispatches on the class of the **first** argument irrespective of argument names. This can have unintended consequences if it is called with just one argument intending this to be taken as `along.with`: it is much better to use `seq_along` in that case.
`seq.int` is an [internal generic](internalmethods) which dispatches on methods for `"seq"` based on the class of the first supplied argument (before argument matching).
Typical usages are
```
seq(from, to)
seq(from, to, by= )
seq(from, to, length.out= )
seq(along.with= )
seq(from)
seq(length.out= )
```
The first form generates the sequence `from, from+/-1, ..., to` (identical to `from:to`).
The second form generates `from, from+by`, ..., up to the sequence value less than or equal to `to`. Specifying `to -
from` and `by` of opposite signs is an error. Note that the computed final value can go just beyond `to` to allow for rounding error, but is truncated to `to`. (‘Just beyond’ is by up to *1e-10* times `abs(from - to)`.)
The third generates a sequence of `length.out` equally spaced values from `from` to `to`. (`length.out` is usually abbreviated to `length` or `len`, and `seq_len` is much faster.)
The fourth form generates the integer sequence `1, 2, ...,
length(along.with)`. (`along.with` is usually abbreviated to `along`, and `seq_along` is much faster.)
The fifth form generates the sequence `1, 2, ..., length(from)` (as if argument `along.with` had been specified), *unless* the argument is numeric of length 1 when it is interpreted as `1:from` (even for `seq(0)` for compatibility with S). Using either `seq_along` or `seq_len` is much preferred (unless strict S compatibility is essential).
The final form generates the integer sequence `1, 2, ...,
length.out` unless `length.out = 0`, when it generates `integer(0)`.
Very small sequences (with `from - to` of the order of *10^{-14}* times the larger of the ends) will return `from`.
For `seq` (only), up to two of `from`, `to` and `by` can be supplied as complex values provided `length.out` or `along.with` is specified. More generally, the default method of `seq` will handle classed objects with methods for the `Math`, `Ops` and `Summary` group generics.
`seq.int`, `seq_along` and `seq_len` are <primitive>.
### Value
`seq.int` and the default method of `seq` for numeric arguments return a vector of type `"integer"` or `"double"`: programmers should not rely on which.
`seq_along` and `seq_len` return an integer vector, unless it is a *[long vector](longvectors)* when it will be double.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
The methods `[seq.Date](seq.date)` and `[seq.POSIXt](seq.posixt)`.
`[:](colon)`, `<rep>`, `<sequence>`, `<row>`, `<col>`.
### Examples
```
seq(0, 1, length.out = 11)
seq(stats::rnorm(20)) # effectively 'along'
seq(1, 9, by = 2) # matches 'end'
seq(1, 9, by = pi) # stays below 'end'
seq(1, 6, by = 3)
seq(1.575, 5.125, by = 0.05)
seq(17) # same as 1:17, or even better seq_len(17)
```
r None
`unname` Remove names or dimnames
----------------------------------
### Description
Remove the `<names>` or `<dimnames>` attribute of an **R** object.
### Usage
```
unname(obj, force = FALSE)
```
### Arguments
| | |
| --- | --- |
| `obj` | an **R** object. |
| `force` | logical; if true, the `dimnames` (names and row names) are removed even from `<data.frame>`s. |
### Value
Object as `obj` but without `<names>` or `<dimnames>`.
### Examples
```
require(graphics); require(stats)
## Answering a question on R-help (14 Oct 1999):
col3 <- 750+ 100*rt(1500, df = 3)
breaks <- factor(cut(col3, breaks = 360+5*(0:155)))
z <- table(breaks)
z[1:5] # The names are larger than the data ...
barplot(unname(z), axes = FALSE)
```
r None
`gl` Generate Factor Levels
----------------------------
### Description
Generate factors by specifying the pattern of their levels.
### Usage
```
gl(n, k, length = n*k, labels = seq_len(n), ordered = FALSE)
```
### Arguments
| | |
| --- | --- |
| `n` | an integer giving the number of levels. |
| `k` | an integer giving the number of replications. |
| `length` | an integer giving the length of the result. |
| `labels` | an optional vector of labels for the resulting factor levels. |
| `ordered` | a logical indicating whether the result should be ordered or not. |
### Value
The result has levels from `1` to `n` with each value replicated in groups of length `k` out to a total length of `length`.
`gl` is modelled on the *GLIM* function of the same name.
### See Also
The underlying `<factor>()`.
### Examples
```
## First control, then treatment:
gl(2, 8, labels = c("Control", "Treat"))
## 20 alternating 1s and 2s
gl(2, 1, 20)
## alternating pairs of 1s and 2s
gl(2, 2, 20)
```
r None
`bindenv` Binding and Environment Locking, Active Bindings
-----------------------------------------------------------
### Description
These functions represent an interface for adjustments to environments and bindings within environments. They allow for locking environments as well as individual bindings, and for linking a variable to a function.
### Usage
```
lockEnvironment(env, bindings = FALSE)
environmentIsLocked(env)
lockBinding(sym, env)
unlockBinding(sym, env)
bindingIsLocked(sym, env)
makeActiveBinding(sym, fun, env)
bindingIsActive(sym, env)
activeBindingFunction(sym, env)
```
### Arguments
| | |
| --- | --- |
| `env` | an environment. |
| `bindings` | logical specifying whether bindings should be locked. |
| `sym` | a name object or character string. |
| `fun` | a function taking zero or one arguments. |
### Details
The function `lockEnvironment` locks its environment argument. Locking the environment prevents adding or removing variable bindings from the environment. Changing the value of a variable is still possible unless the binding has been locked. The namespace environments of packages with namespaces are locked when loaded.
`lockBinding` locks individual bindings in the specified environment. The value of a locked binding cannot be changed. Locked bindings may be removed from an environment unless the environment is locked.
`makeActiveBinding` installs `fun` in environment `env` so that getting the value of `sym` calls `fun` with no arguments, and assigning to `sym` calls `fun` with one argument, the value to be assigned. This allows the implementation of things like C variables linked to **R** variables and variables linked to databases, and is used to implement `[setRefClass](../../methods/html/refclass)`. It may also be useful for making thread-safe versions of some system globals. Currently active bindings are not preserved during package installation, but they can be created in `[.onLoad](ns-hooks)`.
### Value
The `bindingIsLocked` and `environmentIsLocked` return a length-one logical vector. The remaining functions return `NULL`, invisibly.
### Author(s)
Luke Tierney
### Examples
```
# locking environments
e <- new.env()
assign("x", 1, envir = e)
get("x", envir = e)
lockEnvironment(e)
get("x", envir = e)
assign("x", 2, envir = e)
try(assign("y", 2, envir = e)) # error
# locking bindings
e <- new.env()
assign("x", 1, envir = e)
get("x", envir = e)
lockBinding("x", e)
try(assign("x", 2, envir = e)) # error
unlockBinding("x", e)
assign("x", 2, envir = e)
get("x", envir = e)
# active bindings
f <- local( {
x <- 1
function(v) {
if (missing(v))
cat("get\n")
else {
cat("set\n")
x <<- v
}
x
}
})
makeActiveBinding("fred", f, .GlobalEnv)
bindingIsActive("fred", .GlobalEnv)
fred
fred <- 2
fred
```
r None
`make.names` Make Syntactically Valid Names
--------------------------------------------
### Description
Make syntactically valid names out of character vectors.
### Usage
```
make.names(names, unique = FALSE, allow_ = TRUE)
```
### Arguments
| | |
| --- | --- |
| `names` | character vector to be coerced to syntactically valid names. This is coerced to character if necessary. |
| `unique` | logical; if `TRUE`, the resulting elements are unique. This may be desired for, e.g., column names. |
| `allow_` | logical. For compatibility with **R** prior to 1.9.0. |
### Details
A syntactically valid name consists of letters, numbers and the dot or underline characters and starts with a letter or the dot not followed by a number. Names such as `".2way"` are not valid, and neither are the <reserved> words.
The definition of a *letter* depends on the current locale, but only ASCII digits are considered to be digits.
The character `"X"` is prepended if necessary. All invalid characters are translated to `"."`. A missing value is translated to `"NA"`. Names which match **R** keywords have a dot appended to them. Duplicated values are altered by `<make.unique>`.
### Value
A character vector of same length as `names` with each changed to a syntactically valid name, in the current locale's encoding.
### Warning
Some OSes, notably FreeBSD, report extremely incorrect information about which characters are alphabetic in some locales (typically, all multi-byte locales including UTF-8 locales). However, **R** provides substitutes on Windows, macOS and AIX.
### Note
Prior to **R** version 1.9.0, underscores were not valid in variable names, and code that relies on them being converted to dots will no longer work. Use `allow_ = FALSE` for back-compatibility.
`allow_ = FALSE` is also useful when creating names for export to applications which do not allow underline in names (for example, S-PLUS and some DBMSes).
### See Also
`<make.unique>`, `<names>`, `<character>`, `<data.frame>`.
### Examples
```
make.names(c("a and b", "a-and-b"), unique = TRUE)
# "a.and.b" "a.and.b.1"
make.names(c("a and b", "a_and_b"), unique = TRUE)
# "a.and.b" "a_and_b"
make.names(c("a and b", "a_and_b"), unique = TRUE, allow_ = FALSE)
# "a.and.b" "a.and.b.1"
make.names(c("", "X"), unique = TRUE)
# "X.1" "X" currently; R up to 3.0.2 gave "X" "X.1"
state.name[make.names(state.name) != state.name] # those 10 with a space
```
r None
`license` The R License Terms
------------------------------
### Description
The license terms under which **R** is distributed.
### Usage
```
license()
licence()
```
### Details
**R** is distributed under the terms of the GNU GENERAL PUBLIC LICENSE, either Version 2, June 1991 or Version 3, June 2007. A copy of the version 2 license is in file ‘[R\_HOME](rhome)/doc/COPYING’ and can be viewed by `RShowDoc("COPYING")`. Version 3 of the license can be displayed by `RShowDoc("GPL-3")`.
A small number of files (some of the API header files) are distributed under the LESSER GNU GENERAL PUBLIC LICENSE, version 2.1 or later. A copy of this license is in file ‘$R\_SHARE\_DIR/licenses/LGPL-2.1’ and can be viewed by `RShowDoc("LGPL-2.1")`. Version 3 of the license can be displayed by `RShowDoc("LGPL-3")`.
r None
`weekday.POSIXt` Extract Parts of a POSIXt or Date Object
----------------------------------------------------------
### Description
Extract the weekday, month or quarter, or the Julian time (days since some origin). These are generic functions: the methods for the internal date-time classes are documented here.
### Usage
```
weekdays(x, abbreviate)
## S3 method for class 'POSIXt'
weekdays(x, abbreviate = FALSE)
## S3 method for class 'Date'
weekdays(x, abbreviate = FALSE)
months(x, abbreviate)
## S3 method for class 'POSIXt'
months(x, abbreviate = FALSE)
## S3 method for class 'Date'
months(x, abbreviate = FALSE)
quarters(x, abbreviate)
## S3 method for class 'POSIXt'
quarters(x, ...)
## S3 method for class 'Date'
quarters(x, ...)
julian(x, ...)
## S3 method for class 'POSIXt'
julian(x, origin = as.POSIXct("1970-01-01", tz = "GMT"), ...)
## S3 method for class 'Date'
julian(x, origin = as.Date("1970-01-01"), ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object inheriting from class `"POSIXt"` or `"Date"`. |
| `abbreviate` | logical vector (possibly recycled). Should the names be abbreviated? |
| `origin` | a length-one object inheriting from class `"POSIXt"` or `"Date"`. |
| `...` | arguments for other methods. |
### Value
`weekdays` and `months` return a character vector of names in the locale in use.
`quarters` returns a character vector of `"Q1"` to `"Q4"`.
`julian` returns the number of days (possibly fractional) since the origin, with the origin as an `"origin"` attribute. All time calculations in **R** are done ignoring leap-seconds.
### Note
Other components such as the day of the month or the year are very easy to compute: just use `[as.POSIXlt](as.posixlt)` and extract the relevant component. Alternatively (especially if the components are desired as character strings), use `[strftime](strptime)`.
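A minimal sketch of that approach (the component names are those documented for `as.POSIXlt`; note that `mon` is 0-based and `year` counts from 1900):

```
lt <- as.POSIXlt("2000-02-29 12:34:56", tz = "UTC")
lt$mday          # day of the month: 29
lt$mon + 1       # calendar month: 2
lt$year + 1900   # year: 2000
strftime(lt, "%Y-%m-%d")  # the same information as a character string
```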
### See Also
`[DateTimeClasses](datetimeclasses)`, `[Date](dates)`
### Examples
```
weekdays(.leap.seconds)
months(.leap.seconds)
quarters(.leap.seconds)
## Show how easily you get month, day, year, day (of {month, week, yr}), ... :
## (remember to count from 0 (!): mon = 0..11, wday = 0..6, etc !!)
##' Transform (Time-)Date vector to convenient data frame :
dt2df <- function(dt, dName = deparse(substitute(dt)), stringsAsFactors = FALSE) {
DF <- as.data.frame(unclass(as.POSIXlt( dt )), stringsAsFactors=stringsAsFactors)
`names<-`(cbind(dt, DF, deparse.level=0L), c(dName, names(DF)))
}
## e.g.,
dt2df(.leap.seconds) # date+time
dt2df(Sys.Date() + 0:9) # date
##' Even simpler: Date -> Matrix - dropping time info {sec,min,hour, isdst}
d2mat <- function(x) simplify2array(unclass(as.POSIXlt(x))[4:7])
## e.g.,
d2mat(seq(as.Date("2000-02-02"), by=1, length.out=30)) # has R 1.0.0's release date
## Julian Day Number (JDN, https://en.wikipedia.org/wiki/Julian_day)
## is the number of days since noon UTC on the first day of 4713 BC
## in the proleptic Julian calendar; more precisely, it is measured in
## 'Terrestrial Time', which differs from UTC by a few seconds.
## See https://en.wikipedia.org/wiki/Terrestrial_Time
julian(Sys.Date(), -2440588) # from a day
floor(as.numeric(julian(Sys.time())) + 2440587.5) # from a date-time
```
r None
`det` Calculate the Determinant of a Matrix
--------------------------------------------
### Description
`det` calculates the determinant of a matrix. `determinant` is a generic function that returns separately the modulus of the determinant, optionally on the logarithm scale, and the sign of the determinant.
### Usage
```
det(x, ...)
determinant(x, logarithm = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | numeric matrix: logical matrices are coerced to numeric. |
| `logarithm` | logical; if `TRUE` (default) return the logarithm of the modulus of the determinant. |
| `...` | Optional arguments. At present none are used. Previous versions of `det` allowed an optional `method` argument. This argument will be ignored but will not produce an error. |
### Details
The `determinant` function uses an LU decomposition and the `det` function is simply a wrapper around a call to `determinant`.
Often, computing the determinant is *not* what you should be doing to solve a given problem.
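The relationship between the two functions can be verified directly; a small sketch:

```
x <- matrix(c(2, 1, 1, 3), 2)   # determinant is 2*3 - 1*1 = 5
d <- determinant(x, logarithm = TRUE)
## det() is sign * exp(log-modulus):
stopifnot(all.equal(det(x), as.vector(d$sign * exp(d$modulus))))
det(x)  # 5
```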
### Value
For `det`, the determinant of `x`. For `determinant`, a list with components
| | |
| --- | --- |
| `modulus` | a numeric value. The modulus (absolute value) of the determinant if `logarithm` is `FALSE`; otherwise the logarithm of the modulus. |
| `sign` | integer; either *+1* or *-1* according to whether the determinant is positive or negative. |
### Examples
```
(x <- matrix(1:4, ncol = 2))
unlist(determinant(x))
det(x)
det(print(cbind(1, 1:3, c(2,0,1))))
```
r None
`Logic` Logical Operators
--------------------------
### Description
These operators act on raw, logical and number-like vectors.
### Usage
```
! x
x & y
x && y
x | y
x || y
xor(x, y)
isTRUE (x)
isFALSE(x)
```
### Arguments
| | |
| --- | --- |
| `x, y` | `<raw>`, `<logical>` or ‘number-like’ vectors (i.e., of types `<double>` (class `<numeric>`), `<integer>` and `<complex>`), or objects for which methods have been written. |
### Details
`!` indicates logical negation (NOT).
`&` and `&&` indicate logical AND and `|` and `||` indicate logical OR. The shorter form performs elementwise comparisons in much the same way as arithmetic operators. The longer form evaluates left to right examining only the first element of each vector. Evaluation proceeds only until the result is determined. The longer form is appropriate for programming control-flow and typically preferred in `[if](control)` clauses.
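A short sketch of the difference (note that in recent versions of **R** the longer forms require length-one arguments):

```
c(TRUE, FALSE) & c(TRUE, TRUE)   # elementwise: TRUE FALSE
TRUE || stop("not evaluated")    # TRUE; the right-hand side is never touched
FALSE && stop("not evaluated")   # FALSE, for the same reason
```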
`xor` indicates elementwise exclusive OR.
`isTRUE(x)` is the same as `{ is.logical(x) && length(x) == 1 && !is.na(x) && x }`; `isFALSE()` is defined analogously. Consequently, `if(isTRUE(cond))` may be preferable to `if(cond)` because of `[NA](na)`s.
In earlier **R** versions, `isTRUE` was defined as `function(x) identical(x, TRUE)`, which had the drawback of being false for, e.g., `x <- c(val = TRUE)`.
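A sketch of the difference between the current and the old definitions:

```
isTRUE(c(val = TRUE))            # TRUE: names are ignored
identical(c(val = TRUE), TRUE)   # FALSE: the old definition failed here
cond <- NA
if (isTRUE(cond)) "yes" else "no"  # "no"; a plain if (cond) would be an error
```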
Numeric and complex vectors will be coerced to logical values, with zero being false and all non-zero values being true. Raw vectors are handled without any coercion for `!`, `&`, `|` and `xor`, with these operators being applied bitwise (so `!` is the 1s-complement).
The operators `!`, `&` and `|` are generic functions: methods can be written for them individually or via the `[Ops](groupgeneric)` (or S4 `Logic`, see below) group generic function. (See `[Ops](groupgeneric)` for how dispatch is computed.)
`[NA](na)` is a valid logical object. Where a component of `x` or `y` is `NA`, the result will be `NA` if the outcome is ambiguous. In other words `NA & TRUE` evaluates to `NA`, but `NA & FALSE` evaluates to `FALSE`. See the examples below.
See [Syntax](syntax) for the precedence of these operators: unlike many other languages (including S) the AND and OR operators do not have the same precedence (the AND operators have higher precedence than the OR operators).
### Value
For `!`, a logical or raw vector (for raw `x`) of the same length as `x`: names, dims and dimnames are copied from `x`, and all other attributes (including class) if no coercion is done.
For `|`, `&` and `xor` a logical or raw vector. If involving a zero-length vector the result has length zero. Otherwise, the elements of shorter vectors are recycled as necessary (with a `<warning>` when they are recycled only *fractionally*). The rules for determining the attributes of the result are rather complicated. Most attributes are taken from the longer argument, the first if they are of the same length. Names will be copied from the first if it is the same length as the answer, otherwise from the second if that is. For time series, these operations are allowed only if the series are compatible, when the class and `[tsp](../../stats/html/tsp)` attribute of whichever is a time series (the same, if both are) are used. For arrays (and an array result) the dimensions and dimnames are taken from first argument if it is an array, otherwise the second.
For `||`, `&&` and `isTRUE`, a length-one logical vector.
### S4 methods
`!`, `&` and `|` are S4 generics, the latter two part of the `[Logic](../../methods/html/s4groupgeneric)` group generic (and hence methods need argument names `e1, e2`).
### Note
The elementwise operators are sometimes called as functions as e.g. ``&`(x, y)`: see the description of how argument-matching is done in `[Ops](groupgeneric)`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[TRUE](logical)` or `<logical>`.
`<any>` and `<all>` for OR and AND on many scalar arguments.
`[Syntax](syntax)` for operator precedence.
`[bitwAnd](bitwise)` for bitwise versions for integer vectors.
### Examples
```
y <- 1 + (x <- stats::rpois(50, lambda = 1.5) / 4 - 1)
x[(x > 0) & (x < 1)] # all x values between 0 and 1
if (any(x == 0) || any(y == 0)) "zero encountered"
## construct truth tables :
x <- c(NA, FALSE, TRUE)
names(x) <- as.character(x)
outer(x, x, "&") ## AND table
outer(x, x, "|") ## OR table
```
r None
`list` Lists – Generic and Dotted Pairs
----------------------------------------
### Description
Functions to construct, coerce and check for both kinds of **R** lists.
### Usage
```
list(...)
pairlist(...)
as.list(x, ...)
## S3 method for class 'environment'
as.list(x, all.names = FALSE, sorted = FALSE, ...)
as.pairlist(x)
is.list(x)
is.pairlist(x)
alist(...)
```
### Arguments
| | |
| --- | --- |
| `...` | objects, possibly named. |
| `x` | object to be coerced or tested. |
| `all.names` | a logical indicating whether to copy all values or (default) only those whose names do not begin with a dot. |
| `sorted` | a logical indicating whether the `<names>` of the resulting list should be sorted (increasingly). Note that this is somewhat costly, but may be useful for comparison of environments. |
### Details
Almost all lists in **R** internally are *Generic Vectors*, whereas traditional *dotted pair* lists (as in LISP) remain available but rarely seen by users (except as `<formals>` of functions).
The arguments to `list` or `pairlist` are of the form `value` or `tag = value`. The functions return a list or dotted pair list composed of its arguments with each value either tagged or untagged, depending on how the argument was specified.
`alist` handles its arguments as if they described function arguments. So the values are not evaluated, and tagged arguments with no value are allowed whereas `list` simply ignores them. `alist` is most often used in conjunction with `<formals>`.
`as.list` attempts to coerce its argument to a list. For functions, this returns the concatenation of the list of formal arguments and the function body. For expressions, the list of constituent elements is returned. `as.list` is generic, and as the default method calls `[as.vector](vector)(mode = "list")` for a non-list, methods for `as.vector` may be invoked. `as.list` turns a factor into a list of one-element factors. Attributes may be dropped unless the argument already is a list or expression. (This is inconsistent with functions such as `[as.character](character)` which always drop attributes, and is for efficiency since lists can be expensive to copy.)
`is.list` returns `TRUE` if and only if its argument is a `list` *or* a `pairlist` of `length` *> 0*. `is.pairlist` returns `TRUE` if and only if the argument is a pairlist or `NULL` (see below).
The `"<environment>"` method for `as.list` copies the name-value pairs (for names not beginning with a dot) from an environment to a named list. The user can request that all named objects are copied. Unless `sorted = TRUE`, the list is in no particular order (the order depends on the order of creation of objects and whether the environment is hashed). No enclosing environments are searched. (Objects copied are duplicated so this can be an expensive operation.) Note that there is an inverse operation, the `<as.environment>()` method for list objects.
An empty pairlist, `pairlist()` is the same as `[NULL](null)`. This is different from `list()`: some but not all operations will promote an empty pairlist to an empty list.
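That identity, and the asymmetry with `list()`, can be checked directly:

```
stopifnot(is.null(pairlist()),   # an empty pairlist *is* NULL
          !is.null(list()),      # but an empty list is not
          is.pairlist(NULL))     # and NULL passes the pairlist test
length(list())  # 0
```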
`as.pairlist` is implemented as `[as.vector](vector)(x, "pairlist")`, and hence will dispatch methods for the generic function `as.vector`. Lists are copied element-by-element into a pairlist and the names of the list used as tags for the pairlist: the return value for other types of argument is undocumented.
`list`, `is.list` and `is.pairlist` are <primitive> functions.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<vector>("list", length)` for creation of a list with empty components; `<c>`, for concatenation; `<formals>`. `<unlist>` is an approximate inverse to `as.list()`.
‘[plotmath](../../grdevices/html/plotmath)’ for the use of `list` in plot annotation.
### Examples
```
require(graphics)
# create a plotting structure
pts <- list(x = cars[,1], y = cars[,2])
plot(pts)
is.pairlist(.Options) # a user-level pairlist
## "pre-allocate" an empty list of length 5
vector("list", 5)
# Argument lists
f <- function() x
# Note the specification of a "..." argument:
formals(f) <- al <- alist(x = , y = 2+3, ... = )
f
al
## environment->list coercion
e1 <- new.env()
e1$a <- 10
e1$b <- 20
as.list(e1)
```
r None
`file.choose` Choose a File Interactively
------------------------------------------
### Description
Choose a file interactively.
### Usage
```
file.choose(new = FALSE)
```
### Arguments
| | |
| --- | --- |
| `new` | logical: choose the style of dialog box presented to the user; at present only `new = FALSE` is used. |
### Value
A character vector of length one giving the file path.
### See Also
`<list.files>` for non-interactive selection.
r None
`print.default` Default Printing
---------------------------------
### Description
`print.default` is the *default* method of the generic `<print>` function which prints its argument.
### Usage
```
## Default S3 method:
print(x, digits = NULL, quote = TRUE,
na.print = NULL, print.gap = NULL, right = FALSE,
max = NULL, width = NULL, useSource = TRUE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | the object to be printed. |
| `digits` | a non-null value for `digits` specifies the minimum number of significant digits to be printed in values. The default, `NULL`, uses `[getOption](options)("digits")`. (For the interpretation for complex numbers see `[signif](round)`.) Non-integer values will be rounded down, and only values greater than or equal to 1 and no greater than 22 are accepted. |
| `quote` | logical, indicating whether or not strings (`<character>`s) should be printed with surrounding quotes. |
| `na.print` | a character string which is used to indicate `[NA](na)` values in printed output, or `NULL` (see ‘Details’). |
| `print.gap` | a non-negative integer *≤ 1024*, or `NULL` (meaning 1), giving the spacing between adjacent columns in printed vectors, matrices and arrays. |
| `right` | logical, indicating whether or not strings should be right aligned. The default is left alignment. |
| `max` | a non-null value for `max` specifies the approximate maximum number of entries to be printed. The default, `NULL`, uses `[getOption](options)("max.print")`: see that help page for more details. |
| `width` | controls the maximum number of columns on a line used in printing vectors, matrices, etc. The default, `NULL`, uses `[getOption](options)("width")`: see that help page for more details including allowed values. |
| `useSource` | logical, indicating whether to use source references or copies rather than deparsing [language objects](is.language). The default is to use the original source if it is available. |
| `...` | further arguments to be passed to or from other methods. They are ignored in this function. |
### Details
The default for printing `NA`s is to print `NA` (without quotes) unless this is a character `NA` *and* `quote = FALSE`, when `<NA>` is printed.
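A sketch of the `NA` rules just described:

```
print(c("a", NA))                 # "a" NA     (character NA, quoted)
print(c("a", NA), quote = FALSE)  # a  <NA>    (character NA, unquoted)
print(c(1, NA))                   # 1 NA       (numeric NA)
```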
The same number of decimal places is used throughout a vector. This means that `digits` specifies the minimum number of significant digits to be used, and that at least one entry will be encoded with that minimum number. However, if all the encoded elements then have trailing zeroes, the number of decimal places is reduced until at least one element has a non-zero final digit. Decimal points are only included if at least one decimal place is selected.
Attributes are printed respecting their class(es), using the values of `digits` to `print.default`, but using the default values (for the methods called) of the other arguments.
Option `width` controls the printing of vectors, matrices and arrays, and option `deparse.cutoff` controls the printing of [language objects](is.language) such as calls and formulae.
When the methods package is attached, `print` will call `[show](../../methods/html/show)` for **R** objects with formal classes (‘S4’) if called with no optional arguments.
### Large number of digits
Note that for large values of `digits`, currently for `digits >= 16`, the calculation of the number of significant digits will depend on the platform's internal (C library) implementation of sprintf() functionality.
### Single-byte locales
If a non-printable character is encountered during output, it is represented as one of the ANSI escape sequences (\a, \b, \f, \n, \r, \t, \v, \\ and \0: see [Quotes](quotes)), or failing that as a 3-digit octal code: for example the UK currency pound sign in the C locale (if implemented correctly) is printed as \243. Which characters are non-printable depends on the locale. (Because some versions of Windows get this wrong, all bytes with the upper bit set are regarded as printable on Windows in a single-byte locale.)
### Unicode and other multi-byte locales
In all locales, the characters in the ASCII range (0x00 to 0x7f) are printed in the same way, as-is if printable, otherwise via ANSI escape sequences or 3-digit octal escapes as described for single-byte locales. Whether a character is printable depends on the current locale and the operating system (C library).
Multi-byte non-printing characters are printed as an escape sequence of the form \uxxxx or \Uxxxxxxxx (in hexadecimal). This is the internal code for the wide-character representation of the character. If this is not known to be Unicode code points, a warning is issued. The only known exceptions are certain Japanese ISO 2022 locales on commercial Unixes, which use a concatenation of the bytes: it is unlikely that **R** compiles on such a system.
It is possible to have a character string in a character vector that is not valid in the current locale. If a byte is encountered that is not part of a valid character it is printed in hex in the form \xab and this is repeated until the start of a valid character. (This will rapidly recover from minor errors in UTF-8.)
### See Also
The generic `<print>`, `<options>`. The `"<noquote>"` class and print method.
`[encodeString](encodestring)`, which encodes a character vector the way it would be printed.
### Examples
```
pi
print(pi, digits = 16)
LETTERS[1:16]
print(LETTERS, quote = FALSE)
M <- cbind(I = 1, matrix(1:10000, ncol = 10,
dimnames = list(NULL, LETTERS[1:10])))
utils::head(M) # makes more sense than
print(M, max = 1000) # prints 90 rows and a message about omitting 910
```
r None
`chartr` Character Translation and Casefolding
-----------------------------------------------
### Description
Translate characters in character vectors, in particular from upper to lower case or vice versa.
### Usage
```
chartr(old, new, x)
tolower(x)
toupper(x)
casefold(x, upper = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | a character vector, or an object that can be coerced to character by `[as.character](character)`. |
| `old` | a character string specifying the characters to be translated. If a character vector of length 2 or more is supplied, the first element is used with a warning. |
| `new` | a character string specifying the translations. If a character vector of length 2 or more is supplied, the first element is used with a warning. |
| `upper` | logical: translate to upper or lower case? |
### Details
`chartr` translates each character in `x` that is specified in `old` to the corresponding character specified in `new`. Ranges are supported in the specifications, but character classes and repeated characters are not. If `old` contains more characters than `new`, an error is signaled; if it contains fewer characters, the extra characters at the end of `new` are ignored.
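A sketch of those length rules (the commented-out call is the error case):

```
chartr("ab", "XYZ", "abc")     # "XYc": the unused "Z" in `new` is ignored
## chartr("abc", "XY", "abc")  # error: 'old' is longer than 'new'
stopifnot(identical(chartr("ab", "XYZ", "abc"), "XYc"))
```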
`tolower` and `toupper` convert upper-case characters in a character vector to lower-case, or vice versa. Non-alphabetic characters are left unchanged. More than one character can be mapped to a single upper-case character.
`casefold` is a wrapper for `tolower` and `toupper` provided for compatibility with S-PLUS.
### Value
A character vector of the same length and with the same attributes as `x` (after possible coercion).
Elements of the result will have the encoding declared as that of the current locale (see `[Encoding](encoding)`) if the corresponding input had a declared encoding and the current locale is either Latin-1 or UTF-8. The result will be in the current locale's encoding unless the corresponding input was in UTF-8 or Latin-1, when it will be in UTF-8.
### Note
These functions are platform-dependent, usually using OS services. The latter can be quite deficient, for example only covering ASCII characters in 8-bit locales. The definition of ‘alphabetic’ is platform-dependent and liable to change over time as most platforms are based on the frequently-updated Unicode tables.
### See Also
`[sub](grep)` and `[gsub](grep)` for other substitutions in strings.
### Examples
```
x <- "MiXeD cAsE 123"
chartr("iXs", "why", x)
chartr("a-cX", "D-Fw", x)
tolower(x)
toupper(x)
## "Mixed Case" Capitalizing - toupper( every first letter of a word ) :
.simpleCap <- function(x) {
s <- strsplit(x, " ")[[1]]
paste(toupper(substring(s, 1, 1)), substring(s, 2),
sep = "", collapse = " ")
}
.simpleCap("the quick red fox jumps over the lazy brown dog")
## -> [1] "The Quick Red Fox Jumps Over The Lazy Brown Dog"
## and the better, more sophisticated version:
capwords <- function(s, strict = FALSE) {
cap <- function(s) paste(toupper(substring(s, 1, 1)),
{s <- substring(s, 2); if(strict) tolower(s) else s},
sep = "", collapse = " " )
sapply(strsplit(s, split = " "), cap, USE.NAMES = !is.null(names(s)))
}
capwords(c("using AIC for model selection"))
## -> [1] "Using AIC For Model Selection"
capwords(c("using AIC", "for MODEL selection"), strict = TRUE)
## -> [1] "Using Aic" "For Model Selection"
## ^^^ ^^^^^
## 'bad' 'good'
## -- Very simple insecure crypto --
rot <- function(ch, k = 13) {
p0 <- function(...) paste(c(...), collapse = "")
A <- c(letters, LETTERS, " '")
I <- seq_len(k); chartr(p0(A), p0(c(A[-I], A[I])), ch)
}
pw <- "my secret pass phrase"
(crypw <- rot(pw, 13)) #-> you can send this off
## now ``decrypt'' :
rot(crypw, 54 - 13) # -> the original:
stopifnot(identical(pw, rot(crypw, 54 - 13)))
```
r None
`base-package` The R Base Package
----------------------------------
### Description
Base R functions
### Details
This package contains the basic functions which let **R** function as a language: arithmetic, input/output, basic programming support, etc. Its contents are available through inheritance from any environment.
For a complete list of functions, use `library(help = "base")`.
r None
`body` Access to and Manipulation of the Body of a Function
------------------------------------------------------------
### Description
Get or set the *body* of a function which is basically all of the function definition but its formal arguments (`<formals>`), see the ‘Details’.
### Usage
```
body(fun = sys.function(sys.parent()))
body(fun, envir = environment(fun)) <- value
```
### Arguments
| | |
| --- | --- |
| `fun` | a function object, or see ‘Details’. |
| `envir` | environment in which the function should be defined. |
| `value` | an object, usually a [language object](is.language): see section ‘Value’. |
### Details
For the first form, `fun` can be a character string naming the function to be manipulated, which is searched for from the parent frame. If it is not specified, the function calling `body` is used.
The bodies of all but the simplest functions are braced expressions, that is calls to `{`: see the ‘Examples’ section for how to create such a call.
### Value
`body` returns the body of the function specified. This is normally a [language object](is.language), most often a call to `{`, but it can also be a `[symbol](../../grdevices/html/plotmath)` such as `pi` or a constant (e.g., `3` or `"R"`) to be the return value of the function.
The replacement form sets the body of a function to the object on the right hand side, and (potentially) resets the `<environment>` of the function, and drops `<attributes>`. If `value` is of class `"<expression>"` the first element is used as the body: any additional elements are ignored, with a warning.
### See Also
The three parts of a (non-primitive) function are its `<formals>`, `body`, and `<environment>`.
Further, see `[alist](list)`, `<args>`, `<function>`.
### Examples
```
body(body)
f <- function(x) x^5
body(f) <- quote(5^x)
## or equivalently body(f) <- expression(5^x)
f(3) # = 125
body(f)
## creating a multi-expression body
e <- expression(y <- x^2, return(y)) # or a list
body(f) <- as.call(c(as.name("{"), e))
f
f(8)
## Using substitute() may be simpler than 'as.call(c(as.name("{",..)))':
stopifnot(identical(body(f), substitute({ y <- x^2; return(y) })))
```
r None
`force` Force Evaluation of an Argument
----------------------------------------
### Description
Forces the evaluation of a function argument.
### Usage
```
force(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a formal argument of the enclosing function. |
### Details
`force` forces the evaluation of a formal argument. This can be useful if the argument will be captured in a closure by the lexical scoping rules and will later be altered by an explicit assignment or an implicit assignment in a loop or an apply function.
### Note
This is semantic sugar: just evaluating the symbol will do the same thing (see the examples).
`force` does not force the evaluation of other [promises](delayedassign). (It works by forcing the promise that is created when the actual arguments of a call are matched to the formal arguments of a closure, the mechanism which implements *lazy evaluation*.)
### Examples
```
f <- function(y) function() y
lf <- vector("list", 5)
for (i in seq_along(lf)) lf[[i]] <- f(i)
lf[[1]]() # returns 5
g <- function(y) { force(y); function() y }
lg <- vector("list", 5)
for (i in seq_along(lg)) lg[[i]] <- g(i)
lg[[1]]() # returns 1
## This is identical to
g <- function(y) { y; function() y }
```
r None
`file.show` Display One or More Text Files
-------------------------------------------
### Description
Display one or more (plain) text files, in a platform specific way, typically via a ‘pager’.
### Usage
```
file.show(..., header = rep("", nfiles),
title = "R Information",
delete.file = FALSE, pager = getOption("pager"),
encoding = "")
```
### Arguments
| | |
| --- | --- |
| `...` | one or more character vectors containing the names of the files to be displayed. Paths will have [tilde expansion](path.expand). |
| `header` | character vector (of the same length as the number of files specified in `...`) giving a header for each file being displayed. Defaults to empty strings. |
| `title` | an overall title for the display. If a single separate window is used for the display, `title` will be used as the window title. If multiple windows are used, their titles should combine the title and the file-specific header. |
| `delete.file` | should the files be deleted after display? Used for temporary files. |
| `pager` | the pager to be used: not used on all platforms. |
| `encoding` | character string giving the encoding to be assumed for the file(s). |
### Details
This function provides the core of the R help system, but it can be used for other purposes as well, such as `[page](../../utils/html/page)`.
How the pager is implemented is highly system-dependent.
The basic Unix version concatenates the files (using the headers) to a temporary file, and displays it in the pager selected by the `pager` argument, which is a character vector specifying a system command (a full path or a command found on the PATH) to run on the set of files. The ‘factory-fresh’ default is to use ‘R\_HOME/bin/pager’, which is a shell script running the command-line specified by the environment variable PAGER whose default is set at configuration, usually to `less`. On a Unix-alike `more` is used if `pager` is empty.
Most GUI systems will use a separate pager window for each file, and let the user leave it up while **R** continues running. The selection of such pagers could either be done using special pager names being intercepted by lower-level code (such as `"internal"` and `"console"` on Windows), or by letting `pager` be an **R** function which will be called with arguments `(files, header, title, delete.file)` corresponding to the first four arguments of `file.show` and take care of interfacing to the GUI.
The `R.app` GUI on macOS uses its internal pager irrespective of the setting of `pager`.
Not all implementations will honour `delete.file`. In particular, using an external pager on Windows does not, as there is no way to know when the external application has finished with the file.
### Author(s)
Ross Ihaka, Brian Ripley.
### See Also
`<files>`, `<list.files>`, `[help](../../utils/html/help)`; `[RShowDoc](../../utils/html/rshowdoc)` calls `file.show()` for `type = "text"`. Consider `[getOption](options)("pdfviewer")` and e.g., `<system>` for displaying pdf files.
`[file.edit](../../utils/html/file.edit)`.
### Examples
```
file.show(file.path(R.home("doc"), "COPYRIGHTS"))
```
r None
`zutils` Miscellaneous Internal/Programming Utilities
------------------------------------------------------
### Description
Miscellaneous internal/programming utilities.
### Usage
```
.standard_regexps()
```
### Details
`.standard_regexps` returns a list of ‘standard’ regexps, including elements named `valid_package_name` and `valid_package_version` with the obvious meanings. The regexps are not anchored.
r None
`attr` Object Attributes
-------------------------
### Description
Get or set specific attributes of an object.
### Usage
```
attr(x, which, exact = FALSE)
attr(x, which) <- value
```
### Arguments
| | |
| --- | --- |
| `x` | an object whose attributes are to be accessed. |
| `which` | a non-empty character string specifying which attribute is to be accessed. |
| `exact` | logical: should `which` be matched exactly? |
| `value` | an object, the new value of the attribute, or `NULL` to remove the attribute. |
### Details
These functions provide access to a single attribute of an object. The replacement form causes the named attribute to take the value specified (or create a new attribute with the value given).
The extraction function first looks for an exact match to `which` amongst the attributes of `x`, then (unless `exact = TRUE`) a unique partial match. (Setting `<options>(warnPartialMatchAttr = TRUE)` causes partial matches to give warnings.)
The replacement function only uses exact matches.
Note that some attributes (namely `<class>`, `<comment>`, `<dim>`, `<dimnames>`, `<names>`, `<row.names>` and `[tsp](../../stats/html/tsp)`) are treated specially and have restrictions on the values which can be set. (Note that this is not true of `<levels>` which should be set for factors via the `levels` replacement function.)
The extractor function allows (and does not match) empty and missing values of `which`: the replacement function does not.
`[NULL](null)` objects cannot have attributes and attempting to assign one by `attr` gives an error.
Both are <primitive> functions.
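A small sketch of the matching rules described above (the attribute names are illustrative):

```r
x <- 1:3
attr(x, "myattr") <- "hello"
attr(x, "my")                  # unique partial match: "hello"
attr(x, "my", exact = TRUE)    # NULL -- exact matching required
attr(x, "myattr2") <- "world"
attr(x, "my")                  # NULL -- partial match is no longer unique
```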
### Value
For the extractor, the value of the attribute matched, or `NULL` if no exact match is found and either no partial match or more than one partial match is found.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<attributes>`
### Examples
```
# create a 2 by 5 matrix
x <- 1:10
attr(x,"dim") <- c(2, 5)
```
r None
`readLines` Read Text Lines from a Connection
----------------------------------------------
### Description
Read some or all text lines from a connection.
### Usage
```
readLines(con = stdin(), n = -1L, ok = TRUE, warn = TRUE,
encoding = "unknown", skipNul = FALSE)
```
### Arguments
| | |
| --- | --- |
| `con` | a [connection](connections) object or a character string. |
| `n` | integer. The (maximal) number of lines to read. Negative values indicate that one should read up to the end of input on the connection. |
| `ok` | logical. Is it OK to reach the end of the connection before `n > 0` lines are read? If not, an error will be generated. |
| `warn` | logical. Warn if a text file is missing a final EOL or if there are embedded nuls in the file. |
| `encoding` | encoding to be assumed for input strings. It is used to mark character strings as known to be in Latin-1 or UTF-8: it is not used to re-encode the input. To do the latter, specify the encoding as part of the connection `con` or via `<options>(encoding=)`: see the examples. |
| `skipNul` | logical: should nuls be skipped? |
### Details
If the `con` is a character string, the function calls `[file](connections)` to obtain a file connection which is opened for the duration of the function call. This can be a compressed file.
If the connection is open it is read from its current position. If it is not open, it is opened in `"rt"` mode for the duration of the call and then closed (but not destroyed; one must call `[close](connections)` to do that).
If the final line is incomplete (no final EOL marker) the behaviour depends on whether the connection is blocking or not. For a non-blocking text-mode connection the incomplete line is pushed back, silently. For all other connections the line will be accepted, with a warning.
Whatever mode the connection is opened in, any of LF, CRLF or CR will be accepted as the EOL marker for a line.
Embedded nuls in the input stream will terminate the line currently being read, with a warning (unless `skipNul = TRUE` or `warn = FALSE`).
If `con` is a not-already-open [connection](connections) with a non-default `encoding` argument, the text is converted to UTF-8 and declared as such (and the `encoding` argument to `readLines` is ignored). See the examples.
### Value
A character vector of length the number of lines read.
The elements of the result have a declared encoding if `encoding` is `"latin1"` or `"UTF-8"`.
### Note
The default connection, `[stdin](showconnections)`, may be different from `con = "stdin"`: see `[file](connections)`.
### See Also
`<connections>`, `[writeLines](writelines)`, `[readBin](readbin)`, `<scan>`
### Examples
```
fil <- tempfile(fileext = ".data")
cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = fil,
sep = "\n")
readLines(fil, n = -1)
unlink(fil) # tidy up
## difference in blocking
fil <- tempfile("test")
cat("123\nabc", file = fil)
readLines(fil) # line with a warning
con <- file(fil, "r", blocking = FALSE)
readLines(con) # empty
cat(" def\n", file = fil, append = TRUE)
readLines(con) # gets both
close(con)
unlink(fil) # tidy up
## Not run:
# read a 'Windows Unicode' file
A <- readLines(con <- file("Unicode.txt", encoding = "UCS-2LE"))
close(con)
unique(Encoding(A)) # will most likely be UTF-8
## End(Not run)
```
r None
`Colon` Colon Operator
-----------------------
### Description
Generate regular sequences.
### Usage
```
from:to
a:b
```
### Arguments
| | |
| --- | --- |
| `from` | starting value of sequence. |
| `to` | (maximal) end value of the sequence. |
| `a, b` | `<factor>`s of the same length. |
### Details
The binary operator `:` has two meanings: for factors `a:b` is equivalent to `<interaction>(a, b)` (but the levels are ordered and labelled differently).
For other arguments `from:to` is equivalent to `seq(from, to)`, and generates a sequence from `from` to `to` in steps of `1` or `-1`. Value `to` will be included if it differs from `from` by an integer up to a numeric fuzz of about `1e-7`. Non-numeric arguments are coerced internally (hence without dispatching methods) to numeric—complex values will have their imaginary parts discarded with a warning.
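A sketch illustrating the truncation and the numeric fuzz described above:

```r
pi:6            # pi, pi + 1, pi + 2 -- 6 is not reached by steps of 1
1:2.5           # 1 2 -- 'to' differs from 'from' by a non-integer, so truncated
1:(0.3/0.1)     # 1 2 3 -- 0.3/0.1 is slightly below 3, but within the ~1e-7 fuzz
```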
### Value
For numeric arguments, a numeric vector. This will be of type `<integer>` if `from` is integer-valued and the result is representable in the **R** integer type, otherwise of type `"double"` (aka `<mode>` `"<numeric>"`).
For factors, an unordered factor with levels labelled as `la:lb` and ordered lexicographically (that is, `lb` varies fastest).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
(for numeric arguments: S does not have `:` for factors.)
### See Also
`<seq>` (a *generalization* of `from:to`).
As an alternative to using `:` for factors, `<interaction>`.
For `:` used in the formal representation of an interaction, see `[formula](../../stats/html/formula)`.
### Examples
```
1:4
pi:6 # real
6:pi # integer
f1 <- gl(2, 3); f1
f2 <- gl(3, 2); f2
f1:f2 # a factor, the "cross" f1 x f2
```
r None
`mode` The (Storage) Mode of an Object
---------------------------------------
### Description
Get or set the type or storage mode of an object.
### Usage
```
mode(x)
mode(x) <- value
storage.mode(x)
storage.mode(x) <- value
```
### Arguments
| | |
| --- | --- |
| `x` | any **R** object. |
| `value` | a character string giving the desired mode or ‘storage mode’ (type) of the object. |
### Details
Both `mode` and `storage.mode` return a character string giving the (storage) mode of the object — often the same — both relying on the output of `<typeof>(x)`, see the example below.
`mode(x) <- "newmode"` changes the `mode` of object `x` to `newmode`. This is only supported if there is an appropriate `as.newmode` function, for example `"logical"`, `"integer"`, `"double"`, `"complex"`, `"raw"`, `"character"`, `"list"`, `"expression"`, `"name"`, `"symbol"` and `"function"`. Attributes are preserved (but see below).
`storage.mode(x) <- "newmode"` is a more efficient <primitive> version of `mode<-`, which works for `"newmode"` which is one of the internal types (see `<typeof>`), but not for `"single"`. Attributes are preserved.
As storage mode `"single"` is only a pseudo-mode in **R**, it will not be reported by `mode` or `storage.mode`: use `attr(object, "Csingle")` to examine this. However, `mode<-` can be used to set the mode to `"single"`, which sets the real mode to `"double"` and the `"Csingle"` attribute to `TRUE`. Setting any other mode will remove this attribute.
Note (in the examples below) that some `<call>`s have mode `"("` which is S compatible.
### Mode names
Modes have the same set of names as types (see `<typeof>`) except that
* types `"integer"` and `"double"` are returned as `"numeric"`.
* types `"special"` and `"builtin"` are returned as `"function"`.
* type `"symbol"` is called mode `"name"`.
* type `"language"` is returned as `"("` or `"call"`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<typeof>` for the R-internal ‘mode’, `[type.convert](../../utils/html/type.convert)`, `<attributes>`.
### Examples
```
require(stats)
sapply(options(), mode)
cex3 <- c("NULL", "1", "1:1", "1i", "list(1)", "data.frame(x = 1)",
"pairlist(pi)", "c", "lm", "formals(lm)[[1]]", "formals(lm)[[2]]",
"y ~ x","expression((1))[[1]]", "(y ~ x)[[1]]",
"expression(x <- pi)[[1]][[1]]")
lex3 <- sapply(cex3, function(x) eval(str2lang(x)))
mex3 <- t(sapply(lex3,
function(x) c(typeof(x), storage.mode(x), mode(x))))
dimnames(mex3) <- list(cex3, c("typeof(.)","storage.mode(.)","mode(.)"))
mex3
## This also makes a local copy of 'pi':
storage.mode(pi) <- "complex"
storage.mode(pi)
rm(pi)
```
r None
`assignOps` Assignment Operators
---------------------------------
### Description
Assign a value to a name.
### Usage
```
x <- value
x <<- value
value -> x
value ->> x
x = value
```
### Arguments
| | |
| --- | --- |
| `x` | a variable name (possibly quoted). |
| `value` | a value to be assigned to `x`. |
### Details
There are three different assignment operators: two of them have leftwards and rightwards forms.
The operators `<-` and `=` assign into the environment in which they are evaluated. The operator `<-` can be used anywhere, whereas the operator `=` is only allowed at the top level (e.g., in the complete expression typed at the command prompt) or as one of the subexpressions in a braced list of expressions.
The operators `<<-` and `->>` are normally only used in functions, and cause a search to be made through parent environments for an existing definition of the variable being assigned. If such a variable is found (and its binding is not locked) then its value is redefined, otherwise assignment takes place in the global environment. Note that their semantics differ from that in the S language, but are useful in conjunction with the scoping rules of **R**. See ‘The R Language Definition’ manual for further details and examples.
In all the assignment operator expressions, `x` can be a name or an expression defining a part of an object to be replaced (e.g., `z[[1]]`). A syntactic name does not need to be quoted, though it can be (preferably by [backtick](quotes)s).
The leftwards forms of assignment `<- = <<-` group right to left; the rightwards forms group left to right.
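A common use of `<<-` with **R**'s scoping rules is a stateful closure (a sketch; `make_counter` is an illustrative name):

```r
make_counter <- function() {
  count <- 0
  function() {
    count <<- count + 1  # found in the enclosing environment and updated there
    count
  }
}
nxt <- make_counter()
nxt()  # 1
nxt()  # 2
```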
### Value
`value`. Thus one can use `a <- b <- c <- 6`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer (for `=`).
### See Also
`<assign>` (and its inverse `<get>`), for “subassignment” such as `x[i] <- v`, see `[[<-](extract)`; further, `<environment>`.
r None
`rowsum` Give Column Sums of a Matrix or Data Frame, Based on a Grouping Variable
----------------------------------------------------------------------------------
### Description
Compute column sums across rows of a numeric matrix-like object for each level of a grouping variable. `rowsum` is generic, with a method for data frames and a default method for vectors and matrices.
### Usage
```
rowsum(x, group, reorder = TRUE, ...)
## S3 method for class 'data.frame'
rowsum(x, group, reorder = TRUE, na.rm = FALSE, ...)
## Default S3 method:
rowsum(x, group, reorder = TRUE, na.rm = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix, data frame or vector of numeric data. Missing values are allowed. A numeric vector will be treated as a column vector. |
| `group` | a vector or factor giving the grouping, with one element per row of `x`. Missing values will be treated as another group and a warning will be given. |
| `reorder` | if `TRUE`, then the result will be in order of `sort(unique(group))`, if `FALSE`, it will be in the order that groups were encountered. |
| `na.rm` | logical (`TRUE` or `FALSE`). Should `NA` (including `NaN`) values be discarded? |
| `...` | other arguments to be passed to or from methods. |
### Details
The default is to reorder the rows to agree with `tapply` as in the example below. Reordering should not add noticeably to the time except when there are very many distinct values of `group` and `x` has few columns.
The original function was written by Terry Therneau, but this is a new implementation using hashing that is much faster for large matrices.
To sum over all the rows of a matrix (i.e., a single `group`) use `[colSums](colsums)`, which should be even faster.
For integer arguments, over/underflow in forming the sum results in `NA`.
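For example (a sketch of the overflow case; the warning text may vary):

```r
m <- matrix(c(.Machine$integer.max, 1L), nrow = 2)
rowsum(m, group = c(1, 1))   # NA: integer overflow in forming the sum
```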
### Value
A matrix or data frame containing the sums. There will be one row per unique value of `group`.
### See Also
`<tapply>`, `[aggregate](../../stats/html/aggregate)`, `[rowSums](colsums)`
### Examples
```
require(stats)
x <- matrix(runif(100), ncol = 5)
group <- sample(1:8, 20, TRUE)
(xsum <- rowsum(x, group))
## Slower versions
tapply(x, list(group[row(x)], col(x)), sum)
t(sapply(split(as.data.frame(x), group), colSums))
aggregate(x, list(group), sum)[-1]
```
r None
`deparseOpts` Options for Expression Deparsing
-----------------------------------------------
### Description
Process the deparsing options for `deparse`, `dput` and `dump`.
### Usage
```
.deparseOpts(control)
..deparseOpts
```
### Arguments
| | |
| --- | --- |
| `control` | character vector of deparsing options. |
### Details
`..deparseOpts` is the `<character>` vector of possible deparsing options used by `.deparseOpts()`.
`.deparseOpts()` is called by `<deparse>`, `<dput>` and `<dump>` to process their `control` argument.
The `control` argument is a vector containing zero or more of the following strings (exactly those in `..deparseOpts`). Partial string matching is used.
`"keepInteger"`:
Either surround integer vectors by `as.integer()` or use suffix `L`, so they are not converted to type double when parsed. This includes making sure that integer `NA`s are preserved (via `NA_integer_` if there are no non-`NA` values in the vector, unless `"S_compatible"` is set).
`"quoteExpressions"`:
Surround unevaluated expressions, but not `[formula](../../stats/html/formula)`s, with `quote()`, so they are not evaluated when re-parsed.
`"showAttributes"`:
If the object has `<attributes>` (other than a `source` attribute, see `[srcref](srcfile)`), use `<structure>()` to display them as well as the object value unless the only such attribute is `names` and the `"niceNames"` option is set. This (`"showAttributes"`) is the default for `<deparse>` and `<dput>`.
`"useSource"`:
If the object has a `source` attribute (`[srcref](srcfile)`), display that instead of deparsing the object. Currently only applies to function definitions.
`"warnIncomplete"`:
Some exotic objects such as <environment>s, external pointers, etc. can not be deparsed properly. This option causes a warning to be issued if the deparser recognizes one of these situations.
Also, the parser in **R** < 2.7.0 would only accept strings of up to 8192 bytes, and this option gives a warning for longer strings.
`"keepNA"`:
Integer, real and character `NA`s are surrounded by coercion functions where necessary to ensure that they are parsed to the same type. Since e.g. `NA_real_` can be output in **R**, this is mainly used in connection with `S_compatible`.
`"niceNames"`:
If true, `<list>`s and atomic vectors with non-`[NA](na)` names (see `<names>`) are deparsed as e.g., `c(A = 1)` instead of `structure(1, .Names = "A")`, independently of the `"showAttributes"` setting.
`"all"`:
An abbreviated way to specify all of the options listed above *plus* `"digits17"` (since **R** version 4.0.0). This is the default for `dump`, and, without `"digits17"`, the options used by `[edit](../../utils/html/edit)` (which are fixed).
`"delayPromises"`:
Deparse promises in the form <promise: expression> rather than evaluating them. The value and the environment of the promise will not be shown and the deparsed code cannot be sourced.
`"S_compatible"`:
Make deparsing as far as possible compatible with S and **R** < 2.5.0. For compatibility with S, integer values of double vectors are deparsed with a trailing decimal point. Backticks are not used.
`"hexNumeric"`:
Real and finite complex numbers are output in "%a" format as binary fractions (coded as hexadecimal: see `<sprintf>`) with maximal opportunity to be recorded exactly to full precision. Complex numbers with one or both non-finite components are output as if this option were not set.
(This relies on that format being correctly supported: known problems on Windows are worked around as from **R** 3.1.2.)
`"digits17"`:
Real and finite complex numbers are output using format "%.17g" which may give more precision than the default (but the output will depend on the platform and there may be loss of precision when read back). Complex numbers with one or both non-finite components are output as if this option were not set.
`"exact"`:
An abbreviated way to specify `control = c("all", "hexNumeric")` which is guaranteed to be exact for numbers, see also below.
For the most readable (but perhaps incomplete) display, use `control = NULL`. This displays the object's value, but not its attributes. The default in `<deparse>` is to display the attributes as well, but not to use any of the other options to make the result parseable. (`<dput>` and `<dump>` do use more default options, and printing of functions without sources uses `c("keepInteger", "keepNA")`.)
Using `control = c("all", "hexNumeric")` comes closest to making `deparse()` an inverse of `parse()`, as representing double and complex numbers as decimals may well not be exact. However, not all objects are deparse-able even with this option. A warning will be issued if the function recognizes that it is being asked to do the impossible.
Only one of `"hexNumeric"` and `"digits17"` can be specified.
### Value
An integer value corresponding to the `control` options selected.
### Examples
```
(iOpt.all <- .deparseOpts("all")) # a four digit integer
## one integer --> vector binary bits
int2bits <- function(x, base = 2L,
ndigits = 1 + floor(1e-9 + log(max(x,1), base))) {
r <- numeric(ndigits)
for (i in ndigits:1) {
r[i] <- x%%base
if (i > 1L)
x <- x%/%base
}
rev(r) # smallest bit at left
}
int2bits(iOpt.all)
## what options does "all" contain ?
depO.indiv <- setdiff(..deparseOpts, c("all", "exact"))
(oa <- depO.indiv[int2bits(iOpt.all) == 1])
stopifnot(identical(iOpt.all, .deparseOpts(oa)))
## ditto for "exact" instead of "all":
int2bits(iOpt.X <- .deparseOpts("exact"))
(oX <- depO.indiv[int2bits(iOpt.X) == 1])
diffXall <- oa != oX
stopifnot(identical(iOpt.X, .deparseOpts(oX)),
identical(oX[diffXall], "hexNumeric"),
identical(oa[diffXall], "digits17"))
```
r None
`utf8Conversion` Convert Integer Vectors to or from UTF-8-encoded Character Vectors
------------------------------------------------------------------------------------
### Description
Conversion of UTF-8 encoded character vectors to and from integer vectors representing a UTF-32 encoding.
### Usage
```
utf8ToInt(x)
intToUtf8(x, multiple = FALSE, allow_surrogate_pairs = FALSE)
```
### Arguments
| | |
| --- | --- |
| `x` | object to be converted. |
| `multiple` | logical: should the conversion be to a single character string or multiple individual characters? |
| `allow_surrogate_pairs` | logical: should interpretation of surrogate pairs be attempted? (See ‘Details’.) Only supported for `multiple = FALSE`. |
### Details
These will work in any locale, including on platforms that do not otherwise support multi-byte character sets.
Unicode defines a name and a number for all of the glyphs it encompasses: the numbers are called *code points*: since RFC3629 they run from `0` to `0x10FFFF` (with about 5% being assigned by version 13.0 of the Unicode standard and 7% reserved for ‘private use’).
`intToUtf8` does not by default handle surrogate pairs: inputs in the surrogate ranges are mapped to `NA`. They might occur if a UTF-16 byte stream has been read as 2-byte integers (in the correct byte order), in which case `allow_surrogate_pairs = TRUE` will try to interpret them (with unmatched surrogate values still treated as `NA`).
### Value
`utf8ToInt` converts a length-one character string encoded in UTF-8 to an integer vector of Unicode code points.
`intToUtf8` converts a numeric vector of Unicode code points either (default) to a single character string or a character vector of single characters. Non-integral numeric values are truncated to integers. For output to a single character string `0` is silently omitted: otherwise `0` is mapped to `""`. The `[Encoding](encoding)` of a non-`NA` return value is declared as `"UTF-8"`.
Invalid and `NA` inputs are mapped to `NA` output.
### Validity
Which code points are regarded as valid has changed over the lifetime of UTF-8. Originally all 32-bit unsigned integers were potentially valid and could be converted to up to 6 bytes in UTF-8. Since 2003 it has been stated that there will never be valid code points larger than `0x10FFFF`, and so valid UTF-8 encodings are never more than 4 bytes.
The code points in the surrogate-pair range `0xD800` to `0xDFFF` are prohibited in UTF-8 and so are regarded as invalid by `utf8ToInt` and by default by `intToUtf8`.
The position of ‘noncharacters’ (notably `0xFFFE` and `0xFFFF`) was clarified by ‘Corrigendum 9’ in 2013. These are valid but will never be given an official interpretation. (In some earlier versions of **R** `utf8ToInt` treated them as invalid.)
### References
<https://tools.ietf.org/html/rfc3629>, the current standard for UTF-8.
<https://www.unicode.org/versions/corrigendum9.html> for non-characters.
### Examples
```
## will only display in some locales and fonts
intToUtf8(0x03B2L) # Greek beta
utf8ToInt("bi\u00dfchen")
utf8ToInt("\xfa\xb4\xbf\xbf\x9f")
## A valid UTF-16 surrogate pair (for U+10437)
x <- c(0xD801, 0xDC37)
intToUtf8(x)
intToUtf8(x, TRUE)
(xx <- intToUtf8(x, , TRUE)) # will only display in some locales and fonts
charToRaw(xx)
## An example of how surrogate pairs might occur
x <- "\U10437"
charToRaw(x)
foo <- tempfile()
writeLines(x, file(foo, encoding = "UTF-16LE"))
## next two are OS-specific, but are mandated by POSIX
system(paste("od -x", foo)) # 2-byte units, correct on little-endian platforms
system(paste("od -t x1", foo)) # single bytes as hex
y <- readBin(foo, "integer", 2, 2, FALSE, endian = "little")
sprintf("%X", y)
intToUtf8(y, , TRUE)
```
r None
`row` Row Indexes
------------------
### Description
Returns a matrix of integers indicating their row number in a matrix-like object, or a factor indicating the row labels.
### Usage
```
row(x, as.factor = FALSE)
.row(dim)
```
### Arguments
| | |
| --- | --- |
| `x` | a matrix-like object, that is one with a two-dimensional `dim`. |
| `dim` | a matrix dimension, i.e., an integer valued numeric vector of length two (with non-negative entries). |
| `as.factor` | a logical value indicating whether the value should be returned as a factor of row labels (created if necessary) rather than as numbers. |
### Value
An integer (or factor) matrix with the same dimensions as `x` and whose `ij`-th element is equal to `i` (or the `i`-th row label).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<col>` to get columns; `<slice.index>` for a general way to get slice indices in an array.
### Examples
```
x <- matrix(1:12, 3, 4)
# extract the diagonal of a matrix - more slowly than diag(x)
dx <- x[row(x) == col(x)]
dx
# create an identity 5-by-5 matrix more slowly than diag(n = 5):
x <- matrix(0, nrow = 5, ncol = 5)
x[row(x) == col(x)] <- 1
x
(i34 <- .row(3:4))
stopifnot(identical(i34, .row(c(3,4)))) # 'dim' maybe "double"
```
r None
`gettext` Translate Text Messages
----------------------------------
### Description
If Native Language Support (NLS) was enabled in this build of **R** (see the `bindtextdomain()` example), attempt to translate character vectors or set where the translations are to be found.
### Usage
```
gettext(..., domain = NULL)
ngettext(n, msg1, msg2, domain = NULL)
bindtextdomain(domain, dirname = NULL)
```
### Arguments
| | |
| --- | --- |
| `...` | One or more character vectors. |
| `domain` | The ‘domain’ for the translation. |
| `n` | a non-negative integer. |
| `msg1` | the message to be used in English for `n = 1`. |
| `msg2` | the message to be used in English for `n = 0, 2, 3, ...`. |
| `dirname` | The directory in which to find translated message catalogs for the domain. |
### Details
If `domain` is `NULL` or `""`, and `gettext` or `ngettext` is called from a function in the namespace of package pkg the domain is set to `"R-pkg"`. Otherwise there is no default domain.
If a suitable domain is found, each character string is offered for translation, and replaced by its translation into the current language if one is found. The value (logical) `NA` suppresses any translation.
The *language* to be used for message translation is determined by your OS default and/or the locale setting at **R**'s startup, see `[Sys.getlocale](locales)()`, and notably the LANGUAGE environment variable.
Conventionally the domain for **R** warning/error messages in package pkg is `"R-pkg"`, and that for C-level messages is `"pkg"`.
For `gettext`, leading and trailing whitespace is ignored when looking for the translation.
`ngettext` is used where the message needs to vary by a single integer. Translating such messages is subject to very specific rules for different languages: see the GNU Gettext Manual. The string will often contain a single instance of `%d` to be used in `<sprintf>`. If English is used, `msg1` is returned if `n == 1` and `msg2` in all other cases.
`bindtextdomain` is a wrapper for the C function of the same name: your system may have a `man` page for it. With a non-`NULL` `dirname` it specifies where to look for message catalogues: with `domain = NULL` it returns the current location.
### Value
For `gettext`, a character vector, one element per string in `...`. If translation is not enabled or no domain is found or no translation is found in that domain, the original strings are returned.
For `ngettext`, a character string.
For `bindtextdomain`, a character string giving the current base directory, or `NULL` if setting it failed.
### See Also
`<stop>` and `<warning>` make use of `gettext` to translate messages.
`[xgettext](../../tools/html/xgettext)` for extracting translatable strings from **R** source files.
### Examples
```
bindtextdomain("R") # non-null if and only if NLS is enabled
for(n in 0:3)
print(sprintf(ngettext(n, "%d variable has missing values",
"%d variables have missing values"),
n))
## Not run:
## for translation, those strings should appear in R-pkg.pot as
msgid "%d variable has missing values"
msgid_plural "%d variables have missing values"
msgstr[0] ""
msgstr[1] ""
## End(Not run)
miss <- c("one", "or", "another")
cat(ngettext(length(miss), "variable", "variables"),
paste(sQuote(miss), collapse = ", "),
ngettext(length(miss), "contains", "contain"), "missing values\n")
## better for translators would be to use
cat(sprintf(ngettext(length(miss),
"variable %s contains missing values\n",
"variables %s contain missing values\n"),
paste(sQuote(miss), collapse = ", ")))
```
r None
`sample` Random Samples and Permutations
-----------------------------------------
### Description
`sample` takes a sample of the specified size from the elements of `x`, either with or without replacement.
### Usage
```
sample(x, size, replace = FALSE, prob = NULL)
sample.int(n, size = n, replace = FALSE, prob = NULL,
useHash = (!replace && is.null(prob) && size <= n/2 && n > 1e7))
```
### Arguments
| | |
| --- | --- |
| `x` | either a vector of one or more elements from which to choose, or a positive integer. See ‘Details.’ |
| `n` | a positive number, the number of items to choose from. See ‘Details.’ |
| `size` | a non-negative integer giving the number of items to choose. |
| `replace` | should sampling be with replacement? |
| `prob` | a vector of probability weights for obtaining the elements of the vector being sampled. |
| `useHash` | `<logical>` indicating if the hash-version of the algorithm should be used. Can only be used for `replace = FALSE`, `prob = NULL`, and `size <= n/2`, and really should be used for large `n`, as `useHash = FALSE` will use memory proportional to `n`. |
### Details
If `x` has length 1, is numeric (in the sense of `[is.numeric](numeric)`) and `x >= 1`, sampling *via* `sample` takes place from `1:x`. *Note* that this convenience feature may lead to undesired behaviour when `x` is of varying length in calls such as `sample(x)`. See the examples.
Otherwise `x` can be any **R** object for which `length` and subsetting by integers make sense: S3 or S4 methods for these operations will be dispatched as appropriate.
For `sample` the default for `size` is the number of items inferred from the first argument, so that `sample(x)` generates a random permutation of the elements of `x` (or `1:x`).
It is allowed to ask for `size = 0` samples with `n = 0` or a length-zero `x`, but otherwise `n > 0` or positive `length(x)` is required.
Non-integer positive numerical values of `n` or `x` will be truncated to the next smallest integer, which has to be no larger than `[.Machine](zmachine)$integer.max`.
The optional `prob` argument can be used to give a vector of weights for obtaining the elements of the vector being sampled. They need not sum to one, but they should be non-negative and not all zero. If `replace` is true, Walker's alias method (Ripley, 1987) is used when there are more than 200 reasonably probable values: this gives results incompatible with those from **R** < 2.2.0.
If `replace` is false, these probabilities are applied sequentially, that is the probability of choosing the next item is proportional to the weights amongst the remaining items. The number of nonzero weights must be at least `size` in this case.
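A sketch of with-replacement weighted sampling; note the weights need not sum to one (the seed is illustrative):

```r
set.seed(2024)
x <- sample(c("a", "b"), 1000, replace = TRUE, prob = c(3, 1))  # weights 3:1
prop.table(table(x))   # roughly 0.75 for "a" and 0.25 for "b"
```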
`sample.int` is a bare interface in which both `n` and `size` must be supplied as integers.
Argument `n` can be larger than the largest integer of type `integer`, up to the largest representable integer in type `double`. Only uniform sampling is supported. Two random numbers are used to ensure uniform sampling of large integers.
### Value
For `sample` a vector of length `size` with elements drawn from either `x` or from the integers `1:x`.
For `sample.int`, an integer vector of length `size` with elements from `1:n`, or a double vector if *n >= 2^31*.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Ripley, B. D. (1987) *Stochastic Simulation*. Wiley.
### See Also
`[RNGkind](random)(sample.kind = ..)` about random number generation, notably the change of `sample()` results with **R** version 3.6.0.
CRAN package [sampling](https://CRAN.R-project.org/package=sampling) for other methods of weighted sampling without replacement.
### Examples
```
x <- 1:12
# a random permutation
sample(x)
# bootstrap resampling -- only if length(x) > 1 !
sample(x, replace = TRUE)
# 100 Bernoulli trials
sample(c(0,1), 100, replace = TRUE)
## More careful bootstrapping -- Consider this when using sample()
## programmatically (i.e., in your function or simulation)!
# sample()'s surprise -- example
x <- 1:10
sample(x[x > 8]) # length 2
sample(x[x > 9]) # oops -- length 10!
sample(x[x > 10]) # length 0
## safer version:
resample <- function(x, ...) x[sample.int(length(x), ...)]
resample(x[x > 8]) # length 2
resample(x[x > 9]) # length 1
resample(x[x > 10]) # length 0
## R 3.x.y only
sample.int(1e10, 12, replace = TRUE)
sample.int(1e10, 12) # not that there is much chance of duplicates
```
`formatc` Formatting Using C-style Formats
-------------------------------------------
### Description
`formatC()` formats numbers individually and flexibly using `C` style format specifications.
`prettyNum()` is used for “prettifying” (possibly formatted) numbers, also in `[format.default](format)`.
`.format.zeros(x)`, an auxiliary function of `prettyNum()`, re-formats the zeros in a vector `x` of formatted numbers.
### Usage
```
formatC(x, digits = NULL, width = NULL,
format = NULL, flag = "", mode = NULL,
big.mark = "", big.interval = 3L,
small.mark = "", small.interval = 5L,
decimal.mark = getOption("OutDec"),
preserve.width = "individual",
zero.print = NULL, replace.zero = TRUE,
drop0trailing = FALSE)
prettyNum(x, big.mark = "", big.interval = 3L,
small.mark = "", small.interval = 5L,
decimal.mark = getOption("OutDec"), input.d.mark = decimal.mark,
preserve.width = c("common", "individual", "none"),
zero.print = NULL, replace.zero = FALSE,
drop0trailing = FALSE, is.cmplx = NA,
...)
.format.zeros(x, zero.print, nx = suppressWarnings(as.numeric(x)),
replace = FALSE, warn.non.fitting = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | an atomic numerical or character object, possibly `<complex>` only for `prettyNum()`, typically a vector of real numbers. Any class is discarded, with a warning. |
| `digits` | the desired number of digits after the decimal point (`format = "f"`) or *significant* digits (`format = "g"`, `= "e"` or `= "fg"`). Default: 2 for integer, 4 for real numbers. If less than 0, the C default of 6 digits is used. If specified as more than 50, 50 will be used with a warning unless `format = "f"` where it is limited to typically 324. (Not more than 15–21 digits need be accurate, depending on the OS and compiler used. This limit is just a precaution against segfaults in the underlying C runtime.) |
| `width` | the total field width; if both `digits` and `width` are unspecified, `width` defaults to 1, otherwise to `digits + 1`. `width = 0` will use `width = digits`, `width < 0` means left justify the number in this field (equivalent to `flag = "-"`). If necessary, the result will have more characters than `width`. For character data this is interpreted in characters (not bytes nor display width). |
| `format` | equal to `"d"` (for integers), `"f"`, `"e"`, `"E"`, `"g"`, `"G"`, `"fg"` (for reals), or `"s"` (for strings). Default is `"d"` for integers, `"g"` for reals. `"f"` gives numbers in the usual `xxx.xxx` format; `"e"` and `"E"` give `n.ddde+nn` or `n.dddE+nn` (scientific format); `"g"` and `"G"` put `x[i]` into scientific format only if it saves space to do so *and* drop trailing zeros and decimal point - unless `flag` contains `"#"` which keeps trailing zeros for the `"g", "G"` formats. `"fg"` (our own hybrid format) uses fixed format as `"f"`, but `digits` as the minimum number of *significant* digits. This can lead to quite long result strings, see examples below. Note that unlike `[signif](round)` this prints large numbers with more significant digits than `digits`. Trailing zeros are *dropped* in this format, unless `flag` contains `"#"`. |
| `flag` | for `formatC`, a character string giving a format modifier as in Kernighan and Ritchie (1988, page 243) or the C99 standard. `"0"` pads leading zeros; `"-"` does left adjustment; `"+"` ensures a sign in all cases, i.e., `"+"` for positive numbers; `" "` if the first character is not a sign, the space character `" "` will be used instead; `"#"` specifies “an alternative output form”, specifically depending on `format`; `"'"` on some platform–locale combinations activates “thousands' grouping” for decimal conversion; `"I"` in some versions of ‘glibc’ allows integer conversion to use the locale's alternative output digits, if any. There can be more than one of these flags, in any order. Other characters used to have no effect for `character` formatting, but signal an error since **R** 3.4.0. |
| `mode` | `"double"` (or `"real"`), `"integer"` or `"character"`. Default: Determined from the storage mode of `x`. |
| `big.mark` | character; if not empty used as mark between every `big.interval` decimals *before* (hence `big`) the decimal point. |
| `big.interval` | see `big.mark` above; defaults to 3. |
| `small.mark` | character; if not empty used as mark between every `small.interval` decimals *after* (hence `small`) the decimal point. |
| `small.interval` | see `small.mark` above; defaults to 5. |
| `decimal.mark` | the character to be used to indicate the numeric decimal point. |
| `input.d.mark` | if `x` is `<character>`, the character known to have been used as the numeric decimal point in `x`. |
| `preserve.width` | string specifying if the string widths should be preserved where possible in those cases where marks (`big.mark` or `small.mark`) are added. `"common"`, the default, corresponds to `<format>`-like behavior whereas `"individual"` is the default in `formatC()`. Value can be abbreviated. |
| `zero.print` | logical, character string or `NULL` specifying if and how *zeros* should be formatted specially. Useful for pretty printing ‘sparse’ objects. |
| `replace.zero, replace` | logical; if `zero.print` is a character string, indicates if the exact zero entries in `x` should be simply replaced by `zero.print`. Otherwise, depending on the widths of the respective strings, the (formatted) zeroes are *partly* replaced by `zero.print` and then padded with `" "` to the right where applicable. In that case (false `replace[.zero]`), if the `zero.print` string does not fit, a warning is produced (if `warn.non.fitting` is true). This works via `prettyNum()`, which calls `.format.zeros(*, replace=replace.zero)` three times in this case, see the ‘Details’. |
| `warn.non.fitting` | logical; if it is true, `replace[.zero]` is false and the `zero.print` string does not fit, a `<warning>` is signalled. |
| `drop0trailing` | logical, indicating if trailing zeros, i.e., `"0"` *after* the decimal mark, should be removed; also drops `"e+00"` in exponential formats. This is simply passed to `prettyNum()`, see the ‘Details’. |
| `is.cmplx` | optional logical, to be used when `x` is `"<character>"` to indicate if it stems from `<complex>` vector or not. By default (`NA`), `x` is checked to ‘look like’ complex. |
| `...` | arguments passed to `format`. |
| `nx` | numeric vector of the same length as `x`, typically the numbers of which the character vector `x` is the pre-format. |
### Details
For numbers, `formatC()` calls `prettyNum()` when needed which itself calls `.format.zeros(*, replace=replace.zero)`. (*“when needed”*: when `zero.print` is not `NULL`, `drop0trailing` is true, or one of `big.mark`, `small.mark`, or `decimal.mark` is not at default.)
If you set `format` it overrides the setting of `mode`, so `formatC(123.45, mode = "double", format = "d")` gives `123`.
The rendering of scientific format is platform-dependent: some systems use `n.ddde+nnn` or `n.dddenn` rather than `n.ddde+nn`.
`formatC` does not necessarily align the numbers on the decimal point, so `formatC(c(6.11, 13.1), digits = 2, format = "fg")` gives `c("6.1", " 13")`. If you want common formatting for several numbers, use `<format>`.
`prettyNum` is the utility function for prettifying `x`. `x` can be complex (or `<format>(<complex>)`), here. If `x` is not a character, `format(x[i], ...)` is applied to each element, and then it is left unchanged if all the other arguments are at their defaults. Use the `input.d.mark` argument for `prettyNum(x)` when `x` is a `character` vector not resulting from something like `format(<number>)` with a period as decimal mark.
Because `[gsub](grep)` is used to insert the `big.mark` and `small.mark`, special characters need escaping. In particular, to insert a single backslash, use `"\\\\"`.
The C doubles used for **R** numerical vectors have signed zeros, which `formatC` may output as `-0`, `-0.000` ....
There is a warning if `big.mark` and `decimal.mark` are the same: that would be confusing to those reading the output.
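The first two points of the Details can be seen directly (a short sketch using the values from the text):

```
## 'format' overrides 'mode':
formatC(123.45, mode = "double", format = "d") # "123"
## no common alignment across elements:
formatC(c(6.11, 13.1), digits = 2, format = "fg") # "6.1" " 13"
format (c(6.11, 13.1), digits = 2) # common formatting instead
```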
### Value
A character object of same size and attributes as `x` (after discarding any class), in the current locale's encoding.
Unlike `<format>`, each number is formatted individually. Looping over each element of `x`, the C function `sprintf(...)` is called for numeric inputs (inside the C function `str_signif`).
`formatC`: for character `x`, do simple (left or right) padding with white space.
### Note
The default for `decimal.mark` in `formatC()` was changed in **R** 3.2.0; for use within `<print>` methods in packages which might be used with earlier versions, use `decimal.mark = getOption("OutDec")` explicitly.
### Author(s)
`formatC` was originally written by Bill Dunlap for S-PLUS, later much improved by Martin Maechler.
It was first adapted for **R** by Friedrich Leisch and since much improved by the R Core team.
### References
Kernighan, B. W. and Ritchie, D. M. (1988) *The C Programming Language.* Second edition. Prentice Hall.
### See Also
`<format>`.
`<sprintf>` for more general C-like formatting.
### Examples
```
xx <- pi * 10^(-5:4)
cbind(format(xx, digits = 4), formatC(xx))
cbind(formatC(xx, width = 9, flag = "-"))
cbind(formatC(xx, digits = 5, width = 8, format = "f", flag = "0"))
cbind(format(xx, digits = 4), formatC(xx, digits = 4, format = "fg"))
f <- (-2:4); f <- f*16^f
# Default ("g") format:
formatC(pi*f)
# Fixed ("f") format, more than one flag ('width' partly "enlarged"):
cbind(formatC(pi*f, digits = 3, width=9, format = "f", flag = "0+"))
formatC( c("a", "Abc", "no way"), width = -7) # <=> flag = "-"
formatC(c((-1:1)/0,c(1,100)*pi), width = 8, digits = 1)
## note that some of the results here depend on the implementation
## of long-double arithmetic, which is platform-specific.
xx <- c(1e-12,-3.98765e-10,1.45645e-69,1e-70,pi*1e37,3.44e4)
## 1 2 3 4 5 6
formatC(xx)
formatC(xx, format = "fg") # special "fixed" format.
formatC(xx[1:4], format = "f", digits = 75) #>> even longer strings
formatC(c(3.24, 2.3e-6), format = "f", digits = 11)
formatC(c(3.24, 2.3e-6), format = "f", digits = 11, drop0trailing = TRUE)
r <- c("76491283764.97430", "29.12345678901", "-7.1234", "-100.1","1123")
## American:
prettyNum(r, big.mark = ",")
## Some Europeans:
prettyNum(r, big.mark = "'", decimal.mark = ",")
(dd <- sapply(1:10, function(i) paste((9:0)[1:i], collapse = "")))
prettyNum(dd, big.mark = "'")
## examples of 'small.mark'
pN <- stats::pnorm(1:7, lower.tail = FALSE)
cbind(format (pN, small.mark = " ", digits = 15))
cbind(formatC(pN, small.mark = " ", digits = 17, format = "f"))
cbind(ff <- format(1.2345 + 10^(0:5), width = 11, big.mark = "'"))
## all with same width (one more than the specified minimum)
## individual formatting to common width:
fc <- formatC(1.234 + 10^(0:8), format = "fg", width = 11, big.mark = "'")
cbind(fc)
## Powers of two, stored exactly, formatted individually:
pow.2 <- formatC(2^-(1:32), digits = 24, width = 1, format = "fg")
## nicely printed (the last line showing 5^32 exactly):
noquote(cbind(pow.2))
## complex numbers:
r <- 10.0000001; rv <- (r/10)^(1:10)
(zv <- (rv + 1i*rv))
op <- options(digits = 7) ## (system default)
(pnv <- prettyNum(zv))
stopifnot(pnv == "1+1i", pnv == format(zv),
pnv == prettyNum(zv, drop0trailing = TRUE))
## more digits change the picture:
options(digits = 8)
head(fv <- format(zv), 3)
prettyNum(fv)
prettyNum(fv, drop0trailing = TRUE) # a bit nicer
options(op)
## The ' flag :
doLC <- FALSE # <= R warns, so change to TRUE manually if you want to see the effect
if(doLC) {
oldLC <- Sys.getlocale("LC_NUMERIC")
Sys.setlocale("LC_NUMERIC", "de_CH.UTF-8") }
formatC(1.234 + 10^(0:4), format = "fg", width = 11, flag = "'")
## --> ..... " 1'001" " 10'001" on supported platforms
if(doLC) ## revert, typically to "C" :
Sys.setlocale("LC_NUMERIC", oldLC)
```
`apply` Apply Functions Over Array Margins
-------------------------------------------
### Description
Returns a vector or array or list of values obtained by applying a function to margins of an array or matrix.
### Usage
```
apply(X, MARGIN, FUN, ..., simplify = TRUE)
```
### Arguments
| | |
| --- | --- |
| `X` | an array, including a matrix. |
| `MARGIN` | a vector giving the subscripts which the function will be applied over. E.g., for a matrix `1` indicates rows, `2` indicates columns, `c(1, 2)` indicates rows and columns. Where `X` has named dimnames, it can be a character vector selecting dimension names. |
| `FUN` | the function to be applied: see ‘Details’. In the case of functions like `+`, `%*%`, etc., the function name must be backquoted or quoted. |
| `...` | optional arguments to `FUN`. |
| `simplify` | a logical indicating whether results should be simplified if possible. |
### Details
If `X` is not an array but an object of a class with a non-null `<dim>` value (such as a data frame), `apply` attempts to coerce it to an array via `as.matrix` if it is two-dimensional (e.g., a data frame) or via `as.array`.
`FUN` is found by a call to `<match.fun>` and typically is either a function or a symbol (e.g., a backquoted name) or a character string specifying a function to be searched for from the environment of the call to `apply`.
Arguments in `...` cannot have the same name as any of the other arguments, and care may be needed to avoid partial matching to `MARGIN` or `FUN`. In general-purpose code it is good practice to name the first three arguments if `...` is passed through: this both avoids partial matching to `MARGIN` or `FUN` and ensures that a sensible error message is given if arguments named `X`, `MARGIN` or `FUN` are passed through `...`.
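A minimal sketch of the practice recommended above, with a hypothetical wrapper function: naming `X`, `MARGIN` and `FUN` ensures that arguments passed through `...` cannot partially match them by accident.

```
m <- matrix(1:6, nrow = 2)
## naming the first three arguments keeps '...' safely destined for FUN:
wrapper <- function(x, ...) apply(X = x, MARGIN = 2, FUN = paste, ...)
wrapper(m, collapse = "+") # "1+2" "3+4" "5+6"
```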
### Value
If each call to `FUN` returns a vector of length `n`, and `simplify` is `TRUE`, then `apply` returns an array of dimension `c(n, dim(X)[MARGIN])` if `n > 1`. If `n` equals `1`, `apply` returns a vector if `MARGIN` has length 1 and an array of dimension `dim(X)[MARGIN]` otherwise. If `n` is `0`, the result has length 0 but not necessarily the ‘correct’ dimension.
If the calls to `FUN` return vectors of different lengths, or if `simplify` is `FALSE`, `apply` returns a list of length `prod(dim(X)[MARGIN])` with `dim` set to `MARGIN` if this has length greater than one.
In all cases the result is coerced by `[as.vector](vector)` to one of the basic vector types before the dimensions are set, so that (for example) factor results will be coerced to a character array.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<lapply>` and there, `[simplify2array](lapply)`; `<tapply>`, and convenience functions `<sweep>` and `[aggregate](../../stats/html/aggregate)`.
### Examples
```
## Compute row and column sums for a matrix:
x <- cbind(x1 = 3, x2 = c(4:1, 2:5))
dimnames(x)[[1]] <- letters[1:8]
apply(x, 2, mean, trim = .2)
col.sums <- apply(x, 2, sum)
row.sums <- apply(x, 1, sum)
rbind(cbind(x, Rtot = row.sums), Ctot = c(col.sums, sum(col.sums)))
stopifnot( apply(x, 2, is.vector))
## Sort the columns of a matrix
apply(x, 2, sort)
## keeping named dimnames
names(dimnames(x)) <- c("row", "col")
x3 <- array(x, dim = c(dim(x),3),
dimnames = c(dimnames(x), list(C = paste0("cop.",1:3))))
identical(x, apply( x, 2, identity))
identical(x3, apply(x3, 2:3, identity))
##- function with extra args:
cave <- function(x, c1, c2) c(mean(x[c1]), mean(x[c2]))
apply(x, 1, cave, c1 = "x1", c2 = c("x1","x2"))
ma <- matrix(c(1:4, 1, 6:8), nrow = 2)
ma
apply(ma, 1, table) #--> a list of length 2
apply(ma, 1, stats::quantile) # 5 x n matrix with rownames
stopifnot(dim(ma) == dim(apply(ma, 1:2, sum)))
## Example with different lengths for each call
z <- array(1:24, dim = 2:4)
zseq <- apply(z, 1:2, function(x) seq_len(max(x)))
zseq ## a 2 x 3 matrix
typeof(zseq) ## list
dim(zseq) ## 2 3
zseq[1,]
apply(z, 3, function(x) seq_len(max(x)))
# a list without a dim attribute
```
`matrix` Matrices
------------------
### Description
`matrix` creates a matrix from the given set of values.
`as.matrix` attempts to turn its argument into a matrix.
`is.matrix` tests if its argument is a (strict) matrix.
### Usage
```
matrix(data = NA, nrow = 1, ncol = 1, byrow = FALSE,
dimnames = NULL)
as.matrix(x, ...)
## S3 method for class 'data.frame'
as.matrix(x, rownames.force = NA, ...)
is.matrix(x)
```
### Arguments
| | |
| --- | --- |
| `data` | an optional data vector (including a list or `<expression>` vector). Non-atomic classed **R** objects are coerced by `[as.vector](vector)` and all attributes discarded. |
| `nrow` | the desired number of rows. |
| `ncol` | the desired number of columns. |
| `byrow` | logical. If `FALSE` (the default) the matrix is filled by columns, otherwise the matrix is filled by rows. |
| `dimnames` | A `<dimnames>` attribute for the matrix: `NULL` or a `list` of length 2 giving the row and column names respectively. An empty list is treated as `NULL`, and a list of length one as row names. The list can be named, and the list names will be used as names for the dimensions. |
| `x` | an **R** object. |
| `...` | additional arguments to be passed to or from methods. |
| `rownames.force` | logical indicating if the resulting matrix should have character (rather than `NULL`) `[rownames](colnames)`. The default, `NA`, uses `NULL` rownames if the data frame has ‘automatic’ row.names or for a zero-row data frame. |
### Details
If one of `nrow` or `ncol` is not given, an attempt is made to infer it from the length of `data` and the other parameter. If neither is given, a one-column matrix is returned.
If there are too few elements in `data` to fill the matrix, then the elements in `data` are recycled. If `data` has length zero, `NA` of an appropriate type is used for atomic vectors (`0` for raw vectors) and `NULL` for lists.
`is.matrix` returns `TRUE` if `x` is a vector and has a `"<dim>"` attribute of length 2 and `FALSE` otherwise. Note that a `<data.frame>` is **not** a matrix by this test. The function is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods).
`as.matrix` is a generic function. The method for data frames will return a character matrix if there are only atomic columns and any non-(numeric/logical/complex) column, applying `[as.vector](vector)` to factors and `<format>` to other non-character columns. Otherwise, the usual coercion hierarchy (logical < integer < double < complex) will be used, e.g., all-logical data frames will be coerced to a logical matrix, mixed logical-integer will give an integer matrix, etc.
The default method for `as.matrix` calls `as.vector(x)`, and hence e.g. coerces factors to character vectors.
When coercing a vector, it produces a one-column matrix, and promotes the names (if any) of the vector to the rownames of the matrix.
`is.matrix` is a <primitive> function.
The `print` method for a matrix gives a rectangular layout with dimnames or indices. For a list matrix, the entries of length not one are printed in the form integer,7 indicating the type and length.
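A short sketch of the recycling and coercion rules described above (illustrative values):

```
## recycling, with a warning since 3 is not a sub-multiple of 6:
matrix(1:3, nrow = 2, ncol = 3)
## one non-numeric column forces a character matrix:
as.matrix(data.frame(a = 1:2, b = c("x", "y")))
## coercing a vector gives a one-column matrix:
as.matrix(1:3)
```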
### Note
If you just want to convert a vector to a matrix, something like
```
dim(x) <- c(nx, ny)
dimnames(x) <- list(row_names, col_names)
```
will avoid duplicating `x` *and* preserve `<class>(x)` which may be useful, e.g., for `[Date](dates)` objects.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<data.matrix>`, which attempts to convert to a numeric matrix.
A matrix is the special case of a two-dimensional `<array>`. Since **R** 4.0.0, `[inherits](class)(m, "array")` is true for a `matrix` `m`.
### Examples
```
is.matrix(as.matrix(1:10))
!is.matrix(warpbreaks) # data.frame, NOT matrix!
warpbreaks[1:10,]
as.matrix(warpbreaks[1:10,]) # using as.matrix.data.frame(.) method
## Example of setting row and column names
mdat <- matrix(c(1,2,3, 11,12,13), nrow = 2, ncol = 3, byrow = TRUE,
dimnames = list(c("row1", "row2"),
c("C.1", "C.2", "C.3")))
mdat
```
`taskCallbackNames` Query the Names of the Current Internal Top-Level Task Callbacks
-------------------------------------------------------------------------------------
### Description
This provides a way to get the names (or identifiers) for the currently registered task callbacks that are invoked at the conclusion of each top-level task. These identifiers can be used to remove a callback.
### Usage
```
getTaskCallbackNames()
```
### Value
A character vector giving the name for each of the registered callbacks which are invoked when a top-level task is completed successfully. Each name is the one used when registering the callbacks and returned by the call to `[addTaskCallback](taskcallback)`.
### Note
One can use `[taskCallbackManager](taskcallbackmanager)` to manage user-level task callbacks, i.e., S-language functions, entirely within the S language and access the names more directly.
### See Also
`[addTaskCallback](taskcallback)`, `[removeTaskCallback](taskcallback)`, `[taskCallbackManager](taskcallbackmanager)`; <https://developer.r-project.org/TaskHandlers.pdf>
### Examples
```
n <- addTaskCallback(function(expr, value, ok, visible) {
cat("In handler\n")
return(TRUE)
}, name = "simpleHandler")
getTaskCallbackNames()
# now remove it by name
removeTaskCallback("simpleHandler")
h <- taskCallbackManager()
h$add(function(expr, value, ok, visible) {
cat("In handler\n")
return(TRUE)
}, name = "simpleHandler")
getTaskCallbackNames()
removeTaskCallback("R-taskCallbackManager")
```
`list2DF` Create Data Frame From List
--------------------------------------
### Description
Create a data frame from a list of variables.
### Usage
```
list2DF(x = list(), nrow = NULL)
```
### Arguments
| | |
| --- | --- |
| `x` | A list of variables for the data frame. |
| `nrow` | An integer giving the desired number of rows for the data frame, or `NULL` (default), in which case the maximal length of the elements of the list will be used. If necessary, list elements will be replicated to the same length given by the number of rows. |
### Details
Note that all list elements are taken “as is” (apart from possibly replicating to the same length).
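A minimal sketch of the replication behaviour noted above (illustrative values): the shorter element is replicated to the maximal length.

```
## 'grp' (length 1) is replicated to the 3 rows implied by 'id':
list2DF(list(id = 1:3, grp = "A"))
```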
### Value
A data frame with the given variables.
### See Also
`<data.frame>`
### Examples
```
## Create a data frame holding a list of character vectors and the
## corresponding lengths:
x <- list(character(), "A", c("B", "C"))
n <- lengths(x)
list2DF(list(x = x, n = n))
## Create data frames with no variables and the desired number of rows:
list2DF()
list2DF(nrow = 3L)
```
`icuSetCollate` Setup Collation by ICU
---------------------------------------
### Description
Controls the way collation is done by ICU (an optional part of the **R** build).
### Usage
```
icuSetCollate(...)
icuGetCollate(type = c("actual", "valid"))
```
### Arguments
| | |
| --- | --- |
| `...` | Named arguments, see ‘Details’. |
| `type` | character string: can be abbreviated. Either the actual locale in use for collation or the most specific locale which would be valid. |
### Details
Optionally, **R** can be built to collate character strings by ICU (<http://site.icu-project.org>). For such systems, `icuSetCollate` can be used to tune the way collation is done. On other builds calling this function does nothing, with a warning.
Possible arguments are
`locale`:
A character string such as `"da_DK"` giving the language and country whose collation rules are to be used. If present, this should be the first argument.
`case_first`:
`"upper"`, `"lower"` or `"default"`, asking for upper- or lower-case characters to be sorted first. The default is usually lower-case first, but not in all languages (not under the default settings for Danish, for example).
`alternate_handling`:
Controls the handling of ‘variable’ characters (mainly punctuation and symbols). Possible values are `"non_ignorable"` (primary strength) and `"shifted"` (quaternary strength).
`strength`:
Which components should be used? Possible values `"primary"`, `"secondary"`, `"tertiary"` (default), `"quaternary"` and `"identical"`.
`french_collation`:
In a French locale the way accents affect collation is from right to left, whereas in most other locales it is from left to right. Possible values `"on"`, `"off"` and `"default"`.
`normalization`:
Should strings be normalized? Possible values are `"on"` and `"off"` (default). This affects the collation of composite characters.
`case_level`:
An additional level between secondary and tertiary, used to distinguish large and small Japanese Kana characters. Possible values `"on"` and `"off"` (default).
`hiragana_quaternary`:
Possible values `"on"` (sort Hiragana first at quaternary level) and `"off"`.
Only the first three are likely to be of interest except to those with a detailed understanding of collation and specialized requirements.
Some special values are accepted for `locale`:
`"none"`:
ICU is not used for collation: the OS's collation services are used instead.
`"ASCII"`:
ICU is not used for collation: the C function `strcmp` is used instead, which should sort byte-by-byte in (unsigned) numerical order.
`"default"`:
obtains the locale from the OS as is done at the start of the session. If environment variable R\_ICU\_LOCALE is set to a non-empty value, its value is used rather than consulting the OS, unless environment variable LC\_ALL is set to 'C' (or unset but LC\_COLLATE is set to 'C').
`""`, `"root"`:
the ‘root’ collation: see <https://www.unicode.org/reports/tr35/tr35-collation.html#Root_Collation>.
For the specifications of ‘real’ ICU locales, see <http://userguide.icu-project.org/locale>. Note that ICU does not report that a locale is not supported, but falls back to its idea of ‘best fit’ (which could be rather different and is reported by `icuGetCollate("actual")`, often `"root"`). Most English locales fall back to `"root"` as although e.g. `"en_GB"` is a valid locale (at least on some platforms), it contains no special rules for collation. Note that `"C"` is not a supported ICU locale and hence R\_ICU\_LOCALE should never be set to `"C"`.
Some examples are `case_level = "on", strength = "primary"` to ignore accent differences and `alternate_handling = "shifted"` to ignore space and punctuation characters.
Initially ICU will not be used for collation if the OS is set to use the `C` locale for collation and R\_ICU\_LOCALE is not set. Once this function is called with a value for `locale`, ICU will be used until it is called again with `locale = "none"`. ICU will not be used once `Sys.setlocale` is called with a `"C"` value for `LC_ALL` or `LC_COLLATE`, even if R\_ICU\_LOCALE is set. ICU will be used again honoring R\_ICU\_LOCALE once `Sys.setlocale` is called to set a different collation order. Environment variables LC\_ALL (or LC\_COLLATE) take precedence over R\_ICU\_LOCALE if and only if they are set to 'C'. Due to the interaction with other ways of setting the collation order, R\_ICU\_LOCALE should be used with care and only when needed.
All customizations are reset to the default for the locale if `locale` is specified: the collation engine is reset if the OS collation locale category is changed by `[Sys.setlocale](locales)`.
### Value
For `icuGetCollate`, a character string describing the ICU locale in use (which may be reported as `"ICU not in use"`). The ‘actual’ locale may be simpler than the requested locale: for example `"da"` rather than `"da_DK"`: English locales are likely to report `"root"`.
### Note
ICU is used by default wherever it is available: this includes macOS, Solaris and many Linux installations. As it works internally in UTF-8, it will be most efficient in UTF-8 locales.
It is optional on Windows: if **R** has been built against ICU, it will only be used if environment variable R\_ICU\_LOCALE is set or once `icuSetCollate` is called to select the locale (as ICU and Windows differ in their idea of locale names). Note that `icuSetCollate(locale = "default")` should work reasonably well for **R** >= 3.2.0 and Windows Vista/Server 2008 and later (but finds the system default ignoring environment variables such as LC\_COLLATE).
### See Also
[Comparison](comparison), `<sort>`.
`<capabilities>` for whether ICU is available; `[extSoftVersion](extsoftversion)` for its version.
The ICU user guide chapter on collation (<http://userguide.icu-project.org/collation>).
### Examples
```
## These examples depend on having ICU available, and on the locale.
## As we don't know the current settings, we can only reset to the default.
if(capabilities("ICU")) withAutoprint({
icuGetCollate()
icuGetCollate("valid")
x <- c("Aarhus", "aarhus", "safe", "test", "Zoo")
sort(x)
icuSetCollate(case_first = "upper"); sort(x)
icuSetCollate(case_first = "lower"); sort(x)
## Danish collates upper-case-first and with 'aa' as a single letter
icuSetCollate(locale = "da_DK", case_first = "default"); sort(x)
## Estonian collates Z between S and T
icuSetCollate(locale = "et_EE"); sort(x)
icuSetCollate(locale = "default"); icuGetCollate("valid")
})
```
`sign` Sign Function
---------------------
### Description
`sign` returns a vector with the signs of the corresponding elements of `x` (the sign of a real number is 1, 0, or *-1* if the number is positive, zero, or negative, respectively).
Note that `sign` does not operate on complex vectors.
### Usage
```
sign(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric vector |
### Details
This is an [internal generic](internalmethods) <primitive> function: methods can be defined for it directly or via the `[Math](groupgeneric)` group generic.
### See Also
`[abs](mathfun)`
### Examples
```
sign(pi) # == 1
sign(-2:3) # -1 -1 0 1 1 1
```
`is.function` Is an Object of Type (Primitive) Function?
---------------------------------------------------------
### Description
Checks whether its argument is a (primitive) function.
### Usage
```
is.function(x)
is.primitive(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object. |
### Details
`is.primitive(x)` tests if `x` is a <primitive> function, i.e., if `<typeof>(x)` is either `"builtin"` or `"special"`.
### Value
`TRUE` if `x` is a (primitive) function, and `FALSE` otherwise.
### Examples
```
is.function(1) # FALSE
is.function (is.primitive) # TRUE: it is a function, but ..
is.primitive(is.primitive) # FALSE: it's not a primitive one, whereas
is.primitive(is.function) # TRUE: that one *is*
```
`NULL` The Null Object
-----------------------
### Description
`NULL` represents the null object in **R**: it is a <reserved> word. `NULL` is often returned by expressions and functions whose value is undefined.
### Usage
```
NULL
as.null(x, ...)
is.null(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an object to be tested or coerced. |
| `...` | ignored. |
### Details
`NULL` can be indexed (see [Extract](extract)) in just about any syntactically legal way: whether it makes sense or not, the result is always `NULL`. Objects with value `NULL` can be changed by replacement operators and will be coerced to the type of the right-hand side.
`NULL` is also used as the empty [pairlist](list): see the examples. Because pairlists are often promoted to lists, you may encounter `NULL` being promoted to an empty list.
Objects with value `NULL` cannot have attributes as there is only one null object: attempts to assign them are either an error (`<attr>`) or promote the object to an empty list with attribute(s) (`<attributes>` and `<structure>`).
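A brief sketch of the behaviours described above (illustrative values):

```
NULL[[1]] # indexing NULL always gives NULL
x <- NULL
x["a"] <- 1 # replacement coerces NULL to the type of the right-hand side
x # now a named numeric vector: c(a = 1)
is.null(attributes(NULL)) # TRUE: the null object carries no attributes
```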
### Value
`as.null` ignores its argument and returns `NULL`.
`is.null` returns `TRUE` if its argument's value is `NULL` and `FALSE` otherwise.
### Note
`is.null` is a <primitive> function.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### Examples
```
is.null(list()) # FALSE (on purpose!)
is.null(pairlist()) # TRUE
is.null(integer(0)) # FALSE
is.null(logical(0)) # FALSE
as.null(list(a = 1, b = "c"))
```
r None
`maxCol` Find Maximum Position in Matrix
-----------------------------------------
### Description
Find the maximum position for each row of a matrix, breaking ties at random.
### Usage
```
max.col(m, ties.method = c("random", "first", "last"))
```
### Arguments
| | |
| --- | --- |
| `m` | numerical matrix |
| `ties.method` | a character string specifying how ties are handled, `"random"` by default; can be abbreviated; see ‘Details’. |
### Details
When `ties.method = "random"`, as per default, ties are broken at random. In this case, the determination of a tie assumes that the entries are probabilities: there is a relative tolerance of *1e-5*, relative to the largest (in magnitude, omitting infinity) entry in the row.
If `ties.method = "first"`, `max.col` returns the column number of the *first* of several maxima in every row, the same as `<unname>(<apply>(m, 1, [which.max](which.min)))`.
Correspondingly, `ties.method = "last"` returns the *last* of possibly several indices.
### Value
index of a maximal value for each row, an integer vector of length `nrow(m)`.
### References
Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* New York: Springer (4th ed).
### See Also
`[which.max](which.min)` for vectors.
### Examples
```
table(mc <- max.col(swiss)) # mostly "1" and "5", 5 x "2" and once "4"
swiss[unique(print(mr <- max.col(t(swiss)))) , ] # 3 33 45 45 33 6
set.seed(1) # reproducible example:
(mm <- rbind(x = round(2*stats::runif(12)),
y = round(5*stats::runif(12)),
z = round(8*stats::runif(12))))
## Not run:
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
x 1 1 1 2 0 2 2 1 1 0 0 0
y 3 2 4 2 4 5 2 4 5 1 3 1
z 2 3 0 3 7 3 4 5 4 1 7 5
## End(Not run)
## column indices of all row maxima :
utils::str(lapply(1:3, function(i) which(mm[i,] == max(mm[i,]))))
max.col(mm) ; max.col(mm) # "random"
max.col(mm, "first") # -> 4 6 5
max.col(mm, "last") # -> 7 9 11
```
r None
`ns-internal` Namespace Internals
----------------------------------
### Description
Internal namespace support functions. Not intended to be called directly, and only visible because of the special nature of the base namespace.
### Usage
```
asNamespace(ns, base.OK = TRUE)
getNamespaceInfo (ns, which)
.getNamespaceInfo(ns, which)
importIntoEnv(impenv, impnames, expenv, expnames)
isBaseNamespace(ns)
isNamespace(ns)
namespaceExport(ns, vars)
namespaceImport(self, ..., from = NULL, except = character(0L))
namespaceImportFrom(self, ns, vars, generics, packages,
from = "non-package environment",
except = character(0L))
namespaceImportClasses(self, ns, vars, from = NULL)
namespaceImportMethods(self, ns, vars, from = NULL)
packageHasNamespace(package, package.lib)
parseNamespaceFile(package, package.lib, mustExist = TRUE)
registerS3method(genname, class, method, envir = parent.frame())
registerS3methods(info, package, env)
setNamespaceInfo(ns, which, val)
.mergeExportMethods(new, ns)
.mergeImportMethods(impenv, expenv, metaname)
.knownS3Generics
loadingNamespaceInfo()
.getNamespace(name)
..getNamespace(name, where)
```
### Arguments
| | |
| --- | --- |
| `ns` | string or namespace environment. |
| `base.OK` | logical. |
| `impenv` | environment. |
| `expenv` | namespace environment. |
| `vars` | character vector. |
| `generics` | optional character vector. |
| `self` | namespace environment. |
| `package` | string naming the package/namespace to load. |
| `packages` | vector of package names parallel to `generics`. |
| `package.lib` | character vector specifying library. |
| `mustExist` | logical. |
| `genname` | character. |
| `class` | character. |
| `envir, env` | environment. |
| `info` | a 3-column character matrix. |
| `which` | character. |
| `val` | any object. |
| `...` | character arguments. |
| `metaname` | the methods table name. |
| `name` | symbol: name of namespace |
| `except` | character vector naming symbols to exclude from the import, particularly useful when `vars` is missing. |
### Details
`packageHasNamespace` does not indicate if the package has a namespace (all now do), rather if it has a ‘NAMESPACE’ file, which base and some legacy packages do not. But then you are not intended to be using it ....
### Author(s)
Luke Tierney and other members of the R Core Team.
### See Also
`[loadNamespace](ns-load)` or `[getNamespace](ns-reflect)` are somewhat higher level namespace related functions.
### Examples
```
nsName <- "stats"
(ns <- asNamespace(nsName)) # <environment: namespace:stats>
## Inverse function of asNamespace() :
environmentName(asNamespace("stats")) # "stats"
environmentName(asNamespace("base")) # "base"
getNamespaceInfo(ns, "spec")[["name"]] ## -> "stats"
## Only for the daring ones, trying to get into the bowels :
lsNamespaceInfo <- function(ns, ...) {
ns <- asNamespace(ns, base.OK = FALSE)
ls(..., envir = get(".__NAMESPACE__.", envir = ns, inherits = FALSE))
}
allinfoNS <- function(ns) sapply(lsNamespaceInfo(ns), getNamespaceInfo, ns=ns)
utils::str(allinfoNS("stats"))
utils::str(allinfoNS("stats4"))
```
r None
`as.function` Convert Object to Function
-----------------------------------------
### Description
`as.function` is a generic function which is used to convert objects to functions.
`as.function.default` works on a list `x`, which should contain the concatenation of a formal argument list and an expression or an object of mode `"<call>"` which will become the function body. The function will be defined in a specified environment, by default that of the caller.
### Usage
```
as.function(x, ...)
## Default S3 method:
as.function(x, envir = parent.frame(), ...)
```
### Arguments
| | |
| --- | --- |
| `x` | object to convert, a list for the default method. |
| `...` | additional arguments, depending on object |
| `envir` | environment in which the function should be defined |
### Value
The desired function.
### Note
For ancient historical reasons, `envir = NULL` uses the global environment rather than the base environment. Please use `envir = [globalenv](environment)()` instead if this is what you want, as the special handling of `NULL` may change in a future release.
### Author(s)
Peter Dalgaard
### See Also
`<function>`; `[alist](list)` which is handy for the construction of argument lists, etc.
### Examples
```
as.function(alist(a = , b = 2, a+b))
as.function(alist(a = , b = 2, a+b))(3)
```
r None
`bquote` Partial substitution in expressions
---------------------------------------------
### Description
An analogue of the LISP backquote macro. `bquote` quotes its argument except that terms wrapped in `.()` are evaluated in the specified `where` environment. If `splice = TRUE` then terms wrapped in `..()` are evaluated and spliced into a call.
### Usage
```
bquote(expr, where = parent.frame(), splice = FALSE)
```
### Arguments
| | |
| --- | --- |
| `expr` | A [language object](is.language). |
| `where` | An environment. |
| `splice` | Logical; if `TRUE` splicing is enabled. |
### Value
A [language object](is.language).
### See Also
`[quote](substitute)`, `<substitute>`
### Examples
```
require(graphics)
a <- 2
bquote(a == a)
quote(a == a)
bquote(a == .(a))
substitute(a == A, list(A = a))
plot(1:10, a*(1:10), main = bquote(a == .(a)))
## to set a function default arg
default <- 1
bquote( function(x, y = .(default)) x+y )
exprs <- expression(x <- 1, y <- 2, x + y)
bquote(function() {..(exprs)}, splice = TRUE)
```
r None
`isS4` Test for an S4 object
-----------------------------
### Description
Tests whether the object is an instance of an S4 class.
### Usage
```
isS4(object)
asS4(object, flag = TRUE, complete = TRUE)
asS3(object, flag = TRUE, complete = TRUE)
```
### Arguments
| | |
| --- | --- |
| `object` | Any R object. |
| `flag` | Optional, logical: indicate direction of conversion. |
| `complete` | Optional, logical: whether conversion to S3 is completed. Not usually needed, but see the details section. |
### Details
Note that `isS4` does not rely on the methods package, so in particular it can be used to detect the need to `[require](library)` that package.
`asS3` uses the value of `complete` to control whether an attempt is made to transform `object` into a valid object of the implied S3 class. If `complete` is `TRUE`, then an object from an S4 class extending an S3 class will be transformed into an S3 object with the corresponding S3 class (see `[S3Part](../../methods/html/s3part)`). This includes classes extending the pseudo-classes `array` and `matrix`: such objects will have their class attribute set to `NULL`.
`isS4` is <primitive>.
### Value
`isS4` always returns `TRUE` or `FALSE` according to whether the internal flag marking an S4 object has been turned on for this object.
`asS4` and `asS3` will turn this flag on or off, and `asS3` will set the class from the object's `.S3Class` slot if one exists. Note that `asS3` will *not* turn the object into an S3 object unless there is a valid conversion; that is, an object of type other than `"S4"` for which the S4 object is an extension, unless argument `complete` is `FALSE`.
### See Also
`<is.object>` for a more general test; [Introduction](../../methods/html/introduction) for general information on S4; [Classes\_Details](../../methods/html/classes_details) for more on S4 class definitions.
### Examples
```
isS4(pi) # FALSE
isS4(getClass("MethodDefinition")) # TRUE
```
r None
`ns-load` Loading and Unloading Name Spaces
--------------------------------------------
### Description
Functions to load and unload name spaces.
### Usage
```
attachNamespace(ns, pos = 2L, depends = NULL, exclude, include.only)
loadNamespace(package, lib.loc = NULL,
keep.source = getOption("keep.source.pkgs"),
partial = FALSE, versionCheck = NULL,
keep.parse.data = getOption("keep.parse.data.pkgs"))
requireNamespace(package, ..., quietly = FALSE)
loadedNamespaces()
unloadNamespace(ns)
isNamespaceLoaded(name)
```
### Arguments
| | |
| --- | --- |
| `ns` | string or name space object. |
| `pos` | integer specifying position to attach. |
| `depends` | `NULL` or a character vector of dependencies to be recorded in object `.Depends` in the package. |
| `package` | string naming the package/name space to load. |
| `lib.loc` | character vector specifying library search path. |
| `keep.source` | Now ignored except during package installation. |
| `keep.parse.data` | Ignored except during package installation. |
| `partial` | logical; if true, stop just after loading code. |
| `versionCheck` | `NULL` or a version specification (a list with components `op` and `version`). |
| `quietly` | logical: should progress and error messages be suppressed? |
| `name` | string or ‘name’, see `[as.symbol](name)`, of a package, e.g., `"stats"`. |
| `exclude, include.only` | character vectors; see `<library>`. |
| `...` | further arguments to be passed to `loadNamespace`. |
### Details
The functions `loadNamespace` and `attachNamespace` are usually called implicitly when `<library>` is used to load a name space and any imports needed. However it may be useful at times to call these functions directly.
`loadNamespace` loads the specified name space and registers it in an internal data base. A request to load a name space when one of that name is already loaded has no effect. The arguments have the same meaning as the corresponding arguments to `<library>`, whose help page explains the details of how a particular installed package comes to be chosen. After loading, `loadNamespace` looks for a hook function named `[.onLoad](ns-hooks)` as an internal variable in the name space (it should not be exported). Partial loading is used to support installation with lazy-loading.
Optionally the package licence is checked during loading: see section ‘Licenses’ in the help for `<library>`.
`loadNamespace` does not attach the name space it loads to the search path. `attachNamespace` can be used to attach a frame containing the exported values of a name space to the search path (but this is almost always done *via* `<library>`). The hook function `[.onAttach](ns-hooks)` is run after the name space exports are attached.
`requireNamespace` is a wrapper for `loadNamespace` analogous to `[require](library)` that returns a logical value.
`loadedNamespaces` returns a character vector of the names of the loaded name spaces.
`isNamespaceLoaded(pkg)` is equivalent to but more efficient than `pkg %in% loadedNamespaces()`.
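A common pattern for the functions just described (sketched here with the standard stats4 package, purely for illustration) is to guard optional functionality with `requireNamespace` and then access exports via `::` without attaching anything to the search path:

```
if (requireNamespace("stats4", quietly = TRUE)) {
    ## the namespace is now loaded but not attached
    stopifnot(isNamespaceLoaded("stats4"))
    f <- stats4::mle   # use an export without library()
} else {
    message("stats4 not available")
}
```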
`unloadNamespace` can be used to attempt to force a name space to be unloaded. If the name space is attached, it is first `<detach>`ed, thereby running a `[.onDetach](ns-hooks)` or `.Last.lib` function in the name space if one is exported. An error is signaled and the name space is not unloaded if the name space is imported by other loaded name spaces. If defined, a hook function `[.onUnload](ns-hooks)` is run before removing the name space from the internal registry.
See the comments in the help for `<detach>` about some issues with unloading and reloading name spaces.
### Value
`attachNamespace` returns invisibly the package environment it adds to the search path.
`loadNamespace` returns the name space environment, either one already loaded or the one the function causes to be loaded.
`requireNamespace` returns `TRUE` if it succeeds or `FALSE`.
`loadedNamespaces` returns a `<character>` vector.
`unloadNamespace` returns `NULL`, invisibly.
### Tracing
As from **R** 4.1.0 the operation of `loadNamespace` can be traced, which can help track down the causes of unexpected messages (including which package(s) they come from since `loadNamespace` is called in many ways including from itself and by `::` and can be called by `load`). Setting the environment variable \_R\_TRACE\_LOADNAMESPACE\_ to a numerical value will generate additional messages on progress. Non-zero values, e.g. `1`, report which namespace is being loaded and when loading completes: values `2` to `4` report in increasing detail. Negative values are reserved for tracing specific features and their current meanings are documented in source-code comments.
Loading standard packages is never traced.
### Author(s)
Luke Tierney and R-core
### References
The ‘Writing R Extensions’ manual, section “Package namespaces”.
### See Also
`[getNamespace](ns-reflect)`, `[asNamespace](ns-internal)`, `[topenv](ns-topenv)`, `[.onLoad](ns-hooks)` (etc); further `<environment>`.
### Examples
```
(lns <- loadedNamespaces())
statL <- isNamespaceLoaded("stats")
stopifnot( identical(statL, "stats" %in% lns) )
## The string "foo" and the symbol 'foo' can be used interchangeably here:
stopifnot( identical(isNamespaceLoaded( "foo" ), FALSE),
identical(isNamespaceLoaded(quote(foo)), FALSE),
identical(isNamespaceLoaded(quote(stats)), statL))
hasS <- isNamespaceLoaded("splines") # (to restore if needed)
Sns <- asNamespace("splines") # loads it if not already
stopifnot( isNamespaceLoaded("splines"))
unloadNamespace(Sns) # unloading the NS 'object'
stopifnot( ! isNamespaceLoaded("splines"))
if (hasS) loadNamespace("splines") # (restoring previous state)
```
r None
`path.expand` Expand File Paths
--------------------------------
### Description
Expand a path name, for example by replacing a leading tilde by the user's home directory (if defined on that platform).
### Usage
```
path.expand(path)
```
### Arguments
| | |
| --- | --- |
| `path` | character vector containing one or more path names. |
### Details
On Unix-alikes:
On most builds of **R** a leading `~user` will expand to the home directory of `user` (since **R** 4.1.0 also without `readline` in use).
There are possibly different concepts of ‘home directory’: that usually used is the setting of the environment variable HOME.
The ‘path names’ need not exist nor be valid path names but they do need to be representable in the session encoding.
On Windows:
The definition of the ‘home’ directory is in the ‘rw-FAQ’ Q2.14: it is taken from the R\_USER environment variable when `path.expand` is first called in a session.
The ‘path names’ need not exist nor be valid path names.
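A small sketch of the points above: the paths need not exist, and paths without a leading tilde pass through unchanged (the exact results depend on the platform's notion of the home directory):

```
path.expand(c("~", "~/no/such/file", "relative/path"))
## the first two start with the home directory (if known);
## the last element is returned unchanged
```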
### Value
A character vector of possibly expanded path names: where the home directory is unknown or none is specified the path is returned unchanged.
### See Also
`<basename>`, `[normalizePath](normalizepath)`, `<file.path>`.
### Examples
```
path.expand("~/foo")
```
r None
`extSoftVersion` Report Versions of Third-Party Software
---------------------------------------------------------
### Description
Report versions of (external) third-party software used.
### Usage
```
extSoftVersion()
```
### Details
This reports the versions of third-party software libraries in use. These are often external but might have been compiled into **R** when it was installed.
With dynamic linking, these are the versions of the libraries linked to in this session: with static linking, of those compiled in.
### Value
A named character vector, currently with components
| | |
| --- | --- |
| `zlib` | The version of `zlib` in use. |
| `bzlib` | The version of `bzlib` (from `bzip2`) in use. |
| `xz` | The version of `liblzma` (from `xz`) in use. |
| `PCRE` | The version of `PCRE` in use. PCRE1 has versions < 10.00, PCRE2 has versions >= 10.00. |
| `ICU` | The version of `ICU` in use (if any, otherwise `""`). |
| `TRE` | The version of `libtre` in use. |
| `iconv` | The implementation and version of the `iconv` library in use (if known). |
| `readline` | The version of `readline` in use (if any, otherwise `""`). If using the emulation by `libedit` aka `editline` this will be `"EditLine wrapper"` preceded by the `readline` version it emulates: that is most likely to be seen on macOS. |
| `BLAS` | Name of the binary/executable file with the implementation of `BLAS` in use (if known, otherwise `""`). |
Note that the values for `bzlib` and `pcre` normally contain a date as well as the version number, and that for `tre` includes several items separated by spaces, the version number being the second.
For `iconv` this will give the implementation as well as the version, for example `"GNU libiconv 1.14"`, `"glibc 2.18"` or `"win_iconv"` (which has no version number).
The name of the binary/executable file for `BLAS` can be used as an indication of which implementation is in use. Typically, the R version of BLAS will appear as `libR.so` (`libR.dylib`), `R` or `libRblas.so` (`libRblas.dylib`), depending on how R was built. Note that `libRblas.so` (`libRblas.dylib`) may also be shown for an external BLAS implementation that had been copied, hard-linked or renamed by the system administrator. For an external BLAS, a shared object file will be given and its path/name may indicate the vendor/version. The detection does not work on Windows.
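The components that bundle extra information can be picked apart with ordinary string functions, for example (a sketch; exact formats vary by build):

```
v <- extSoftVersion()
## for TRE, the version number is the second space-separated field
strsplit(v["TRE"], " ", fixed = TRUE)[[1]][2]
## file name of the BLAS binary in use (may be "" if unknown)
basename(v["BLAS"])
```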
### See Also
`[libcurlVersion](libcurlversion)` for the version of `libCurl`.
`[La\_version](la_version)` for the version of LAPACK in use.
`[La\_library](la_library)` for binary/executable file with LAPACK in use.
`[grSoftVersion](../../grdevices/html/grsoftversion)` for third-party graphics software.
`[tclVersion](../../tcltk/html/tclinterface)` for the version of Tcl/Tk.
`<pcre_config>` for PCRE configuration options.
### Examples
```
extSoftVersion()
## the PCRE version
sub(" .*", "", extSoftVersion()["PCRE"])
```
r None
`append` Vector Merging
------------------------
### Description
Add elements to a vector.
### Usage
```
append(x, values, after = length(x))
```
### Arguments
| | |
| --- | --- |
| `x` | the vector the values are to be appended to. |
| `values` | to be included in the modified vector. |
| `after` | a subscript, after which the values are to be appended. |
### Value
A vector containing the values in `x` with the elements of `values` appended after the specified element of `x`.
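Because `after` is a subscript, `after = 0` prepends the values and intermediate subscripts insert them into the middle, e.g.:

```
append(1:5, 99, after = 0)  # prepend: 99 at the front
append(1:5, 99, after = 2)  # insert 99 between the 2nd and 3rd elements
```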
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### Examples
```
append(1:5, 0:1, after = 3)
```
r None
`normalizePath` Express File Paths in Canonical Form
-----------------------------------------------------
### Description
Convert file paths to canonical form for the platform, to display them in a user-understandable form and so that relative and absolute paths can be compared.
### Usage
```
normalizePath(path, winslash = "\\", mustWork = NA)
```
### Arguments
| | |
| --- | --- |
| `path` | character vector of file paths. |
| `winslash` | the separator to be used on Windows – ignored elsewhere. Must be one of `c("/", "\\")`. |
| `mustWork` | logical: if `TRUE` then an error is given if the result cannot be determined; if `NA` then a warning. |
### Details
Tilde-expansion (see `<path.expand>`) is first done on `paths`.
Where the Unix-alike platform supports it, this attempts to turn paths into absolute paths in their canonical form (no ./, ../ nor symbolic links). It relies on the POSIX system function `realpath`: if the platform does not have that (we know of no current example) then the result will be an absolute path but might not be canonical. Even where `realpath` is used the canonical path need not be unique, for example *via* hard links or multiple mounts.
On Windows it converts relative paths to absolute paths, resolves symbolic links, converts short names for path elements to long names and ensures the separator is that specified by `winslash`. It will match each path element case-insensitively or case-sensitively as during the usual name lookup and return the canonical case. It relies on Windows API function `GetFinalPathNameByHandle` and in case of an error (such as insufficient permissions) it currently falls back to the **R** 3.6 (and older) implementation, which relies on `GetFullPathName` and `GetLongPathName` with limitations described in the Notes section. An attempt is made not to introduce UNC paths in presence of mapped drives or symbolic links: if `GetFinalPathNameByHandle` returns a UNC path, but `GetLongPathName` returns a path starting with a drive letter, R falls back to the **R** 3.6 (and older) implementation. UTF-8-encoded paths not valid in the current locale can be used.
`mustWork = FALSE` is useful for expressing paths for use in messages.
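For instance, with `mustWork = FALSE` a non-existent path is returned (possibly only partially normalized) without complaint, whereas `mustWork = TRUE` signals an error (a sketch; the exact result is system-dependent):

```
normalizePath("~/no/such/dir", mustWork = FALSE)  # no error, no warning
## with mustWork = NA (the default) this path would produce a warning;
## with mustWork = TRUE it is an error:
tryCatch(normalizePath("~/no/such/dir", mustWork = TRUE),
         error = function(e) conditionMessage(e))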
### Value
A character vector.
If an input is not a real path the result is system-dependent (unless `mustWork = TRUE`, when this should be an error). It will be either the corresponding input element or a transformation of it into an absolute path.
Converting to an absolute file path can fail for a large number of reasons. The most common are
* One or more components of the file path does not exist.
* A component before the last is not a directory, or there is insufficient permission to read the directory.
* For a relative path, the current directory cannot be determined.
* A symbolic link points to a non-existent place or links form a loop.
* The canonicalized path would exceed the maximum supported length of a file path.
### Note
The canonical form of paths may not be what you expect. For example, on macOS absolute paths such as ‘/tmp’ and ‘/var’ are symbolic links. On Linux, a path produced by bash process substitution is a symbolic link (such as ‘/proc/fd/63’) to a pipe and there is no canonical form of such path. In **R** 3.6 and older on Windows, symlinks will not be resolved and the long names for path elements will be returned with the case in which they are in `path`, which may not be canonical in case-insensitive folders.
### Examples
```
# random tempdir
cat(normalizePath(c(R.home(), tempdir())), sep = "\n")
```
r None
`quit` Terminate an R Session
------------------------------
### Description
The function `quit` or its alias `q` terminate the current **R** session.
### Usage
```
quit(save = "default", status = 0, runLast = TRUE)
q(save = "default", status = 0, runLast = TRUE)
```
### Arguments
| | |
| --- | --- |
| `save` | a character string indicating whether the environment (workspace) should be saved, one of `"no"`, `"yes"`, `"ask"` or `"default"`. |
| `status` | the (numerical) error status to be returned to the operating system, where relevant. Conventionally `0` indicates successful completion. |
| `runLast` | should `.Last()` be executed? |
### Details
`save` must be one of `"no"`, `"yes"`, `"ask"` or `"default"`. In the first case the workspace is not saved, in the second it is saved and in the third the user is prompted and can also decide *not* to quit. The default is to ask in interactive use but may be overridden by command-line arguments (which must be supplied in non-interactive use).
Immediately *before* normal termination, `.Last()` is executed if the function `.Last` exists and `runLast` is true. If in interactive use there are errors in the `.Last` function, control will be returned to the command prompt, so do test the function thoroughly. There is a system analogue, `.Last.sys()`, which is run after `.Last()` if `runLast` is true.
Exactly what happens at termination of an **R** session depends on the platform and GUI interface in use. A typical sequence is to run `.Last()` and `.Last.sys()` (unless `runLast` is false), to save the workspace if requested (and in most cases also to save the session history: see `[savehistory](../../utils/html/savehistory)`), then run any finalizers (see `<reg.finalizer>`) that have been set to be run on exit, close all open graphics devices, remove the session temporary directory and print any remaining warnings (e.g., from `.Last()` and device closure).
Some error status values are used by **R** itself. The default error handler for non-interactive use effectively calls `q("no", 1, FALSE)` and returns error status 1. Error status 2 is used for **R** ‘suicide’, that is a catastrophic failure, and other small numbers are used by specific ports for initialization failures. It is recommended that users choose statuses of 10 or more.
Valid values of `status` are system-dependent, but `0:255` are normally valid. (Many OSes will report the last byte of the value, that is report the value modulo 256. But not all.)
### Warning
The value of `.Last` is for the end user to control: as it can be replaced later in the session, it cannot safely be used programmatically, e.g. by a package. The other way to set code to be run at the end of the session is to use a *finalizer*: see `<reg.finalizer>`.
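A sketch of the finalizer alternative mentioned above, which runs code at session end without clobbering a user's `.Last`:

```
## register clean-up to run when the R session ends
e <- new.env()
reg.finalizer(e, function(e) cat("cleaning up at session end\n"),
              onexit = TRUE)
```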
### Note
The `R.app` GUI on macOS has its own version of these functions with slightly different behaviour for the `save` argument (the GUI's ‘Startup’ preferences for this action are taken into account).
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[.First](startup)` for setting things on startup.
### Examples
```
## Not run: ## Unix-flavour example
.Last <- function() {
graphics.off() # close devices before printing
cat("Now sending PDF graphics to the printer:\n")
system("lpr Rplots.pdf")
cat("bye bye...\n")
}
quit("yes")
## End(Not run)
```
r None
`pos.to.env` Convert Positions in the Search Path to Environments
------------------------------------------------------------------
### Description
Returns the environment at a specified position in the search path.
### Usage
```
pos.to.env(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an integer between `1` and `length(search())`, the length of the search path, or `-1`. |
### Details
Several **R** functions for manipulating objects in environments (such as `<get>` and `<ls>`) allow specifying environments via corresponding positions in the search path. `pos.to.env` is a convenience function for programmers which converts these positions to corresponding environments; users will typically have no need for it. It is <primitive>.
`-1` is interpreted as the environment the function is called from.
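The `-1` case can be seen by calling `pos.to.env(-1)` inside a function (a small sketch):

```
f <- function() pos.to.env(-1)
identical(f(), globalenv())  # TRUE when f() is called at top level
```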
### Examples
```
pos.to.env(1) # R_GlobalEnv
# the next returns the base environment
pos.to.env(length(search()))
```
r None
`table` Cross Tabulation and Table Creation
--------------------------------------------
### Description
`table` uses the cross-classifying factors to build a contingency table of the counts at each combination of factor levels.
### Usage
```
table(...,
exclude = if (useNA == "no") c(NA, NaN),
useNA = c("no", "ifany", "always"),
dnn = list.names(...), deparse.level = 1)
as.table(x, ...)
is.table(x)
## S3 method for class 'table'
as.data.frame(x, row.names = NULL, ...,
responseName = "Freq", stringsAsFactors = TRUE,
sep = "", base = list(LETTERS))
```
### Arguments
| | |
| --- | --- |
| `...` | one or more objects which can be interpreted as factors (including character strings), or a list (or data frame) whose components can be so interpreted. (For `as.table`, arguments passed to specific methods; for `as.data.frame`, unused.) |
| `exclude` | levels to remove for all factors in `...`. If it does not contain `[NA](na)` and `useNA` is not specified, it implies `useNA = "ifany"`. See ‘Details’ for its interpretation for non-factor arguments. |
| `useNA` | whether to include `NA` values in the table. See ‘Details’. Can be abbreviated. |
| `dnn` | the names to be given to the dimensions in the result (the *dimnames names*). |
| `deparse.level` | controls how the default `dnn` is constructed. See ‘Details’. |
| `x` | an arbitrary **R** object, or an object inheriting from class `"table"` for the `as.data.frame` method. Note that `as.data.frame.table(x, *)` may be called explicitly for non-table `x` for “reshaping” `<array>`s. |
| `row.names` | a character vector giving the row names for the data frame. |
| `responseName` | The name to be used for the column of table entries, usually counts. |
| `stringsAsFactors` | logical: should the classifying factors be returned as factors (the default) or character vectors? |
| `sep, base` | passed to `[provideDimnames](dimnames)`. |
### Details
If the argument `dnn` is not supplied, the internal function `list.names` is called to compute the ‘dimname names’. If the arguments in `...` are named, those names are used. For the remaining arguments, `deparse.level = 0` gives an empty name, `deparse.level = 1` uses the supplied argument if it is a symbol, and `deparse.level = 2` will deparse the argument.
Only when `exclude` is specified (i.e., not by default) and non-empty, will `table` potentially drop levels of factor arguments.
`useNA` controls if the table includes counts of `NA` values: the allowed values correspond to never (`"no"`), only if the count is positive (`"ifany"`) and even for zero counts (`"always"`). Note the somewhat “pathological” case of two different kinds of `NA`s which are treated differently, depending on both `useNA` and `exclude`, see `d.patho` in the ‘Examples:’ below.
Both `exclude` and `useNA` operate on an “all or none” basis. If you want to control the dimensions of a multiway table separately, modify each argument using `<factor>` or `[addNA](factor)`.
Non-factor arguments `a` are coerced via `factor(a, exclude=exclude)`. Since **R** 3.4.0, care is taken *not* to count the excluded values (where they were included in the `NA` count, previously).
The `summary` method for class `"table"` (used for objects created by `table` or `[xtabs](../../stats/html/xtabs)`) gives basic information and performs a chi-squared test for independence of factors (note that the function `[chisq.test](../../stats/html/chisq.test)` currently only handles 2-d tables).
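The summary method's independence test can be sketched as:

```
tt <- with(warpbreaks, table(wool, tension))
summary(tt)  # prints the counts, number of factors, and a chi-squared test
```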
### Value
`table()` returns a *contingency table*, an object of class `"table"`, an array of integer values. Note that unlike S the result is always an `<array>`, a 1D array if one factor is given.
`as.table` and `is.table` coerce to and test for contingency table, respectively.
The `as.data.frame` method for objects inheriting from class `"table"` can be used to convert the array-based representation of a contingency table to a data frame containing the classifying factors and the corresponding entries (the latter as component named by `responseName`). This is the inverse of `[xtabs](../../stats/html/xtabs)`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<tabulate>` is the underlying function and allows finer control.
Use `[ftable](../../stats/html/ftable)` for printing (and more) of multidimensional tables. `[margin.table](marginsums)`, `[prop.table](proportions)`, `[addmargins](../../stats/html/addmargins)`.
`[addNA](factor)` for constructing factors with `[NA](na)` as a level.
`[xtabs](../../stats/html/xtabs)` for cross tabulation of data frames with a formula interface.
### Examples
```
require(stats) # for rpois and xtabs
## Simple frequency distribution
table(rpois(100, 5))
## Check the design:
with(warpbreaks, table(wool, tension))
table(state.division, state.region)
# simple two-way contingency table
with(airquality, table(cut(Temp, quantile(Temp)), Month))
a <- letters[1:3]
table(a, sample(a)) # dnn is c("a", "")
table(a, sample(a), deparse.level = 0) # dnn is c("", "")
table(a, sample(a), deparse.level = 2) # dnn is c("a", "sample(a)")
## xtabs() <-> as.data.frame.table() :
UCBAdmissions ## already a contingency table
DF <- as.data.frame(UCBAdmissions)
class(tab <- xtabs(Freq ~ ., DF)) # xtabs & table
## tab *is* "the same" as the original table:
all(tab == UCBAdmissions)
all.equal(dimnames(tab), dimnames(UCBAdmissions))
a <- rep(c(NA, 1/0:3), 10)
table(a) # does not report NA's
table(a, exclude = NULL) # reports NA's
b <- factor(rep(c("A","B","C"), 10))
table(b)
table(b, exclude = "B")
d <- factor(rep(c("A","B","C"), 10), levels = c("A","B","C","D","E"))
table(d, exclude = "B")
print(table(b, d), zero.print = ".")
## NA counting:
is.na(d) <- 3:4
d. <- addNA(d)
d.[1:7]
table(d.) # ", exclude = NULL" is not needed
## i.e., if you want to count the NA's of 'd', use
table(d, useNA = "ifany")
## "pathological" case:
d.patho <- addNA(c(1,NA,1:2,1:3))[-7]; is.na(d.patho) <- 3:4
d.patho
## just 3 consecutive NA's ? --- well, have *two* kinds of NAs here :
as.integer(d.patho) # 1 4 NA NA 1 2
##
## In R >= 3.4.0, table() allows to differentiate:
table(d.patho) # counts the "unusual" NA
table(d.patho, useNA = "ifany") # counts all three
table(d.patho, exclude = NULL) # (ditto)
table(d.patho, exclude = NA) # counts none
## Two-way tables with NA counts. The 3rd variant is absurd, but shows
## something that cannot be done using exclude or useNA.
with(airquality,
     table(OzHi = Ozone > 80, Month, useNA = "ifany"))
with(airquality,
     table(OzHi = Ozone > 80, Month, useNA = "always"))
with(airquality,
     table(OzHi = Ozone > 80, addNA(Month)))
```
r None
`charmatch` Partial String Matching
------------------------------------
### Description
`charmatch` seeks matches for the elements of its first argument among those of its second.
### Usage
```
charmatch(x, table, nomatch = NA_integer_)
```
### Arguments
| | |
| --- | --- |
| `x` | the values to be matched: converted to a character vector by `[as.character](character)`. [Long vectors](longvectors) are supported. |
| `table` | the values to be matched against: converted to a character vector. [Long vectors](longvectors) are not supported. |
| `nomatch` | the (integer) value to be returned at non-matching positions. |
### Details
Exact matches are preferred to partial matches (those where the value to be matched has an exact match to the initial part of the target, but the target is longer).
If there is a single exact match or no exact match and a unique partial match then the index of the matching value is returned; if multiple exact or multiple partial matches are found then `0` is returned and if no match is found then `nomatch` is returned.
`NA` values are treated as the string constant `"NA"`.
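A few more cases illustrating these rules (the lookup table here is arbitrary):

```
tab <- c("NA", "NB")                # an arbitrary table
charmatch(NA, tab)                  # 1: NA is matched as the string "NA"
charmatch("N", tab)                 # 0: ambiguous (two partial matches)
charmatch("x", tab)                 # NA: no match, default 'nomatch'
charmatch("x", tab, nomatch = -1L)  # -1
```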
### Value
An integer vector of the same length as `x`, giving the indices of the elements in `table` which matched, or `nomatch`.
### Author(s)
This function is based on a C function written by Terry Therneau.
### See Also
`<pmatch>`, `<match>`.
`[startsWith](startswith)` for another matching of initial parts of strings; `<grep>` or `[regexpr](grep)` for more general (regexp) matching of strings.
### Examples
```
charmatch("", "") # returns 1
charmatch("m", c("mean", "median", "mode")) # returns 0
charmatch("med", c("mean", "median", "mode")) # returns 2
```
r None
`La_version` LAPACK Version
----------------------------
### Description
Report the version of LAPACK in use.
### Usage
```
La_version()
```
### Value
A character vector of length one.
### See Also
`[extSoftVersion](extsoftversion)` for versions of other third-party software.
`[La\_library](la_library)` for the binary/executable file with LAPACK in use.
### Examples
```
La_version()
```
r None
`missing` Does a Formal Argument have a Value?
-----------------------------------------------
### Description
`missing` can be used to test whether a value was specified as an argument to a function.
### Usage
```
missing(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a formal argument. |
### Details
`missing(x)` is only reliable if `x` has not been altered since entering the function: in particular it will *always* be false after `x <- match.arg(x)`.
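This caveat can be demonstrated directly; the function below is a made-up sketch, not part of base **R**. Once any assignment to the argument has taken place, `missing()` reports `FALSE`:

```
f <- function(x = 1) {
  before <- missing(x)
  x <- x          # any assignment, e.g. x <- match.arg(x), resets this
  after <- missing(x)
  c(before = before, after = after)
}
f()               # before TRUE, after FALSE
f(2)              # before FALSE, after FALSE
```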
The example shows how a plotting function can be written to work with either a pair of vectors giving x and y coordinates of points to be plotted or a single vector giving y values to be plotted against their indices.
Currently `missing` can only be used in the immediate body of the function that defines the argument, not in the body of a nested function or a `local` call. This may change in the future.
This is a ‘special’ <primitive> function: it must not evaluate its argument.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer.
### See Also
`<substitute>` for argument expression; `[NA](na)` for missing values in data.
### Examples
```
myplot <- function(x, y) {
    if(missing(y)) {
        y <- x
        x <- 1:length(y)
    }
    plot(x, y)
}
```
r None
`MathFun` Miscellaneous Mathematical Functions
-----------------------------------------------
### Description
`abs(x)` computes the absolute value of x, `sqrt(x)` computes the (principal) square root of x, *√{x}*.
The naming follows the standard for computer languages such as C or Fortran.
### Usage
```
abs(x)
sqrt(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a numeric or `<complex>` vector or array. |
### Details
These are [internal generic](internalmethods) <primitive> functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic. For complex arguments (and the default method), `z`, `abs(z) == [Mod](complex)(z)` and `sqrt(z) == z^0.5`.
`abs(x)` returns an `<integer>` vector when `x` is `integer` or `<logical>`.
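A brief sketch of the return types, assuming the behaviour described above (`sqrt` of a negative *numeric* gives `NaN` with a warning; use a complex input for a complex root):

```
typeof(abs(-3L))    # "integer"
typeof(abs(TRUE))   # "integer": logical input gives an integer result
typeof(sqrt(4L))    # "double": sqrt() does not preserve integerness
sqrt(-1)            # NaN, with a warning
sqrt(-1+0i)         # the principal complex square root, 0+1i
```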
### S4 methods
Both are S4 generic and members of the `[Math](../../methods/html/s4groupgeneric)` group generic.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[Arithmetic](arithmetic)` for simple, `<log>` for logarithmic, `[sin](trig)` for trigonometric, and `[Special](special)` for special mathematical functions.
‘[plotmath](../../grdevices/html/plotmath)’ for the use of `sqrt` in plot annotation.
### Examples
```
require(stats) # for spline
require(graphics)
xx <- -9:9
plot(xx, sqrt(abs(xx)), col = "red")
lines(spline(xx, sqrt(abs(xx)), n=101), col = "pink")
```
r None
`validUTF8` Check if a Character Vector is Validly Encoded
-----------------------------------------------------------
### Description
Check if each element of a character vector is valid in its implied encoding.
### Usage
```
validUTF8(x)
validEnc(x)
```
### Arguments
| | |
| --- | --- |
| `x` | a character vector. |
### Details
These use similar checks to those used by functions such as `<grep>`.
`validUTF8` ignores any marked encoding (see `[Encoding](encoding)`) and so looks directly if the bytes in each string are valid UTF-8. (For the validity of ‘noncharacters’ see the help for `[intToUtf8](utf8conversion)`.)
`validEnc` regards character strings as validly encoded unless their encodings are marked as UTF-8 or they are unmarked and the **R** session is in a UTF-8 or other multi-byte locale. (The checks in other multi-byte locales depend on the OS and as with `<iconv>` not all invalid inputs may be detected.)
### Value
A logical vector of the same length as `x`. `NA` elements are regarded as validly encoded.
### Note
It would be possible to check for the validity of character strings in a Latin-1 encoding, but extensions such as CP1252 are widely accepted as ‘Latin-1’ and 8-bit encodings rarely need to be checked for validity.
### Examples
```
x <-
## from example(text)
c("Jetz", "no", "chli", "z\xc3\xbcrit\xc3\xbc\xc3\xbctsch:",
"(noch", "ein", "bi\xc3\x9fchen", "Z\xc3\xbc", "deutsch)",
## from a CRAN check log
"\xfa\xb4\xbf\xbf\x9f")
validUTF8(x)
validEnc(x) # depends on the locale
Encoding(x) <-"UTF-8"
validEnc(x) # typically the last, x[10], is invalid
## Maybe advantageous to declare it "unknown":
G <- x ; Encoding(G[!validEnc(G)]) <- "unknown"
try( substr(x, 1,1) ) # gives 'invalid multibyte string' error in a UTF-8 locale
try( substr(G, 1,1) ) # works in a UTF-8 locale
nchar(G) # fine, too
## but it is not "more valid" typically:
all.equal(validEnc(x),
          validEnc(G)) # typically TRUE
```
r None
`library` Loading/Attaching and Listing of Packages
----------------------------------------------------
### Description
`library` and `require` load and attach add-on packages.
### Usage
```
library(package, help, pos = 2, lib.loc = NULL,
        character.only = FALSE, logical.return = FALSE,
        warn.conflicts, quietly = FALSE,
        verbose = getOption("verbose"),
        mask.ok, exclude, include.only,
        attach.required = missing(include.only))

require(package, lib.loc = NULL, quietly = FALSE,
        warn.conflicts,
        character.only = FALSE,
        mask.ok, exclude, include.only,
        attach.required = missing(include.only))

conflictRules(pkg, mask.ok = NULL, exclude = NULL)
```
### Arguments
| | |
| --- | --- |
| `package, help` | the name of a package, given as a <name> or literal character string, or a character string, depending on whether `character.only` is `FALSE` (default) or `TRUE`. |
| `pos` | the position on the search list at which to attach the loaded namespace. Can also be the name of a position on the current search list as given by `<search>()`. |
| `lib.loc` | a character vector describing the location of **R** library trees to search through, or `NULL`. The default value of `NULL` corresponds to all libraries currently known to `[.libPaths](libpaths)()`. Non-existent library trees are silently ignored. |
| `character.only` | a logical indicating whether `package` or `help` can be assumed to be character strings. |
| `logical.return` | logical. If `TRUE`, the function returns `TRUE` or `FALSE` to indicate success or failure, rather than its usual value. |
| `warn.conflicts` | logical. If `TRUE`, warnings are printed about `<conflicts>` from attaching the new package. A conflict is a function masking a function, or a non-function masking a non-function. The default is `TRUE` unless specified as `FALSE` in the `conflicts.policy` option. |
| `verbose` | a logical. If `TRUE`, additional diagnostics are printed. |
| `quietly` | a logical. If `TRUE`, no message confirming package attaching is printed, and most often, no errors/warnings are printed if package attaching fails. |
| `pkg` | character string naming a package. |
| `mask.ok` | character vector of names of objects that can mask objects on the search path without signaling an error when strict conflict checking is enabled. |
| `exclude,include.only` | character vector of names of objects to exclude or include in the attached frame. Only one of these arguments may be used in a call to `library` or `require`. |
| `attach.required` | logical specifying whether required packages listed in the `Depends` clause of the `DESCRIPTION` file should be attached automatically. |
### Details
`library(package)` and `require(package)` both load the namespace of the package with name `package` and attach it on the search list. `require` is designed for use inside other functions; it returns `FALSE` and gives a warning (rather than an error as `library()` does by default) if the package does not exist. Both functions check and update the list of currently attached packages and do not reload a namespace which is already loaded. (If you want to reload such a package, call `<detach>(unload = TRUE)` or `[unloadNamespace](ns-load)` first.) If you want to load a package without attaching it on the search list, see `[requireNamespace](ns-load)`.
To suppress messages during the loading of packages use `[suppressPackageStartupMessages](message)`: this will suppress all messages from **R** itself but not necessarily all those from package authors.
If `library` is called with no `package` or `help` argument, it lists all available packages in the libraries specified by `lib.loc`, and returns the corresponding information in an object of class `"libraryIQR"`. (The structure of this class may change in future versions.) Use `.packages(all.available = TRUE)` to obtain just the names of all available packages, and `[installed.packages](../../utils/html/installed.packages)()` for even more information.
`library(help = somename)` computes basic information about the package somename, and returns this in an object of class `"packageInfo"`. (The structure of this class may change in future versions.) When used with the default value (`NULL`) for `lib.loc`, the attached packages are searched before the libraries.
### Value
Normally `library` returns (invisibly) the list of attached packages, but `TRUE` or `FALSE` if `logical.return` is `TRUE`. When called as `library()` it returns an object of class `"libraryIQR"`, and for `library(help=)`, one of class `"packageInfo"`.
`require` returns (invisibly) a logical indicating whether the required package is available.
### Conflicts
Handling of conflicts depends on the setting of the `conflicts.policy` option. If this option is not set, then conflicts result in warning messages if the argument `warn.conflicts` is `TRUE`. If the option is set to the character string `"strict"`, then all unresolved conflicts signal errors. Conflicts can be resolved using the `mask.ok`, `exclude`, and `include.only` arguments to `library` and `require`. Defaults for `mask.ok` and `exclude` can be specified using `conflictRules`.
If the `conflicts.policy` option is set to the string `"depends.ok"` then conflicts resulting from attaching declared dependencies will not produce errors, but other conflicts will. This is likely to be the best setting for most users wanting some additional protection against unexpected conflicts.
The policy can be tuned further by specifying the `conflicts.policy` option as a named list with the following fields:
`error`:
logical; if `TRUE` treat unresolved conflicts as errors.
`warn`:
logical; unless `FALSE` issue a warning message when conflicts are found.
`generics.ok`:
logical; if `TRUE` ignore conflicts created by defining S4 generics for functions on the search path.
`depends.ok`:
logical; if `TRUE` do not treat conflicts with required packages as errors.
`can.mask`:
character vector of names of packages that are allowed to be masked. These would typically be base packages attached by default.
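For illustration, a policy list combining these fields might look as follows; the chosen values and package names are only an example, not a recommendation:

```
## A sketch of a strict-but-practical policy: errors for unresolved
## conflicts, while declared dependencies and the usual base packages
## may mask freely.
options(conflicts.policy = list(
  error      = TRUE,                # unresolved conflicts are errors
  warn       = TRUE,                # still report conflicts found
  depends.ok = TRUE,                # declared dependencies may mask
  can.mask   = c("base", "methods", "utils",
                 "grDevices", "graphics", "stats")
))

## Individual conflicts can then be allowed per package, e.g.
## library(dplyr, mask.ok = c("filter", "lag"))
```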
### Licenses
Some packages have restrictive licenses, and there is a mechanism to allow users to be aware of such licenses. If `[getOption](options)("checkPackageLicense") == TRUE`, then at first use of a package with a not-known-to-be-FOSS (see below) license the user is asked to view and accept the license: a list of accepted licenses is stored in file ‘~/.R/licensed’. In a non-interactive session it is an error to use such a package whose license has not already been recorded as accepted.
As from **R** 3.4.0 the license check is done when the namespace is loaded.
Free or Open Source Software (FOSS, e.g. <https://en.wikipedia.org/wiki/FOSS>) packages are determined by the same filters used by `[available.packages](../../utils/html/available.packages)` but applied to just the current package, not its dependencies.
There can also be a site-wide file ‘R\_HOME/etc/licensed.site’ of packages (one per line).
### Formal methods
`library` takes some further actions when package methods is attached (as it is by default). Packages may define formal generic functions as well as re-defining functions in other packages (notably base) to be generic, and this information is cached whenever such a namespace is loaded after methods and re-defined functions ([implicit generic](../../methods/html/implicitgeneric)s) are excluded from the list of conflicts. The caching and check for conflicts require looking for a pattern of objects; the search may be avoided by defining an object `.noGenerics` (with any value) in the namespace. Naturally, if the package *does* have any such methods, this will prevent them from being used.
### Note
`library` and `require` can only load/attach an *installed* package, and this is detected by having a ‘DESCRIPTION’ file containing a Built: field.
Under Unix-alikes, the code checks that the package was installed under a similar operating system as given by `R.version$platform` (the canonical name of the platform under which R was compiled), provided it contains compiled code. Packages which do not contain compiled code can be shared between Unix-alikes, but not to other OSes because of potential problems with line endings and OS-specific help files. If sub-architectures are used, the OS similarity is not checked since the OS used to build may differ (e.g. `i386-pc-linux-gnu` code can be built on an `x86_64-unknown-linux-gnu` OS).
The package name given to `library` and `require` must match the name given in the package's ‘DESCRIPTION’ file exactly, even on case-insensitive file systems such as are common on Windows and macOS.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[.libPaths](libpaths)`, `[.packages](zpackages)`.
`<attach>`, `<detach>`, `<search>`, `[objects](ls)`, `<autoload>`, `[requireNamespace](ns-load)`, `<library.dynam>`, `[data](../../utils/html/data)`, `[install.packages](../../utils/html/install.packages)` and `[installed.packages](../../utils/html/installed.packages)`; `[INSTALL](../../utils/html/install)`, `[REMOVE](../../utils/html/remove)`.
The initial set of packages attached is set by `<options>(defaultPackages=)`: see also `[Startup](startup)`.
### Examples
```
library() # list all available packages
library(lib.loc = .Library) # list all packages in the default library
library(help = splines) # documentation on package 'splines'
library(splines) # attach package 'splines'
require(splines) # the same
search() # "splines", too
detach("package:splines")
# if the package name is in a character vector, use
pkg <- "splines"
library(pkg, character.only = TRUE)
detach(pos = match(paste("package", pkg, sep = ":"), search()))
require(pkg, character.only = TRUE)
detach(pos = match(paste("package", pkg, sep = ":"), search()))
require(nonexistent) # FALSE
## Not run:
## if you want to mask as little as possible, use
library(mypkg, pos = "package:base")
## End(Not run)
```
r None
`length` Length of an Object
-----------------------------
### Description
Get or set the length of vectors (including lists) and factors, and of any other **R** object for which a method has been defined.
### Usage
```
length(x)
length(x) <- value
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object. For replacement, a vector or factor. |
| `value` | a non-negative integer or double (which will be rounded down). |
### Details
Both functions are generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). `length<-` has a `"factor"` method.
The replacement form can be used to reset the length of a vector. If a vector is shortened, extra values are discarded and when a vector is lengthened, it is padded out to its new length with `[NA](na)`s (`nul` for raw vectors).
Both are <primitive> functions.
### Value
The default method for `length` currently returns a non-negative `<integer>` of length 1, except for vectors of more than *2^31 - 1* elements, when it returns a double.
For vectors (including lists) and factors the length is the number of elements. For an environment it is the number of objects in the environment, and `NULL` has length 0. For expressions and pairlists (including [language objects](is.language) and dotlists) it is the length of the pairlist chain. All other objects (including functions) have length one: note that for functions this differs from S.
The replacement form removes all the attributes of `x` except its names, which are adjusted (and if necessary extended by `""`).
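The behaviour of the replacement form can be sketched directly (the vector and the extra attribute here are arbitrary):

```
x <- c(a = 1, b = 2, c = 3)
attr(x, "extra") <- "some attribute"
length(x) <- 5   # lengthened: padded with NAs, names extended by ""
x                # the "extra" attribute is gone, the names survive
length(x) <- 2   # shortened: values beyond the new length are discarded
x
```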
### Warning
Package authors have written methods that return a result of length other than one ([Formula](https://CRAN.R-project.org/package=Formula)) and that return a vector of type `<double>` ([Matrix](https://CRAN.R-project.org/package=Matrix)), even with non-integer values (earlier versions of [sets](https://CRAN.R-project.org/package=sets)). Where a single double value is returned that can be represented as an integer it is returned as a length-one integer vector.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`<nchar>` for counting the number of characters in character vectors, `<lengths>` for getting the length of every element in a list.
### Examples
```
length(diag(4)) # = 16 (4 x 4)
length(options()) # 12 or more
length(y ~ x1 + x2 + x3) # 3
length(expression(x, {y <- x^2; y+2}, x^y)) # 3
## from example(warpbreaks)
require(stats)
fm1 <- lm(breaks ~ wool * tension, data = warpbreaks)
length(fm1$call) # 3, lm() and two arguments.
length(formula(fm1)) # 3, ~ lhs rhs
```
r None
`taskCallbackManager` Create an R-level Task Callback Manager
--------------------------------------------------------------
### Description
This provides an entirely **R**-language mechanism for managing callbacks or actions that are invoked at the conclusion of each top-level task. Essentially, we register a single **R** function from this manager with the underlying, native task-callback mechanism and this function handles invoking the other R callbacks under the control of the manager. The manager consists of a collection of functions that access shared variables to manage the list of user-level callbacks.
### Usage
```
taskCallbackManager(handlers = list(), registered = FALSE,
                    verbose = FALSE)
```
### Arguments
| | |
| --- | --- |
| `handlers` | this can be a list of callbacks in which each element is a list with an element named `"f"` which is a callback function, and an optional element named `"data"` which is the 5-th argument to be supplied to the callback when it is invoked. Typically this argument is not specified, and one uses `add` to register callbacks after the manager is created. |
| `registered` | a logical value indicating whether the `evaluate` function has already been registered with the internal task callback mechanism. This is usually `FALSE` and the first time a callback is added via the `add` function, the `evaluate` function is automatically registered. One can control when the function is registered by specifying `TRUE` for this argument and calling `[addTaskCallback](taskcallback)` manually. |
| `verbose` | a logical value, which if `TRUE`, causes information to be printed to the console about certain activities this dispatch manager performs. This is useful for debugging callbacks and the handler itself. |
### Value
A `<list>` containing 6 functions:
| | |
| --- | --- |
| ``add()`` | register a callback with this manager, giving the function, an optional 5-th argument, an optional name by which the callback is stored in the list, and a `register` argument which controls whether the `evaluate` function is registered with the internal C-level dispatch mechanism if necessary. |
| ``remove()`` | remove an element from the manager's collection of callbacks, either by name or position/index. |
| ``evaluate()`` | the ‘real’ callback function that is registered with the C-level dispatch mechanism and which invokes each of the R-level callbacks within this manager's control. |
| ``suspend()`` | a function to set the suspend state of the manager. If it is suspended, none of the callbacks will be invoked when a task is completed. One sets the state by specifying a logical value for the `status` argument. |
| ``register()`` | a function to register the `evaluate` function with the internal C-level dispatch mechanism. This is done automatically by the `add` function, but can be called manually. |
| ``callbacks()`` | returns the list of callbacks being maintained by this manager. |
### References
Duncan Temple Lang (2001) *Top-level Task Callbacks in R*, <https://developer.r-project.org/TaskHandlers.pdf>
### See Also
`[addTaskCallback](taskcallback)`, `[removeTaskCallback](taskcallback)`, `[getTaskCallbackNames](taskcallbacknames)` and the reference.
### Examples
```
# create the manager
h <- taskCallbackManager()
# add a callback
h$add(function(expr, value, ok, visible) {
        cat("In handler\n")
        return(TRUE)
      }, name = "simpleHandler")
# look at the internal callbacks.
getTaskCallbackNames()
# look at the R-level callbacks
names(h$callbacks())
removeTaskCallback("R-taskCallbackManager")
```
r None
`Deprecated` Marking Objects as Deprecated
-------------------------------------------
### Description
When an object is about to be removed from **R** it is first deprecated and should include a call to `.Deprecated`.
### Usage
```
.Deprecated(new, package = NULL, msg,
            old = as.character(sys.call(sys.parent()))[1L])
```
### Arguments
| | |
| --- | --- |
| `new` | character string: A suggestion for a replacement function. |
| `package` | character string: The package to be used when suggesting where the deprecated function might be listed. |
| `msg` | character string: A message to be printed, if missing a default message is used. |
| `old` | character string specifying the function (default) or usage which is being deprecated. |
### Details
`.Deprecated("<new name>")` is called from deprecated functions. The original help page for these functions is often available at `help("oldName-deprecated")` (note the quotes). Functions should be listed in `help("pkg-deprecated")` for an appropriate `pkg`, including `base`.
`.Deprecated` signals a warning of class `deprecatedWarning` with fields `old`, `new`, and `package`.
### See Also
`[Defunct](defunct)`
`base-deprecated` and so on which list the deprecated functions in the packages.
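A minimal sketch of the intended usage (`oldFun` and `newFun` are hypothetical names, not functions in base **R**):

```
oldFun <- function(x) {
  .Deprecated("newFun")   # warn, then keep working for now
  x + 1
}

## Calling it signals a 'deprecatedWarning' naming the replacement:
w <- tryCatch(oldFun(1), warning = function(w) w)
inherits(w, "deprecatedWarning")
conditionMessage(w)       # mentions 'newFun'
```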
| programming_docs |
r None
`system.time` CPU Time Used
----------------------------
### Description
Return CPU (and other) times that `expr` used.
### Usage
```
system.time(expr, gcFirst = TRUE)
```
### Arguments
| | |
| --- | --- |
| `expr` | Valid **R** expression to be timed. |
| `gcFirst` | Logical - should a garbage collection be performed immediately before the timing? Default is `TRUE`. |
### Details
`system.time` calls the function `<proc.time>`, evaluates `expr`, and then calls `proc.time` once more, returning the difference between the two `proc.time` calls.
`unix.time` has been an alias of `system.time`, for compatibility with S, and has finally been deprecated in 2016.
Timings of evaluations of the same expression can vary considerably depending on whether the evaluation triggers a garbage collection. When `gcFirst` is `TRUE` a garbage collection (`<gc>`) will be performed immediately before the evaluation of `expr`. This will usually produce more consistent timings.
### Value
An object of class `"proc_time"`: see `<proc.time>` for details.
### See Also
`<proc.time>`, `[time](../../stats/html/time)` which is for time series.
`[setTimeLimit](settimelimit)` to limit the (CPU/elapsed) time **R** is allowed to use.
`[Sys.time](sys.time)` to get the current date & time.
### Examples
```
require(stats)
system.time(for(i in 1:100) mad(runif(1000)))
## Not run:
exT <- function(n = 10000) {
  # Purpose: Test if system.time works ok; n: loop size
  system.time(for(i in 1:n) x <- mean(rt(1000, df = 4)))
}
#-- Try to interrupt one of the following (using Ctrl-C / Escape):
exT() #- about 4 secs on a 2.5GHz Xeon
system.time(exT()) #~ +/- same
## End(Not run)
```
r None
`replace` Replace Values in a Vector
-------------------------------------
### Description
`replace` replaces the values in `x` with indices given in `list` by those given in `values`. If necessary, the values in `values` are recycled.
### Usage
```
replace(x, list, values)
```
### Arguments
| | |
| --- | --- |
| `x` | vector |
| `list` | an index vector |
| `values` | replacement values |
### Value
A vector with the values replaced.
### Note
`x` is unchanged: remember to assign the result.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
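Since `replace(x, list, values)` is essentially `{x[list] <- values; x}`, a short sketch suffices to show the recycling of `values` and that `x` itself is left unchanged (the vector here is arbitrary):

```
x <- 1:10
replace(x, c(2, 4), 0)        # values at positions 2 and 4 set to 0
replace(x, x > 8, 0)          # a logical index vector also works
replace(x, 1:4, c(-1, -2))    # 'values' recycled over positions 1:4
x                             # unchanged: remember to assign the result
```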
r None
`sink` Send R Output to a File
-------------------------------
### Description
`sink` diverts **R** output to a connection (and stops such diversions).
`sink.number()` reports how many diversions are in use.
`sink.number(type = "message")` reports the number of the connection currently being used for error messages.
### Usage
```
sink(file = NULL, append = FALSE, type = c("output", "message"),
     split = FALSE)
sink.number(type = c("output", "message"))
```
### Arguments
| | |
| --- | --- |
| `file` | a writable [connection](connections) or a character string naming the file to write to, or `NULL` to stop sink-ing. |
| `append` | logical. If `TRUE`, output will be appended to `file`; otherwise, it will overwrite the contents of `file`. |
| `type` | character string. Either the output stream or the messages stream. The name will be partially matched so can be abbreviated. |
| `split` | logical: if `TRUE`, output will be sent to the new sink and to the current output stream, like the Unix program `tee`. |
### Details
`sink` diverts **R** output to a connection (and must be used again to finish such a diversion, see below!). If `file` is a character string, a file connection with that name will be established for the duration of the diversion.
Normal **R** output (to connection `[stdout](showconnections)`) is diverted by the default `type = "output"`. Only prompts and (most) messages continue to appear on the console. Messages sent to `[stderr](showconnections)()` (including those from `<message>`, `<warning>` and `<stop>`) can be diverted by `sink(type = "message")` (see below).
`sink()` or `sink(file = NULL)` ends the last diversion (of the specified type). There is a stack of diversions for normal output, so output reverts to the previous diversion (if there was one). The stack is of up to 21 connections (20 diversions).
If `file` is a connection it will be opened if necessary (in `"wt"` mode) and closed once it is removed from the stack of diversions.
`split = TRUE` only splits **R** output (via `Rvprintf`) and the default output from `[writeLines](writelines)`: it does not split all output that might be sent to `[stdout](showconnections)()`.
Sink-ing the messages stream should be done only with great care. For that stream `file` must be an already open connection, and there is no stack of connections.
If `file` is a character string, the file will be opened using the current encoding. If you want a different encoding (e.g., to represent strings which have been stored in UTF-8), use a `[file](connections)` connection — but some ways to produce **R** output will already have converted such strings to the current encoding.
### Value
`sink` returns `NULL`.
For `sink.number()` the number (0, 1, 2, ...) of diversions of output in place.
For `sink.number("message")` the connection number used for messages, 2 if no diversion has been used.
### Warning
Do not use a connection that is open for `sink` for any other purpose. The software will stop you from inadvertently closing such a connection.
Do not sink the messages stream unless you understand the source code implementing it and hence the pitfalls.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer.
### See Also
`[capture.output](../../utils/html/capture.output)`
### Examples
```
sink("sink-examp.txt")
i <- 1:10
outer(i, i, "*")
sink()
## capture all the output to a file.
zz <- file("all.Rout", open = "wt")
sink(zz)
sink(zz, type = "message")
try(log("a"))
## revert output back to the console -- only then access the file!
sink(type = "message")
sink()
file.show("all.Rout")
```
`files` File Manipulation
--------------------------
### Description
These functions provide a low-level interface to the computer's file system.
### Usage
```
file.create(..., showWarnings = TRUE)
file.exists(...)
file.remove(...)
file.rename(from, to)
file.append(file1, file2)
file.copy(from, to, overwrite = recursive, recursive = FALSE,
copy.mode = TRUE, copy.date = FALSE)
file.symlink(from, to)
file.link(from, to)
```
### Arguments
| | |
| --- | --- |
| `..., file1, file2` | character vectors, containing file names or paths. |
| `from, to` | character vectors, containing file names or paths. For `file.copy` and `file.symlink` `to` can alternatively be the path to a single existing directory. |
| `overwrite` | logical; should existing destination files be overwritten? |
| `showWarnings` | logical; should the warnings on failure be shown? |
| `recursive` | logical. If `to` is a directory, should directories in `from` be copied (and their contents)? (Like `cp -R` on POSIX OSes.) |
| `copy.mode` | logical: should file permission bits be copied where possible? |
| `copy.date` | logical: should file dates be preserved where possible? See `[Sys.setFileTime](sys.setfiletime)`. |
### Details
The `...` arguments are concatenated to form one character string: you can specify the files separately or as one vector. All of these functions expand path names: see `<path.expand>`.
`file.create` creates files with the given names if they do not already exist and truncates them if they do. They are created with the maximal read/write permissions allowed by the ‘[umask](files2)’ setting (where relevant). By default a warning is given (with the reason) if the operation fails.
`file.exists` returns a logical vector indicating whether the files named by its argument exist. (Here ‘exists’ is in the sense of the system's `stat` call: a file will be reported as existing only if you have the permissions needed by `stat`. Existence can also be checked by `<file.access>`, which might use different permissions and so obtain a different result. Note that the existence of a file does not imply that it is readable: for that use `<file.access>`.) What constitutes a ‘file’ is system-dependent, but should include directories. (However, directory names must not include a trailing backslash or slash on Windows.) Note that if the file is a symbolic link on a Unix-alike, the result indicates if the link points to an actual file, not just if the link exists. Lastly, note the *different* function `<exists>` which checks for existence of **R** objects.
`file.remove` attempts to remove the files named in its argument. On most Unix platforms ‘file’ includes *empty* directories, symbolic links, fifos and sockets. On Windows, ‘file’ means a regular file and not, say, an empty directory.
`file.rename` attempts to rename files (and `from` and `to` must be of the same length). Where file permissions allow, this will overwrite an existing element of `to`. This is subject to the limitations of the OS's corresponding system call (see something like `man 2 rename` on a Unix-alike): in particular in the interpretation of ‘file’: most platforms will not rename files from one file system to another. **NB:** This means that renaming a file from a temporary directory to the user's filespace or during package installation will often fail. (On Windows, `file.rename` can rename files but not directories across volumes.) On platforms which allow directories to be renamed, typically either neither or both of `from` and `to` must be a directory, and if `to` exists it must be an empty directory.
`file.append` attempts to append the files named by its second argument to those named by its first. The **R** subscript recycling rule is used to align names given in vectors of different lengths.
`file.copy` works in a similar way to `file.append` but with the arguments in the natural order for copying. Copying to existing destination files is skipped unless `overwrite = TRUE`. The `to` argument can specify a single existing directory. If `copy.mode = TRUE` file read/write/execute permissions are copied where possible, restricted by ‘[umask](files2)’. (On Windows this applies only to files.) Other security attributes such as ACLs are not copied. On a POSIX filesystem the targets of symbolic links will be copied rather than the links themselves, and hard links are copied separately. Using `copy.date = TRUE` may or may not copy the timestamp exactly (for example, fractional seconds may be omitted), but is more likely to do so as from **R** 3.4.0.
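A minimal sketch of copying into an existing directory (paths here are temporary placeholders):

```r
## Copy a single file into a directory, preserving its timestamp.
src <- tempfile(fileext = ".txt")
writeLines("hello", src)
dst <- tempdir()                       # an existing directory
file.copy(src, dst, overwrite = TRUE, copy.date = TRUE)
file.exists(file.path(dst, basename(src)))   # TRUE
```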
`file.symlink` and `file.link` make symbolic and hard links on those file systems which support them. For `file.symlink` the `to` argument can specify a single existing directory. (Unix and macOS native filesystems support both. Windows has hard links to files on NTFS file systems and concepts related to symbolic links on recent versions: see the section below on the Windows version of this help page. What happens on a FAT or SMB-mounted file system is OS-specific.)
File arguments with a marked encoding (see `[Encoding](encoding)`) are, if possible, translated to the native encoding, except on Windows where Unicode file operations are used (so marking as UTF-8 can be used to access file paths not in the native encoding on suitable file systems).
### Value
These functions return a logical vector indicating which operation succeeded for each of the files attempted. Using a missing value for a file or path name will always be regarded as a failure.
If `showWarnings = TRUE`, `file.create` will give a warning for an unexpected failure.
### Case-insensitive file systems
Case-insensitive file systems are the norm on Windows and macOS, but can be found on all OSes (for example a FAT-formatted USB drive is probably case-insensitive).
These functions will most likely match existing files regardless of case on such file systems: however this is an OS function and it is possible that file names might be mapped to upper or lower case.
### Warning
Always check the return value of these functions when used in package code. This is especially important for `file.rename`, which has OS-specific restrictions (and note that the session temporary directory is commonly on a different file system from the working directory): it is only portable to use `file.rename` to change file name(s) within a single directory.
### Author(s)
Ross Ihaka, Brian Ripley
### See Also
`<file.info>`, `<file.access>`, `<file.path>`, `<file.show>`, `<list.files>`, `<unlink>`, `<basename>`, `<path.expand>`.
`[dir.create](files2)`.
`[Sys.glob](sys.glob)` to expand wildcards in file specifications.
`[file\_test](../../utils/html/filetest)`, `[Sys.readlink](sys.readlink)` (for ‘symlink’s).
<https://en.wikipedia.org/wiki/Hard_link> and <https://en.wikipedia.org/wiki/Symbolic_link> for the concepts of links and their limitations.
### Examples
```
cat("file A\n", file = "A")
cat("file B\n", file = "B")
file.append("A", "B")
file.create("A") # (trashing previous)
file.append("A", rep("B", 10))
if(interactive()) file.show("A") # -> the 10 lines from 'B'
file.copy("A", "C")
dir.create("tmp")
file.copy(c("A", "B"), "tmp")
list.files("tmp") # -> "A" and "B"
setwd("tmp")
file.remove("A") # the tmp/A file
file.symlink(file.path("..", c("A", "B")), ".")
# |--> (TRUE,FALSE) : ok for A but not B as it exists already
setwd("..")
unlink("tmp", recursive = TRUE)
file.remove("A", "B", "C")
```
`format.pval` Format P Values
------------------------------
### Description
`format.pval` is intended for formatting p-values.
### Usage
```
format.pval(pv, digits = max(1, getOption("digits") - 2),
eps = .Machine$double.eps, na.form = "NA", ...)
```
### Arguments
| | |
| --- | --- |
| `pv` | a numeric vector. |
| `digits` | how many significant digits are to be used. |
| `eps` | a numerical tolerance: see ‘Details’. |
| `na.form` | character representation of `NA`s. |
| `...` | further arguments to be passed to `<format>` such as `nsmall`. |
### Details
`format.pval` is mainly an auxiliary function for `[print.summary.lm](../../stats/html/summary.lm)` etc., and does separate formatting for fixed, floating point and very small values; those less than `eps` are formatted as `"< [eps]"` (where ‘[eps]’ stands for `format(eps, digits)`).
### Value
A character vector.
### Examples
```
format.pval(c(stats::runif(5), pi^-100, NA))
format.pval(c(0.1, 0.0001, 1e-27))
```
`slotOp` Extract or Replace A Slot
-----------------------------------
### Description
Extract or replace the contents of a slot in a object with a formal (S4) class structure.
### Usage
```
object@name
object@name <- value
```
### Arguments
| | |
| --- | --- |
| `object` | An object from a formally defined (S4) class. |
| `name` | The character-string name of the slot, quoted or not. Must be the name of a slot in the definition of the class of `object`. |
| `value` | A replacement value for the slot, which must be from a class compatible with the class defined for this slot in the definition of the class of `object`. |
### Details
These operators support the formal classes of package methods, and are enabled only when package methods is loaded (as per default). See `[slot](../../methods/html/slot)` for further details, in particular for the differences between `slot()` and the `@` operator.
It is checked that `object` is an S4 object (see `[isS4](iss4)`), and it is an error to attempt to use `@` on any other object. (There is an exception for name `.Data` for internal use only.) The replacement operator checks that the slot already exists on the object (which it should if the object is really from the class it claims to be).
These are internal generic operators: see [InternalMethods](internalmethods).
### Value
The current contents of the slot.
### See Also
`[Extract](extract)`, `[slot](../../methods/html/slot)`
`subset` Subsetting Vectors, Matrices and Data Frames
------------------------------------------------------
### Description
Return subsets of vectors, matrices or data frames which meet conditions.
### Usage
```
subset(x, ...)
## Default S3 method:
subset(x, subset, ...)
## S3 method for class 'matrix'
subset(x, subset, select, drop = FALSE, ...)
## S3 method for class 'data.frame'
subset(x, subset, select, drop = FALSE, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | object to be subsetted. |
| `subset` | logical expression indicating elements or rows to keep: missing values are taken as false. |
| `select` | expression, indicating columns to select from a data frame. |
| `drop` | passed on to `[` indexing operator. |
| `...` | further arguments to be passed to or from other methods. |
### Details
This is a generic function, with methods supplied for matrices, data frames and vectors (including lists). Packages and users can add further methods.
For ordinary vectors, the result is simply `x[subset & !is.na(subset)]`.
For data frames, the `subset` argument works on the rows. Note that `subset` will be evaluated in the data frame, so columns can be referred to (by name) as variables in the expression (see the examples).
The `select` argument exists only for the methods for data frames and matrices. It works by first replacing column names in the selection expression with the corresponding column numbers in the data frame and then using the resulting integer vector to index the columns. This allows the use of the standard indexing conventions so that for example ranges of columns can be specified easily, or single columns can be dropped (see the examples).
The `drop` argument is passed on to the indexing method for matrices and data frames: note that the default for matrices is different from that for indexing.
Factors may have empty levels after subsetting; unused levels are not automatically removed. See `<droplevels>` for a way to drop all unused levels from a data frame.
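A minimal sketch of the unused-levels behaviour (the data frame here is made up for illustration):

```r
## Subsetting keeps all factor levels; droplevels() removes the unused ones.
df <- data.frame(f = factor(c("a", "b", "c")), x = 1:3)
sub2 <- subset(df, x < 3)
levels(sub2$f)               # "a" "b" "c" -- "c" retained though unused
levels(droplevels(sub2)$f)   # "a" "b"
```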
### Value
An object similar to `x` containing just the selected elements (for a vector), rows and columns (for a matrix or data frame), and so on.
### Warning
This is a convenience function intended for use interactively. For programming it is better to use the standard subsetting functions like `[[](extract)`, and in particular the non-standard evaluation of argument `subset` can have unanticipated consequences.
### Author(s)
Peter Dalgaard and Brian Ripley
### See Also
`[[](extract)`, `<transform>`, `<droplevels>`
### Examples
```
subset(airquality, Temp > 80, select = c(Ozone, Temp))
subset(airquality, Day == 1, select = -Temp)
subset(airquality, select = Ozone:Wind)
with(airquality, subset(Ozone, Temp > 80))
## sometimes requiring a logical 'subset' argument is a nuisance
nm <- rownames(state.x77)
start_with_M <- nm %in% grep("^M", nm, value = TRUE)
subset(state.x77, start_with_M, Illiteracy:Murder)
# but in recent versions of R this can simply be
subset(state.x77, grepl("^M", nm), Illiteracy:Murder)
```
`ns-topenv` Top Level Environment
----------------------------------
### Description
Finding the top level `<environment>` from an environment `envir` and its enclosing environments.
### Usage
```
topenv(envir = parent.frame(),
matchThisEnv = getOption("topLevelEnvironment"))
```
### Arguments
| | |
| --- | --- |
| `envir` | environment. |
| `matchThisEnv` | return this environment, if it matches before any other criterion is satisfied. The default, the option `topLevelEnvironment`, is set by `<sys.source>`, which treats a specific environment as the top level environment. Supplying the argument as `NULL` or `emptyenv()` means it will never match. |
### Details
`topenv` returns the first top level `<environment>` found when searching `envir` and its enclosing environments. If no top level environment is found, `[.GlobalEnv](environment)` is returned. An environment is considered top level if it is the internal environment of a namespace, a package environment in the `<search>` path, or `[.GlobalEnv](environment)` .
### See Also
`<environment>`, notably `parent.env()` on “enclosing environments”; `[loadNamespace](ns-load)` for more on namespaces.
### Examples
```
topenv(.GlobalEnv)
topenv(new.env()) # also global env
topenv(environment(ls))# namespace:base
topenv(environment(lm))# namespace:stats
```
`on.exit` Function Exit Code
-----------------------------
### Description
`on.exit` records the expression given as its argument as needing to be executed when the current function exits (either naturally or as the result of an error). This is useful for resetting graphical parameters or performing other cleanup actions.
If no expression is provided, i.e., the call is `on.exit()`, then the current `on.exit` code is removed.
### Usage
```
on.exit(expr = NULL, add = FALSE, after = TRUE)
```
### Arguments
| | |
| --- | --- |
| `expr` | an expression to be executed. |
| `add` | if TRUE, add `expr` to be executed after any previously set expressions (or before if `after` is FALSE); otherwise (the default) `expr` will overwrite any previously set expressions. |
| `after` | if `add` is TRUE and `after` is FALSE, then `expr` will be added on top of the expressions that were already registered. The resulting last in first out order is useful for freeing or closing resources in reverse order. |
### Details
The `expr` argument passed to `on.exit` is recorded without evaluation. If it is not subsequently removed/replaced by another `on.exit` call in the same function, it is evaluated in the evaluation frame of the function when it exits (including during standard error handling). Thus any functions or variables in the expression will be looked for in the function and its environment at the time of exit: to capture the current value in `expr` use `<substitute>` or similar.
If multiple `on.exit` expressions are set using `add = TRUE` then all expressions will be run even if one signals an error.
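The combination of `add = TRUE` and `after = FALSE` gives the last-in-first-out cleanup order described above; a minimal sketch (the resources here are stand-ins printed with `cat`):

```r
## Each on.exit(..., add = TRUE, after = FALSE) is run *before*
## previously registered expressions, i.e. in reverse registration order.
f <- function() {
  on.exit(cat("close connection\n"))
  on.exit(cat("release lock\n"), add = TRUE, after = FALSE)
  cat("body\n")
}
f()
# body
# release lock
# close connection
```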
This is a ‘special’ <primitive> function: it only evaluates the arguments `add` and `after`.
### Value
Invisible `NULL`.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[sys.on.exit](sys.parent)` which returns the expression stored for use by `on.exit()` in the function in which `sys.on.exit()` is evaluated.
### Examples
```
require(graphics)
opar <- par(mai = c(1,1,1,1))
on.exit(par(opar))
```
`abbreviate` Abbreviate Strings
--------------------------------
### Description
Abbreviate strings to at least `minlength` characters, such that they remain *unique* (if they were), unless `strict = TRUE`.
### Usage
```
abbreviate(names.arg, minlength = 4, use.classes = TRUE,
dot = FALSE, strict = FALSE,
method = c("left.kept", "both.sides"), named = TRUE)
```
### Arguments
| | |
| --- | --- |
| `names.arg` | a character vector of names to be abbreviated, or an object to be coerced to a character vector by `[as.character](character)`. |
| `minlength` | the minimum length of the abbreviations. |
| `use.classes` | logical: should lowercase characters be removed first? |
| `dot` | logical: should a dot (`"."`) be appended? |
| `strict` | logical: should `minlength` be observed strictly? Note that setting `strict = TRUE` may return *non*-unique strings. |
| `method` | a character string specifying the method used with default `"left.kept"`, see ‘Details’ below. Partial matches allowed. |
| `named` | logical: should the original strings be attached as `names` of the result? |
### Details
The default algorithm (`method = "left.kept"`) used is similar to that of S. For a single string it works as follows. First spaces at the ends of the string are stripped. Then (if necessary) any other spaces are stripped. Next, lower case vowels are removed followed by lower case consonants. Finally if the abbreviation is still longer than `minlength` upper case letters and symbols are stripped.
Characters are always stripped from the end of the strings first. If an element of `names.arg` contains more than one word (words are separated by spaces) then at least one letter from each word will be retained.
Missing (`NA`) values are unaltered.
If `use.classes` is `FALSE` then the only distinction is to be between letters and space.
### Value
A character vector containing abbreviations for the character strings in its first argument. Duplicates in the original `names.arg` will be given identical abbreviations. If any non-duplicated elements have the same `minlength` abbreviations then, if `method = "both.sides"`, the basic internal `abbreviate()` algorithm is applied to the characterwise *reversed* strings; if there are still duplicated abbreviations and if `strict = FALSE` as by default, `minlength` is incremented by one and new abbreviations are found for those elements only. This process is repeated until all unique elements of `names.arg` have unique abbreviations.
If `names` is true, the character version of `names.arg` is attached to the returned value as a `<names>` attribute: no other attributes are retained.
If an input element contains non-ASCII characters, the corresponding value will be in UTF-8 and marked as such (see `[Encoding](encoding)`).
### Warning
If `use.classes` is true (the default), this is really only suitable for English, and prior to **R** 3.3.0 did not work correctly with non-ASCII characters in multibyte locales. It will warn if used with non-ASCII characters (and required to reduce the length). It is unlikely to work well with inputs not in the Unicode Basic Multilingual Plane nor on (rare) platforms where wide characters are not encoded in Unicode.
As from **R** 3.3.0 the concept of ‘vowel’ is extended from English vowels by including characters which are accented versions of lower-case English vowels (including ‘o with stroke’). Of course, there are languages (even Western European languages such as Welsh) with other vowels.
### See Also
`<substr>`.
### Examples
```
x <- c("abcd", "efgh", "abce")
abbreviate(x, 2)
abbreviate(x, 2, strict = TRUE) # >> 1st and 3rd are == "ab"
(st.abb <- abbreviate(state.name, 2))
stopifnot(identical(unname(st.abb),
abbreviate(state.name, 2, named=FALSE)))
table(nchar(st.abb)) # out of 50, 3 need 4 letters :
as <- abbreviate(state.name, 3, strict = TRUE)
as[which(as == "Mss")]
## and without distinguishing vowels:
st.abb2 <- abbreviate(state.name, 2, FALSE)
cbind(st.abb, st.abb2)[st.abb2 != st.abb, ]
## method = "both.sides" helps: no 4-letters, and only 4 3-letters:
st.ab2 <- abbreviate(state.name, 2, method = "both")
table(nchar(st.ab2))
## Compare the two methods:
cbind(st.abb, st.ab2)
```
`assign` Assign a Value to a Name
----------------------------------
### Description
Assign a value to a name in an environment.
### Usage
```
assign(x, value, pos = -1, envir = as.environment(pos),
inherits = FALSE, immediate = TRUE)
```
### Arguments
| | |
| --- | --- |
| `x` | a variable name, given as a character string. No coercion is done, and the first element of a character vector of length greater than one will be used, with a warning. |
| `value` | a value to be assigned to `x`. |
| `pos` | where to do the assignment. By default, assigns into the current environment. See ‘Details’ for other possibilities. |
| `envir` | the `<environment>` to use. See ‘Details’. |
| `inherits` | should the enclosing frames of the environment be inspected? |
| `immediate` | an ignored compatibility feature. |
### Details
There are no restrictions on the name given as `x`: it can be a non-syntactic name (see `<make.names>`).
The `pos` argument can specify the environment in which to assign the object in any of several ways: as `-1` (the default), as a positive integer (the position in the `<search>` list); as the character string name of an element in the search list; or as an `<environment>` (including using `[sys.frame](sys.parent)` to access the currently active function calls). The `envir` argument is an alternative way to specify an environment, but is primarily for back compatibility.
`assign` does not dispatch assignment methods, so it cannot be used to set elements of vectors, names, attributes, etc.
Note that assignment to an attached list or data frame changes the attached copy and not the original object: see `<attach>` and `<with>`.
### Value
This function is invoked for its side effect, which is assigning `value` to the variable `x`. If no `envir` is specified, then the assignment takes place in the currently active environment.
If `inherits` is `TRUE`, enclosing environments of the supplied environment are searched until the variable `x` is encountered. The value is then assigned in the environment in which the variable is encountered (provided that the binding is not locked: see `[lockBinding](bindenv)`: if it is, an error is signaled). If the symbol is not encountered then assignment takes place in the user's workspace (the global environment).
If `inherits` is `FALSE`, assignment takes place in the initial frame of `envir`, unless an existing binding is locked or there is no existing binding and the environment is locked (when an error is signaled).
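A minimal sketch of the `inherits = TRUE` search (the environments here are constructed just for illustration):

```r
## With inherits = TRUE the assignment lands where the binding is found,
## which may be an enclosing environment, not 'envir' itself.
e1 <- new.env()
e2 <- new.env(parent = e1)
assign("z", 1, envir = e1)
assign("z", 2, envir = e2, inherits = TRUE)   # 'z' found in e1, assigned there
get("z", envir = e1)                          # 2
exists("z", envir = e2, inherits = FALSE)     # FALSE: e2 itself has no 'z'
```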
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[<-](assignops)`, `<get>`, the inverse of `assign()`, `<exists>`, `<environment>`.
### Examples
```
for(i in 1:6) { #-- Create objects 'r.1', 'r.2', ... 'r.6' --
nam <- paste("r", i, sep = ".")
assign(nam, 1:i)
}
ls(pattern = "^r..$")
##-- Global assignment within a function:
myf <- function(x) {
innerf <- function(x) assign("Global.res", x^2, envir = .GlobalEnv)
innerf(x+1)
}
myf(3)
Global.res # 16
a <- 1:4
assign("a[1]", 2)
a[1] == 2 # FALSE
get("a[1]") == 2 # TRUE
```
`identity` Identity Function
-----------------------------
### Description
A trivial identity function returning its argument.
### Usage
```
identity(x)
```
### Arguments
| | |
| --- | --- |
| `x` | an **R** object. |
### See Also
`<diag>` creates diagonal matrices, including identity ones.
`zpackages` Listing of Packages
--------------------------------
### Description
`.packages` returns information about package availability.
### Usage
```
.packages(all.available = FALSE, lib.loc = NULL)
```
### Arguments
| | |
| --- | --- |
| `all.available` | logical; if `TRUE` return a character vector of all available packages in `lib.loc`. |
| `lib.loc` | a character vector describing the location of **R** library trees to search through, or `NULL`. The default value of `NULL` corresponds to `[.libPaths](libpaths)()`. |
### Details
`.packages()` returns the names of the currently attached packages *invisibly* whereas `.packages(all.available = TRUE)` gives (visibly) *all* packages available in the library location path `lib.loc`.
For a package to be regarded as being ‘available’ it must have valid metadata (and hence be an installed package). However, this will report a package as available if the metadata does not match the directory name: use `<find.package>` to confirm that the metadata match or `[installed.packages](../../utils/html/installed.packages)` for a much slower but more comprehensive check of ‘available’ packages.
### Value
A character vector of package base names, invisible unless `all.available = TRUE`.
### Note
`.packages(all.available = TRUE)` is not a way to find out if a small number of packages are available for use: not only is it expensive when thousands of packages are installed, it is an incomplete test. See the help for `<find.package>` for why `[require](library)` should be used.
### Author(s)
R core; Guido Masarotto for the `all.available = TRUE` part of `.packages`.
### See Also
`<library>`, `[.libPaths](libpaths)`, `[installed.packages](../../utils/html/installed.packages)`.
### Examples
```
(.packages()) # maybe just "base"
.packages(all.available = TRUE) # return all available as character vector
require(splines)
(.packages()) # "splines", too
detach("package:splines")
```
`Sys.localeconv` Find Details of the Numerical and Monetary Representations in the Current Locale
--------------------------------------------------------------------------------------------------
### Description
Get details of the numerical and monetary representations in the current locale.
### Usage
```
Sys.localeconv()
```
### Details
Normally **R** is run without looking at the value of `LC_NUMERIC`, so the decimal point remains '`.`'. The first three of these components will therefore only be useful if you have set the locale category `LC_NUMERIC` using `Sys.setlocale` in the current **R** session (in which case **R** may not work correctly).
The monetary components will only be set to non-default values if the `LC_MONETARY` category is set. It often is not set: see the ‘Examples’ section for how to trigger setting it.
### Value
A character vector with 18 named components. See your ISO C documentation for details of the meaning.
It is possible to compile **R** without support for locales, in which case the value will be `NULL`.
### See Also
`[Sys.setlocale](locales)` for ways to set locales.
### Examples
```
Sys.localeconv()
## The results in the C locale are
## decimal_point thousands_sep grouping int_curr_symbol
## "." "" "" ""
## currency_symbol mon_decimal_point mon_thousands_sep mon_grouping
## "" "" "" ""
## positive_sign negative_sign int_frac_digits frac_digits
## "" "" "127" "127"
## p_cs_precedes p_sep_by_space n_cs_precedes n_sep_by_space
## "127" "127" "127" "127"
## p_sign_posn n_sign_posn
## "127" "127"
## Now try your default locale (which might be "C").
old <- Sys.getlocale()
## The category may not be set:
## the following may do so, but it might not be supported.
Sys.setlocale("LC_MONETARY", locale = "")
Sys.localeconv()
## or set an appropriate value yourself, e.g.
Sys.setlocale("LC_MONETARY", "de_AT")
Sys.localeconv()
Sys.setlocale(locale = old)
## Not run: read.table("foo", dec=Sys.localeconv()["decimal_point"])
```
`pretty` Pretty Breakpoints
----------------------------
### Description
Compute a sequence of about `n+1` equally spaced ‘round’ values which cover the range of the values in `x`. The values are chosen so that they are 1, 2 or 5 times a power of 10.
### Usage
```
pretty(x, ...)
## Default S3 method:
pretty(x, n = 5, min.n = n %/% 3, shrink.sml = 0.75,
high.u.bias = 1.5, u5.bias = .5 + 1.5*high.u.bias,
eps.correct = 0, ...)
```
### Arguments
| | |
| --- | --- |
| `x` | an object coercible to numeric by `[as.numeric](numeric)`. |
| `n` | integer giving the *desired* number of intervals. Non-integer values are rounded down. |
| `min.n` | nonnegative integer giving the *minimal* number of intervals. If `min.n == 0`, `pretty(.)` may return a single value. |
| `shrink.sml` | positive number, a factor (smaller than one) by which a default scale is shrunk in the case when `range(x)` is very small (usually 0). |
| `high.u.bias` | non-negative numeric, typically *> 1*. The interval unit is determined as {1,2,5,10} times `b`, a power of 10. Larger `high.u.bias` values favor larger units. |
| `u5.bias` | non-negative numeric multiplier favoring factor 5 over 2. Default and ‘optimal’: `u5.bias = .5 + 1.5*high.u.bias`. |
| `eps.correct` | integer code, one of {0, 1, 2}. If non-0, an *epsilon correction* is made at the boundaries such that the result boundaries will be outside `range(x)`; in the *small* case, the correction is only done if `eps.correct >= 2`. |
| `...` | further arguments for methods. |
### Details
`pretty` ignores non-finite values in `x`.
Let `d <- max(x) - min(x)` *≥ 0*. If `d` is not (very close) to 0, we let `c <- d/n`, otherwise more or less `c <- max(abs(range(x)))*shrink.sml / min.n`. Then, the *10 base* `b` is *10^(floor(log10(c)))* such that *b ≤ c < 10b*.
Now determine the basic *unit* *u* as one of *{1,2,5,10} b*, depending on *c/b in [1,10)* and the two ‘*bias*’ coefficients, *h =*`high.u.bias` and *f =*`u5.bias`.
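The quantities above can be traced by hand for a simple input; this sketch is purely illustrative and mirrors the defaults (`n = 5`):

```r
## Walk through d, c and b for x = 1:15 with the default n = 5.
x <- 1:15; n <- 5
d <- max(x) - min(x)      # 14
c <- d / n                # 2.8
b <- 10^floor(log10(c))   # 1, since 1 <= 2.8 < 10
c / b                     # 2.8, which with the default biases picks unit 2*b
pretty(x)                 # 0 2 4 6 8 10 12 14 16
```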
.........
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### See Also
`[axTicks](../../graphics/html/axticks)` for the computation of pretty axis tick locations in plots, particularly on the log scale.
### Examples
```
pretty(1:15) # 0 2 4 6 8 10 12 14 16
pretty(1:15, high.u.bias = 2) # 0 5 10 15
pretty(1:15, n = 4) # 0 5 10 15
pretty(1:15 * 2) # 0 5 10 15 20 25 30
pretty(1:20) # 0 5 10 15 20
pretty(1:20, n = 2) # 0 10 20
pretty(1:20, n = 10) # 0 2 4 ... 20
for(k in 5:11) {
cat("k=", k, ": "); print(diff(range(pretty(100 + c(0, pi*10^-k)))))}
##-- more bizarre, when min(x) == max(x):
pretty(pi)
add.names <- function(v) { names(v) <- paste(v); v}
utils::str(lapply(add.names(-10:20), pretty))
utils::str(lapply(add.names(0:20), pretty, min.n = 0))
sapply( add.names(0:20), pretty, min.n = 4)
pretty(1.234e100)
pretty(1001.1001)
pretty(1001.1001, shrink.sml = 0.2)
for(k in -7:3)
cat("shrink=", formatC(2^k, width = 9),":",
formatC(pretty(1001.1001, shrink.sml = 2^k), width = 6),"\n")
```
`is.language` Is an Object a Language Object?
----------------------------------------------
### Description
`is.language` returns `TRUE` if `x` is a variable `[name](name)`, a `[call](call)`, or an `[expression](expression)`.
### Usage
```
is.language(x)
```
### Arguments
| | |
| --- | --- |
| `x` | object to be tested. |
### Note
A `name` is also known as ‘symbol’, from its type (`[typeof](typeof)`), see `[is.symbol](name)`.
If `typeof(x) == "language"`, then `is.language(x)` is always true, but the reverse does not hold, as expressions or names `y` also fulfill `is.language(y)`; see the examples.
This is a [primitive](primitive) function.
### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
### Examples
```
ll <- list(a = expression(x^2 - 2*x + 1), b = as.name("Jim"),
c = as.expression(exp(1)), d = call("sin", pi))
sapply(ll, typeof)
sapply(ll, mode)
stopifnot(sapply(ll, is.language))
```
r None
`iconv` Convert Character Vector between Encodings
---------------------------------------------------
### Description
This uses system facilities to convert a character vector between encodings: the ‘i’ stands for ‘internationalization’.
### Usage
```
iconv(x, from = "", to = "", sub = NA, mark = TRUE, toRaw = FALSE)
iconvlist()
```
### Arguments
| | |
| --- | --- |
| `x` | A character vector, or an object to be converted to a character vector by `[as.character](character)`, or a list with `NULL` and `raw` elements as returned by `iconv(toRaw = TRUE)`. |
| `from` | A character string describing the current encoding. |
| `to` | A character string describing the target encoding. |
| `sub` | character string. If not `NA` it is used to replace any non-convertible bytes in the input. (This would normally be a single character, but can be more.) If `"byte"`, the indication is `"<xx>"` with the hex code of the byte. If `"Unicode"` and converting from UTF-8, the Unicode point in the form `"<U+xxxx>"`. |
| `mark` | logical, for expert use. Should encodings be marked? |
| `toRaw` | logical. Should a list of raw vectors be returned rather than a character vector? |
### Details
The names of encodings and which ones are available are platform-dependent. All **R** platforms support `""` (for the encoding of the current locale), `"latin1"` and `"UTF-8"`. Generally case is ignored when specifying an encoding.
On most platforms `iconvlist` provides an alphabetical list of the supported encodings. On others, the information is on the man page for `iconv(5)` or elsewhere in the man pages (but beware that the system command `iconv` may not support the same set of encodings as the C functions **R** calls). Unfortunately, the names are rarely supported across all platforms.
Elements of `x` which cannot be converted (perhaps because they are invalid or because they cannot be represented in the target encoding) will be returned as `NA` unless `sub` is specified.
Most versions of `iconv` will allow transliteration by appending //TRANSLIT to the `to` encoding: see the examples.
Encoding `"ASCII"` is accepted, and on most systems `"C"` and `"POSIX"` are synonyms for ASCII.
Any encoding bits (see `[Encoding](encoding)`) on elements of `x` are ignored: they will always be translated as if from encoding `from` even if declared otherwise. `[enc2native](encoding)` and `[enc2utf8](encoding)` provide alternatives which do take declared encodings into account.
Note that implementations of `iconv` typically do not do much validity checking and will often mis-convert inputs which are invalid in encoding `from`.
If `sub = "Unicode"` is used for a non-UTF-8 input it is the same as `sub = "byte"`.
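As a small illustration of that last point (the substitution itself is done by **R**, so the marker text is reliable, although which characters fail to convert can vary by platform):

```r
x <- "fa\xE7ile"                 # latin1 bytes for "façile"
Encoding(x) <- "latin1"
iconv(x, "latin1", "ASCII", sub = "byte")     # "fa<e7>ile"
iconv(x, "latin1", "ASCII", sub = "Unicode")  # also "fa<e7>ile":
                                              # the input is not UTF-8
```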
### Value
If `toRaw = FALSE` (the default), the value is a character vector of the same length and the same attributes as `x` (after conversion to a character vector).
If `mark = TRUE` (the default) the elements of the result have a declared encoding if `to` is `"latin1"` or `"UTF-8"`, or if `to = ""` and the current locale's encoding is detected as Latin-1 (or its superset CP1252 on Windows) or UTF-8.
If `toRaw = TRUE`, the value is a list of the same length and the same attributes as `x` whose elements are either `NULL` (if conversion fails) or a raw vector.
For `iconvlist()`, a character vector (typically of a few hundred elements) of known encoding names.
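A small round trip through raw vectors, using only encodings that all **R** platforms support, shows how the `toRaw = TRUE` result can be fed back in as `x`:

```r
r <- iconv("abc", "UTF-8", "latin1", toRaw = TRUE)
str(r)                        # list of one raw vector: 61 62 63
iconv(r, "latin1", "UTF-8")   # back to "abc"
```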
### Implementation Details
There are three main implementations of `iconv` in use. Linux's C runtime glibc contains one. Several platforms supply GNU libiconv, including macOS, FreeBSD and Cygwin, in some cases with additional encodings. On Windows we use a version of Yukihiro Nakadaira's win\_iconv, which is based on Windows' codepages. (We have added many encoding names for compatibility with other systems.) All three have `iconvlist`, ignore case in encoding names and support //TRANSLIT (but with different results, and for win\_iconv currently a ‘best fit’ strategy is used except for `to = "ASCII"`).
Most commercial Unixes contain an implementation of `iconv` but none we have encountered have supported the encoding names we need: the ‘R Installation and Administration’ manual recommends installing GNU libiconv on Solaris and AIX, for example.
There are other implementations, e.g. NetBSD has used one from the Citrus project (which does not support //TRANSLIT) and there is an older FreeBSD port (libiconv is usually used there): it has not been reported whether or not these work with **R**.
Note that you cannot rely on invalid inputs being detected, especially for `to = "ASCII"` where some implementations allow 8-bit characters and pass them through unchanged or with transliteration.
Some of the implementations have interesting extra encodings: for example GNU libiconv allows `to = "C99"` to use \uxxxx escapes for non-ASCII characters.
### Byte Order Marks
These are most commonly known as ‘BOMs’.
Encodings using character units which are more than one byte in size can be written on a file in either big-endian or little-endian order: this applies most commonly to UCS-2, UTF-16 and UTF-32/UCS-4 encodings. Some systems will write the Unicode character `U+FEFF` at the beginning of a file in these encodings and perhaps also in UTF-8. In that usage the character is known as a BOM, and should be handled during input (see the ‘Encodings’ section under `[connection](connections)`: re-encoded connections have some special handling of BOMs). The rest of this section applies when this has not been done so `x` starts with a BOM.
Implementations will generally interpret a BOM for `from` given as one of `"UCS-2"`, `"UTF-16"` and `"UTF-32"`. Implementations differ in how they treat BOMs in `x` in other `from` encodings: they may be discarded, returned as character `U+FEFF` or regarded as invalid.
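A sketch of this in R, building the input bytes by hand (whether the BOM is honoured for other `from` encodings is, as noted, implementation-dependent, so no exact output is guaranteed):

```r
## "A" in UTF-16, preceded by a little-endian BOM (ff fe):
bom_a <- as.raw(c(0xff, 0xfe, 0x41, 0x00))
## With from = "UTF-16" the BOM is generally used to detect the byte
## order and is then discarded:
iconv(list(bom_a), "UTF-16", "UTF-8")   # typically "A"
```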
### Note
The only reasonably portable name for the ISO 8859-15 encoding, commonly known as ‘Latin 9’, is `"latin-9"`: some platforms support `"latin9"` but GNU libiconv does not.
Encoding names `"utf8"`, `"mac"` and `"macroman"` are not portable. `"utf8"` is converted to `"UTF-8"` for `from` and `to` by `iconv`, but not for e.g. `fileEncoding` arguments. `"macintosh"` is the official (and most widely supported) name for ‘Mac Roman’ (<https://en.wikipedia.org/wiki/Mac_OS_Roman>).
### See Also
`[localeToCharset](../../utils/html/localetocharset)`, `[file](connections)`.
### Examples
```
## In principle, as not all systems have iconvlist
try(utils::head(iconvlist(), n = 50))
## Not run:
## convert from Latin-2 to UTF-8: two of the glibc iconv variants.
iconv(x, "ISO_8859-2", "UTF-8")
iconv(x, "LATIN2", "UTF-8")
## End(Not run)
## Both x below are in latin1 and will only display correctly in a
## locale that can represent and display latin1.
x <- "fa\xE7ile"
Encoding(x) <- "latin1"
x
charToRaw(xx <- iconv(x, "latin1", "UTF-8"))
xx
iconv(x, "latin1", "ASCII") # NA
iconv(x, "latin1", "ASCII", "?") # "fa?ile"
iconv(x, "latin1", "ASCII", "") # "faile"
iconv(x, "latin1", "ASCII", "byte") # "fa<e7>ile"
iconv(xx, "UTF-8", "ASCII", "Unicode") # "fa<U+00E7>ile"
## Extracts from old R help files (they are nowadays in UTF-8)
x <- c("Ekstr\xf8m", "J\xf6reskog", "bi\xdfchen Z\xfcrcher")
Encoding(x) <- "latin1"
x
try(iconv(x, "latin1", "ASCII//TRANSLIT")) # platform-dependent
iconv(x, "latin1", "ASCII", sub = "byte")
## and for Windows' 'Unicode'
str(xx <- iconv(x, "latin1", "UTF-16LE", toRaw = TRUE))
iconv(xx, "UTF-16LE", "UTF-8")
```