r None `all.equal` Test if Two Objects are (Nearly) Equal --------------------------------------------------- ### Description `all.equal(x, y)` is a utility to compare **R** objects `x` and `y` testing ‘near equality’. If they are different, comparison is still made to some extent, and a report of the differences is returned. Do not use `all.equal` directly in `if` expressions—either use `isTRUE(all.equal(....))` or `<identical>` if appropriate. ### Usage ``` all.equal(target, current, ...) ## S3 method for class 'numeric' all.equal(target, current, tolerance = sqrt(.Machine$double.eps), scale = NULL, countEQ = FALSE, formatFUN = function(err, what) format(err), ..., check.attributes = TRUE) ## S3 method for class 'list' all.equal(target, current, ..., check.attributes = TRUE, use.names = TRUE) ## S3 method for class 'environment' all.equal(target, current, all.names = TRUE, evaluate = TRUE, ...) ## S3 method for class 'function' all.equal(target, current, check.environment=TRUE, ...) ## S3 method for class 'POSIXt' all.equal(target, current, ..., tolerance = 1e-3, scale, check.tzone = TRUE) attr.all.equal(target, current, ..., check.attributes = TRUE, check.names = TRUE) ``` ### Arguments | | | | --- | --- | | `target` | **R** object. | | `current` | other **R** object, to be compared with `target`. | | `...` | further arguments for different methods, notably the following two, for numerical comparison: | | `tolerance` | numeric *≥* 0. Differences smaller than `tolerance` are not reported. The default value is close to `1.5e-8`. | | `scale` | `NULL` or numeric > 0, typically of length 1 or `length(target)`. See ‘Details’. | | `countEQ` | logical indicating if the `target == current` cases should be counted when computing the mean (absolute or relative) differences. The default, `FALSE` may seem misleading in cases where `target` and `current` only differ in a few places; see the extensive example. | | `formatFUN` | a `<function>` of two arguments, `err`, the relative, absolute or scaled error, and `what`, a character string indicating the *kind* of error; may be used, e.g., to format relative and absolute errors differently. | | `check.attributes` | logical indicating if the `<attributes>` of `target` and `current` (other than the names) should be compared. | | `use.names` | logical indicating if `<list>` comparison should report differing components by name (if matching) instead of integer index. Note that this comes after `...` and so must be specified by its full name. | | `all.names` | logical passed to `<ls>` indicating if “hidden” objects should also be considered in the environments. | | `evaluate` | for the `environment` method: `<logical>` indicating if “promises should be forced”, i.e., typically formal function arguments be evaluated for comparison. If false, only the `<names>` of the objects in the two environments are checked for equality. | | `check.environment` | logical requiring that the `<environment>()`s of functions should be compared, too. You may need to set `check.environment=FALSE` in unexpected cases, such as when comparing two `[nls](../../stats/html/nls)()` fits. | | `check.tzone` | logical indicating if the `"tzone"` attributes of `target` and `current` should be compared. | | `check.names` | logical indicating if the `<names>(.)` of `target` and `current` should be compared. | ### Details `all.equal` is a generic function, dispatching methods on the `target` argument. 
To see the available methods, use `[methods](../../utils/html/methods)("all.equal")`, but note that the default method also does some dispatching, e.g. using the raw method for logical targets. Remember that arguments which follow `...` must be specified by (unabbreviated) name. It is inadvisable to pass unnamed arguments in `...` as these will match different arguments in different methods. Numerical comparisons for `scale = NULL` (the default) are typically on *relative difference* scale unless the target values are close to zero: First, the mean absolute difference of the two numerical vectors is computed. If this is smaller than `tolerance` or not finite, absolute differences are used, otherwise relative differences scaled by the mean absolute `target` value. Note that these comparisons are computed only for those vector elements where `target` is not `[NA](na)` and differs from `current`. If `countEQ` is true, the equal and `NA` cases are *counted* in determining “sample” size. If `scale` is numeric (and positive), absolute comparisons are made after scaling (dividing) by `scale`. For complex `target`, the modulus (`[Mod](complex)`) of the difference is used: `all.equal.numeric` is called so arguments `tolerance` and `scale` are available. The `<list>` method compares components of `target` and `current` recursively, passing all other arguments, as long as both are “list-like”, i.e., fulfill either `[is.vector](vector)` or `[is.list](list)`. The `<environment>` method works via the `list` method, and is also used for reference classes (unless a specific `all.equal` method is defined). The method for date-time objects uses `all.equal.numeric` to compare times (in `"[POSIXct](datetimeclasses)"` representation) with a default `tolerance` of 0.001 seconds, ignoring `scale`. A time zone mismatch between `target` and `current` is reported unless `check.tzone = FALSE`. `attr.all.equal` is used for comparing `<attributes>`, returning `NULL` or a `character` vector. ### Value Either `TRUE` (`NULL` for `attr.all.equal`) or a vector of `<mode>` `"character"` describing the differences between `target` and `current`. ### References Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer (for `=`). ### See Also `<identical>`, `[isTRUE](logic)`, `[==](comparison)`, and `<all>` for exact equality testing. ### Examples ``` all.equal(pi, 355/113) # not precise enough (default tol) > relative error d45 <- pi*(1/4 + 1:10) stopifnot( all.equal(tan(d45), rep(1, 10))) # TRUE, but all (tan(d45) == rep(1, 10)) # FALSE, since not exactly all.equal(tan(d45), rep(1, 10), tolerance = 0) # to see difference ## advanced: equality of environments ae <- all.equal(as.environment("package:stats"), asNamespace("stats")) stopifnot(is.character(ae), length(ae) > 10, ## were incorrectly "considered equal" in R <= 3.1.1 all.equal(asNamespace("stats"), asNamespace("stats"))) ## A situation where 'countEQ = TRUE' makes sense: x1 <- x2 <- (1:100)/10; x2[2] <- 1.1*x1[2] ## 99 out of 100 pairs (x1[i], x2[i]) are equal: plot(x1,x2, main = "all.equal.numeric() -- not counting equal parts") all.equal(x1,x2) ## "Mean relative difference: 0.1" mtext(paste("all.equal(x1,x2) :", all.equal(x1,x2)), line= -2) ##' extract the 'Mean relative difference' as number: all.eqNum <- function(...) 
as.numeric(sub(".*:", '', all.equal(...))) set.seed(17) ## When x2 is jittered, typically all pairs (x1[i],x2[i]) do differ: summary(r <- replicate(100, all.eqNum(x1, x2*(1+rnorm(x1)*1e-7)))) mtext(paste("mean(all.equal(x1, x2*(1 + eps_k))) {100 x} Mean rel.diff.=", signif(mean(r), 3)), line = -4, adj=0) ## With argument countEQ=TRUE, get "the same" (w/o need for jittering): mtext(paste("all.equal(x1,x2, countEQ=TRUE) :", signif(all.eqNum(x1,x2, countEQ=TRUE), 3)), line= -6, col=2) ## comparison of date-time objects now <- Sys.time() stopifnot( all.equal(now, now + 1e-4) # TRUE (default tolerance = 0.001 seconds) ) all.equal(now, now + 0.2) all.equal(now, as.POSIXlt(now, "UTC")) stopifnot( all.equal(now, as.POSIXlt(now, "UTC"), check.tzone = FALSE) # TRUE ) ``` r None `print` Print Values --------------------- ### Description `print` prints its argument and returns it *invisibly* (via `<invisible>(x)`). It is a generic function which means that new printing methods can be easily added for new `<class>`es. ### Usage ``` print(x, ...) ## S3 method for class 'factor' print(x, quote = FALSE, max.levels = NULL, width = getOption("width"), ...) ## S3 method for class 'table' print(x, digits = getOption("digits"), quote = FALSE, na.print = "", zero.print = "0", right = is.numeric(x) || is.complex(x), justify = "none", ...) ## S3 method for class 'function' print(x, useSource = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object used to select a method. | | `...` | further arguments passed to or from other methods. | | `quote` | logical, indicating whether or not strings should be printed with surrounding quotes. | | `max.levels` | integer, indicating how many levels should be printed for a factor; if `0`, no extra "Levels" line will be printed. The default, `NULL`, entails choosing `max.levels` such that the levels print on one line of width `width`. | | `width` | only used when `max.levels` is NULL, see above. | | `digits` | minimal number of *significant* digits, see `<print.default>`. | | `na.print` | character string (or `NULL`) indicating `[NA](na)` values in printed output, see `<print.default>`. | | `zero.print` | character specifying how zeros (`0`) should be printed; for sparse tables, using `"."` can produce more readable results, similar to printing sparse matrices in [Matrix](https://CRAN.R-project.org/package=Matrix). | | `right` | logical, indicating whether or not strings should be right aligned. | | `justify` | character indicating if strings should left- or right-justified or left alone, passed to `<format>`. | | `useSource` | logical indicating if internally stored source should be used for printing when present, e.g., if `<options>(keep.source = TRUE)` has been in use. | ### Details The default method, `<print.default>` has its own help page. Use `[methods](../../utils/html/methods)("print")` to get all the methods for the `print` generic. `print.factor` allows some customization and is used for printing `[ordered](factor)` factors as well. `print.table` for printing `<table>`s allows other customization. As of R 3.0.0, it only prints a description in case of a table with 0-extents (this can happen if a classifier has no valid data). See `<noquote>` as an example of a class whose main purpose is a specific `print` method. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also The default method `<print.default>`, and help for the methods above; further `<options>`, `<noquote>`. 
For more customizable (but cumbersome) printing, see `<cat>`, `<format>` or also `<write>`. For a simple prototypical print method, see `[.print.via.format](../../tools/html/print.via.format)` in package tools. ### Examples ``` require(stats) ts(1:20) #-- print is the "Default function" --> print.ts(.) is called for(i in 1:3) print(1:i) ## Printing of factors attenu$station ## 117 levels -> 'max.levels' depending on width ## ordered factors: levels "l1 < l2 < .." esoph$agegp[1:12] esoph$alcgp[1:12] ## Printing of sparse (contingency) tables set.seed(521) t1 <- round(abs(rt(200, df = 1.8))) t2 <- round(abs(rt(200, df = 1.4))) table(t1, t2) # simple print(table(t1, t2), zero.print = ".") # nicer to read ## same for non-integer "table": T <- table(t2,t1) T <- T * (1+round(rlnorm(length(T)))/4) print(T, zero.print = ".") # quite nicer, print.table(T[,2:8] * 1e9, digits=3, zero.print = ".") ## still slightly inferior to Matrix::Matrix(T) for larger T ## Corner cases with empty extents: table(1, NA) # < table of extent 1 x 0 > ``` r None `typeof` The Type of an Object ------------------------------- ### Description `typeof` determines the (**R** internal) type or storage mode of any object ### Usage ``` typeof(x) ``` ### Arguments | | | | --- | --- | | `x` | any **R** object. | ### Value A character string. The possible values are listed in the structure `TypeTable` in ‘src/main/util.c’. Current values are the vector types `"logical"`, `"integer"`, `"double"`, `"complex"`, `"character"`, `"raw"` and `"list"`, `"NULL"`, `"closure"` (function), `"special"` and `"builtin"` (basic functions and operators), `"environment"`, `"S4"` (some S4 objects) and others that are unlikely to be seen at user level (`"symbol"`, `"pairlist"`, `"promise"`, `"language"`, `"char"`, `"..."`, `"any"`, `"expression"`, `"externalptr"`, `"bytecode"` and `"weakref"`). ### See Also `<mode>`, `[storage.mode](mode)`. `[isS4](iss4)` to determine if an object has an S4 class. ### Examples ``` typeof(2) mode(2) ``` r None `memCompress` In-memory Compression and Decompression ------------------------------------------------------ ### Description In-memory compression or decompression for raw vectors. ### Usage ``` memCompress(from, type = c("gzip", "bzip2", "xz", "none")) memDecompress(from, type = c("unknown", "gzip", "bzip2", "xz", "none"), asChar = FALSE) ``` ### Arguments | | | | --- | --- | | `from` | A raw vector. For `memCompress` a character vector will be converted to a raw vector with character strings separated by `"\n"`. Types `"gzip"` and `"xz"` support long raw vectors as from **R** 4.0.0. | | `type` | character string, the type of compression. May be abbreviated to a single letter, defaults to the first of the alternatives. | | `asChar` | logical: should the result be converted to a character string? NB: character strings have a limit of *2^31 - 1* bytes, so raw vectors should be used for large inputs. | ### Details `type = "none"` passes the input through unchanged, but may be useful if `type` is a variable. `type = "unknown"` attempts to detect the type of compression applied (if any): this will always succeed for `bzip2` compression, and will succeed for other forms if there is a suitable header. It will auto-detect the ‘magic’ header (`"\x1f\x8b"`) added to files by the `gzip` program (and to files written by `[gzfile](connections)`), but `memCompress` does not add such a header. 
(It supports RFC 1950 format, sometimes known as ‘zlib’ format, for compression and decompression and RFC 1952 for decompression only.) `gzip` compression uses whatever is the default compression level of the underlying library (usually `6`). `bzip2` compression always adds a header (`"BZh"`). The underlying library only supports in-memory (de)compression of up to *2^31 - 1* elements. Compression is equivalent to `bzip2 -9` (the default). Compressing with `type = "xz"` is equivalent to compressing a file with `xz -9e` (including adding the ‘magic’ header): decompression should cope with the contents of any file compressed by `xz` version 4.999 and later, as well as by some versions of `lzma`. There are other versions, in particular ‘raw’ streams, that are not currently handled. All the types of compression can expand the input: for `"gzip"` and `"bzip2"` the maximum expansion is known and so `memCompress` can always allocate sufficient space. For `"xz"` it is possible (but extremely unlikely) that compression will fail if the output would have been too large. ### Value A raw vector or a character string (if `asChar = TRUE`). ### See Also <connections>. `[extSoftVersion](extsoftversion)` for the versions of the `zlib`, `bzip2` and `xz` libraries in use. <https://en.wikipedia.org/wiki/Data_compression> for background on data compression, <https://zlib.net/>, <https://en.wikipedia.org/wiki/Gzip>, <http://www.bzip.org/>, <https://en.wikipedia.org/wiki/Bzip2>, <https://tukaani.org/xz/> and <https://en.wikipedia.org/wiki/Xz> for references about the particular schemes used. ### Examples ``` txt <- readLines(file.path(R.home("doc"), "COPYING")) sum(nchar(txt)) txt.gz <- memCompress(txt, "g") length(txt.gz) txt2 <- strsplit(memDecompress(txt.gz, "g", asChar = TRUE), "\n")[[1]] stopifnot(identical(txt, txt2)) txt.bz2 <- memCompress(txt, "b") length(txt.bz2) ## can auto-detect bzip2: txt3 <- strsplit(memDecompress(txt.bz2, asChar = TRUE), "\n")[[1]] stopifnot(identical(txt, txt3)) ## xz compression is only worthwhile for large objects txt.xz <- memCompress(txt, "x") length(txt.xz) txt3 <- strsplit(memDecompress(txt.xz, asChar = TRUE), "\n")[[1]] stopifnot(identical(txt, txt3)) ``` r None `identical` Test Objects for Exact Equality -------------------------------------------- ### Description The safe and reliable way to test two objects for being *exactly* equal. It returns `TRUE` in this case, `FALSE` in every other case. ### Usage ``` identical(x, y, num.eq = TRUE, single.NA = TRUE, attrib.as.set = TRUE, ignore.bytecode = TRUE, ignore.environment = FALSE, ignore.srcref = TRUE) ``` ### Arguments | | | | --- | --- | | `x, y` | any **R** objects. | | `num.eq` | logical indicating if (`<double>` and `<complex>` non-`[NA](na)`) numbers should be compared using `[==](comparison)` (‘equal’), or by bitwise comparison. The latter (non-default) differentiates between `-0` and `+0`. | | `single.NA` | logical indicating if there is conceptually just one numeric `[NA](na)` and one `[NaN](is.finite)`; `single.NA = FALSE` differentiates bit patterns. | | `attrib.as.set` | logical indicating if `<attributes>` of `x` and `y` should be treated as *unordered* tagged pairlists (“sets”); this currently also applies to `[slot](../../methods/html/slot)`s of S4 objects. It may well be too strict to set `attrib.as.set = FALSE`. | | `ignore.bytecode` | logical indicating if byte code should be ignored when comparing [closure](function)s. 
| | `ignore.environment` | logical indicating if their environments should be ignored when comparing [closure](function)s. | | `ignore.srcref` | logical indicating if their `"srcref"` attributes should be ignored when comparing [closure](function)s. | ### Details A call to `identical` is the way to test exact equality in `if` and `while` statements, as well as in logical expressions that use `&&` or `||`. In all these applications you need to be assured of getting a single logical value. Users often use the comparison operators, such as `==` or `!=`, in these situations. It looks natural, but it is not what these operators are designed to do in **R**. They return an object like the arguments. If you expected `x` and `y` to be of length 1, but it happened that one of them was not, you will *not* get a single `FALSE`. Similarly, if one of the arguments is `NA`, the result is also `NA`. In either case, the expression `if(x == y)....` won't work as expected. The function `all.equal` is also sometimes used to test equality this way, but was intended for something different: it allows for small differences in numeric results. The computations in `identical` are also reliable and usually fast. There should never be an error. The only known way to kill `identical` is by having an invalid pointer at the C level, generating a memory fault. It will usually find inequality quickly. Checking equality for two large, complicated objects can take longer if the objects are identical or nearly so, but represent completely independent copies. For most applications, however, the computational cost should be negligible. If `single.NA` is true, as by default, `identical` sees `[NaN](is.finite)` as different from `[NA\_real\_](na)`, but all `NaN`s are equal (and all `NA` of the same type are equal). Character strings are regarded as identical if they are in different marked encodings but would agree when translated to UTF-8. If `attrib.as.set` is true, as by default, comparison of attributes view them as a set (and not a vector, so order is not tested). If `ignore.bytecode` is true (the default), the compiled bytecode of a function (see `[cmpfun](../../compiler/html/compile)`) will be ignored in the comparison. If it is false, functions will compare equal only if they are copies of the same compiled object (or both are uncompiled). To check whether two different compiles are equal, you should compare the results of `[disassemble](../../compiler/html/compile)()`. You almost never want to use `identical` on datetimes of class `"POSIXlt"`: not only can different times in the different time zones represent the same time and time zones have multiple names, but several of the components are optional. Note that `identical(x, y, FALSE, FALSE, FALSE, FALSE)` pickily tests for exact equality. ### Value A single logical value, `TRUE` or `FALSE`, never `NA` and never anything other than a single value. ### Author(s) John Chambers and R Core ### References Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. ### See Also `<all.equal>` for descriptions of how two objects differ; `[Comparison](comparison)` and `[Logic](logic)` for elementwise comparisons. ### Examples ``` identical(1, NULL) ## FALSE -- don't try this with == identical(1, 1.) 
## TRUE in R (both are stored as doubles) identical(1, as.integer(1)) ## FALSE, stored as different types x <- 1.0; y <- 0.99999999999 ## how to test for object equality allowing for numeric fuzz : (E <- all.equal(x, y)) identical(TRUE, E) isTRUE(E) # alternative test ## If all.equal thinks the objects are different, it returns a ## character string, and the above expression evaluates to FALSE ## even for unusual R objects : identical(.GlobalEnv, environment()) ### ------- Pickyness Flags : ----------------------------- ## the infamous example: identical(0., -0.) # TRUE, i.e. not differentiated identical(0., -0., num.eq = FALSE) ## similar: identical(NaN, -NaN) # TRUE identical(NaN, -NaN, single.NA = FALSE) # differ on bit-level ### For functions ("closure"s): ---------------------------------------------- ### ~~~~~~~~~ f <- function(x) x f g <- compiler::cmpfun(f) g identical(f, g) # TRUE, as bytecode is ignored by default identical(f, g, ignore.bytecode=FALSE) # FALSE: bytecode differs ## GLM families contain several functions, some of which share an environment: p1 <- poisson() ; p2 <- poisson() identical(p1, p2) # FALSE identical(p1, p2, ignore.environment=TRUE) # TRUE ## in interactive use, the 'keep.source' option is typically true: op <- options(keep.source = TRUE) # and so, these have differing "srcref" : f1 <- function() {} f2 <- function() {} identical(f1,f2)# ignore.srcref= TRUE : TRUE identical(f1,f2, ignore.srcref=FALSE)# FALSE options(op) # revert to previous state ```
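The `"POSIXlt"` caveat from the ‘Details’ section is easy to observe; a minimal sketch (assuming the session's time zone is not UTC; otherwise the component values coincide, though the objects still differ in their `"tzone"` attribute):

```
## Two "POSIXlt" representations of the same instant: identical() is
## FALSE (components / "tzone" differ), although the times are equal.
now <- Sys.time()
lt1 <- as.POSIXlt(now)               # local time zone
lt2 <- as.POSIXlt(now, tz = "UTC")
identical(lt1, lt2)                  # FALSE
lt1 == lt2                           # TRUE: same point in time
```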
r None `notyet` Not Yet Implemented Functions and Unused Arguments ------------------------------------------------------------ ### Description In order to pinpoint missing functionality, the **R** core team uses these functions for missing **R** functions and not yet used arguments of existing **R** functions (which are typically there for compatibility purposes). You are very welcome to contribute your code ... ### Usage ``` .NotYetImplemented() .NotYetUsed(arg, error = TRUE) ``` ### Arguments | | | | --- | --- | | `arg` | an argument of a function that is not yet used. | | `error` | a logical. If `TRUE`, an error is signalled; if `FALSE`; only a warning is given. | ### See Also the contrary, `[Deprecated](deprecated)` and `[Defunct](defunct)` for outdated code. ### Examples ``` require(graphics) barplot(1:5, inside = TRUE) # 'inside' is not yet used ``` r None `interaction` Compute Factor Interactions ------------------------------------------ ### Description `interaction` computes a factor which represents the interaction of the given factors. The result of `interaction` is always unordered. ### Usage ``` interaction(..., drop = FALSE, sep = ".", lex.order = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | the factors for which interaction is to be computed, or a single list giving those factors. | | `drop` | if `drop` is `TRUE`, unused factor levels are dropped from the result. The default is to retain all factor levels. | | `sep` | string to construct the new level labels by joining the constituent ones. | | `lex.order` | logical indicating if the order of factor concatenation should be lexically ordered. | ### Value A factor which represents the interaction of the given factors. The levels are labelled as the levels of the individual factors joined by `sep` which is `.` by default. By default, when `lex.order = FALSE`, the levels are ordered so the level of the first factor varies fastest, then the second and so on. This is the reverse of lexicographic ordering (which you can get by `lex.order = TRUE`), and differs from `[:](colon)`. (It is done this way for compatibility with S.) ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `<factor>`; `[:](colon)` where `f:g` is similar to `interaction(f, g, sep = ":")` when `f` and `g` are factors. ### Examples ``` a <- gl(2, 4, 8) b <- gl(2, 2, 8, labels = c("ctrl", "treat")) s <- gl(2, 1, 8, labels = c("M", "F")) interaction(a, b) interaction(a, b, s, sep = ":") stopifnot(identical(a:s, interaction(a, s, sep = ":", lex.order = TRUE)), identical(a:s:b, interaction(a, s, b, sep = ":", lex.order = TRUE))) ``` r None `commandArgs` Extract Command Line Arguments --------------------------------------------- ### Description Provides access to a copy of the command line arguments supplied when this **R** session was invoked. ### Usage ``` commandArgs(trailingOnly = FALSE) ``` ### Arguments | | | | --- | --- | | `trailingOnly` | logical. Should only arguments after --args be returned? | ### Details These arguments are captured before the standard **R** command line processing takes place. This means that they are the unmodified values. This is especially useful with the --args command-line flag to **R**, as all of the command line after that flag is skipped. ### Value A character vector containing the name of the executable and the user-supplied command line arguments. The first element is the name of the executable by which **R** was invoked. 
The exact form of this element is platform dependent: it may be the fully qualified name, or simply the last component (or basename) of the application, or for an embedded **R** it can be anything the programmer supplied. If `trailingOnly = TRUE`, a character vector of those arguments (if any) supplied after --args. ### See Also `[R.home](rhome)()`, `[Startup](startup)` and `[BATCH](../../utils/html/batch)` ### Examples ``` commandArgs() ## Spawn a copy of this application as it was invoked, ## subject to shell quoting issues ## system(paste(commandArgs(), collapse = " ")) ``` r None `sum` Sum of Vector Elements ----------------------------- ### Description `sum` returns the sum of all the values present in its arguments. ### Usage ``` sum(..., na.rm = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | numeric or complex or logical vectors. | | `na.rm` | logical. Should missing values (including `NaN`) be removed? | ### Details This is a generic function: methods can be defined for it directly or via the `[Summary](groupgeneric)` group generic. For this to work properly, the arguments `...` should be unnamed, and dispatch is on the first argument. If `na.rm` is `FALSE` an `NA` or `NaN` value in any of the arguments will cause a value of `NA` or `NaN` to be returned, otherwise `NA` and `NaN` values are ignored. Logical true values are regarded as one, false values as zero. For historical reasons, `NULL` is accepted and treated as if it were `integer(0)`. Loss of accuracy can occur when summing values of different signs: this can even occur for sufficiently long integer inputs if the partial sums would cause integer overflow. Where possible extended-precision accumulators are used, typically well supported with C99 and newer, but possibly platform-dependent. ### Value The sum. If all of the `...` arguments are of type integer or logical, then the sum is `<integer>` when possible and is `double` otherwise. Integer overflow should no longer happen since **R** version 3.5.0. For other argument types it is a length-one numeric (`<double>`) or complex vector. **NB:** the sum of an empty set is zero, by definition. ### S4 methods This is part of the S4 `[Summary](../../methods/html/s4groupgeneric)` group generic. Methods for it must use the signature `x, ..., na.rm`. ‘[plotmath](../../grdevices/html/plotmath)’ for the use of `sum` in plot annotation. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[colSums](colsums)` for row and column sums. ### Examples ``` ## Pass a vector to sum, and it will add the elements together. sum(1:5) ## Pass several numbers to sum, and it also adds the elements. sum(1, 2, 3, 4, 5) ## In fact, you can pass vectors into several arguments, and everything gets added. sum(1:2, 3:5) ## If there are missing values, the sum is unknown, i.e., also missing, .... sum(1:5, NA) ## ... unless we exclude missing values explicitly: sum(1:5, NA, na.rm = TRUE) ``` r None `Random-user` User-supplied Random Number Generation ----------------------------------------------------- ### Description Function `[RNGkind](random)` allows user-coded uniform and normal random number generators to be supplied. The details are given here. ### Details A user-specified uniform RNG is called from entry points in dynamically-loaded compiled code. The user must supply the entry point `user_unif_rand`, which takes no arguments and returns a *pointer to* a double. The example below will show the general pattern. 
The generator should have at least 25 bits of precision. Optionally, the user can supply the entry point `user_unif_init`, which is called with an `unsigned int` argument when `[RNGkind](random)` (or `set.seed`) is called, and is intended to be used to initialize the user's RNG code. The argument is intended to be used to set the ‘seeds’; it is the `seed` argument to `set.seed` or an essentially random seed if `[RNGkind](random)` is called. If only these functions are supplied, no information about the generator's state is recorded in `.Random.seed`. Optionally, functions `user_unif_nseed` and `user_unif_seedloc` can be supplied which are called with no arguments and should return pointers to the number of seeds and to an integer (specifically, Int32) array of seeds. Calls to `GetRNGstate` and `PutRNGstate` will then copy this array to and from `.Random.seed`. A user-specified normal RNG is specified by a single entry point `user_norm_rand`, which takes no arguments and returns a *pointer to* a double. ### Warning As with all compiled code, mis-specifying these functions can crash **R**. Do include the ‘R\_ext/Random.h’ header file for type checking. ### Examples ``` ## Not run: ## Marsaglia's congruential PRNG #include <R_ext/Random.h> static Int32 seed; static double res; static int nseed = 1; double * user_unif_rand() { seed = 69069 * seed + 1; res = seed * 2.32830643653869e-10; return &res; } void user_unif_init(Int32 seed_in) { seed = seed_in; } int * user_unif_nseed() { return &nseed; } int * user_unif_seedloc() { return (int *) &seed; } /* ratio-of-uniforms for normal */ #include <math.h> static double x; double * user_norm_rand() { double u, v, z; do { u = unif_rand(); v = 0.857764 * (2. * unif_rand() - 1); x = v/u; z = 0.25 * x * x; if (z < 1. - u) break; if (z > 0.259/u + 0.35) continue; } while (z > -log(u)); return &x; } ## Use under Unix: R CMD SHLIB urand.c R > dyn.load("urand.so") > RNGkind("user") > runif(10) > .Random.seed > RNGkind(, "user") > rnorm(10) > RNGkind() [1] "user-supplied" "user-supplied" ## End(Not run) ``` r None `cbind` Combine R Objects by Rows or Columns --------------------------------------------- ### Description Take a sequence of vector, matrix or data-frame arguments and combine by *c*olumns or *r*ows, respectively. These are generic functions with methods for other **R** classes. ### Usage ``` cbind(..., deparse.level = 1) rbind(..., deparse.level = 1) ## S3 method for class 'data.frame' rbind(..., deparse.level = 1, make.row.names = TRUE, stringsAsFactors = FALSE, factor.exclude = TRUE) ``` ### Arguments | | | | --- | --- | | `...` | (generalized) vectors or matrices. These can be given as named arguments. Other **R** objects may be coerced as appropriate, or S4 methods may be used: see sections ‘Details’ and ‘Value’. (For the `"data.frame"` method of `cbind` these can be further arguments to `<data.frame>` such as `stringsAsFactors`.) | | `deparse.level` | integer controlling the construction of labels in the case of non-matrix-like arguments (for the default method): `deparse.level = 0` constructs no labels; the default, `deparse.level = 1 or 2` constructs labels from the argument names, see the ‘Value’ section below. | | `make.row.names` | (only for data frame method:) logical indicating if unique and valid `<row.names>` should be constructed from the arguments. | | `stringsAsFactors` | logical, passed to `<as.data.frame>`; only has an effect when the `...` arguments contain a (non-`data.frame`) `<character>`. 
| | `factor.exclude` | if the data frames contain factors, the default `TRUE` ensures that `NA` levels of factors are kept, see [PR#17562](https://bugs.R-project.org/bugzilla3/show_bug.cgi?id=17562) and the ‘Data frame methods’. In **R** versions up to 3.6.x, `factor.exclude = NA` has been implicitly hardcoded (**R** <= 3.6.0) or the default (**R** = 3.6.x, x >= 1). | ### Details The functions `cbind` and `rbind` are S3 generic, with methods for data frames. The data frame method will be used if at least one argument is a data frame and the rest are vectors or matrices. There can be other methods; in particular, there is one for time series objects. See the section on ‘Dispatch’ for how the method to be used is selected. If some of the arguments are of an S4 class, i.e., `[isS4](iss4)(.)` is true, S4 methods are sought also, and the hidden `cbind` / `rbind` functions from package methods maybe called, which in turn build on `[cbind2](../../methods/html/cbind2)` or `[rbind2](../../methods/html/cbind2)`, respectively. In that case, `deparse.level` is obeyed, similarly to the default method. In the default method, all the vectors/matrices must be atomic (see `<vector>`) or lists. Expressions are not allowed. Language objects (such as formulae and calls) and pairlists will be coerced to lists: other objects (such as names and external pointers) will be included as elements in a list result. Any classes the inputs might have are discarded (in particular, factors are replaced by their internal codes). If there are several matrix arguments, they must all have the same number of columns (or rows) and this will be the number of columns (or rows) of the result. If all the arguments are vectors, the number of columns (rows) in the result is equal to the length of the longest vector. Values in shorter arguments are recycled to achieve this length (with a `<warning>` if they are recycled only *fractionally*). When the arguments consist of a mix of matrices and vectors the number of columns (rows) of the result is determined by the number of columns (rows) of the matrix arguments. Any vectors have their values recycled or subsetted to achieve this length. For `cbind` (`rbind`), vectors of zero length (including `NULL`) are ignored unless the result would have zero rows (columns), for S compatibility. (Zero-extent matrices do not occur in S3 and are not ignored in **R**.) Matrices are restricted to less than *2^31* rows and columns even on 64-bit systems. So input vectors have the same length restriction: as from **R** 3.2.0 input matrices with more elements (but meeting the row and column restrictions) are allowed. ### Value For the default method, a matrix combining the `...` arguments column-wise or row-wise. (Exception: if there are no inputs or all the inputs are `NULL`, the value is `NULL`.) The type of a matrix result determined from the highest type of any of the inputs in the hierarchy raw < logical < integer < double < complex < character < list . For `cbind` (`rbind`) the column (row) names are taken from the `colnames` (`rownames`) of the arguments if these are matrix-like. Otherwise from the names of the arguments or where those are not supplied and `deparse.level > 0`, by deparsing the expressions given, for `deparse.level = 1` only if that gives a sensible name (a ‘symbol’, see `[is.symbol](name)`). For `cbind` row names are taken from the first argument with appropriate names: rownames for a matrix, or names for a vector of length the number of rows of the result. 
For `rbind` column names are taken from the first argument with appropriate names: colnames for a matrix, or names for a vector of length the number of columns of the result. ### Data frame methods The `cbind` data frame method is just a wrapper for `<data.frame>(..., check.names = FALSE)`. This means that it will split matrix columns in data frame arguments, and convert character columns to factors unless `stringsAsFactors = FALSE` is specified. The `rbind` data frame method first drops all zero-column and zero-row arguments. (If that leaves none, it returns the first argument with columns otherwise a zero-column zero-row data frame.) It then takes the classes of the columns from the first data frame, and matches columns by name (rather than by position). Factors have their levels expanded as necessary (in the order of the levels of the level sets of the factors encountered) and the result is an ordered factor if and only if all the components were ordered factors. (The last point differs from S-PLUS.) Old-style categories (integer vectors with levels) are promoted to factors. Note that for result column `j`, `<factor>(., exclude = X(j))` is applied, where ``` X(j) := if(isTRUE(factor.exclude)) { if(!NA.lev[j]) NA # else NULL } else factor.exclude ``` where `NA.lev[j]` is true iff any contributing data frame has had a `<factor>` in column `j` with an explicit `NA` level. ### Dispatch The method dispatching is *not* done via `[UseMethod](usemethod)()`, but by C-internal dispatching. Therefore there is no need for, e.g., `rbind.default`. The dispatch algorithm is described in the source file (‘.../src/main/bind.c’) as 1. For each argument we get the list of possible class memberships from the class attribute. 2. We inspect each class in turn to see if there is an applicable method. 3. If we find a method, we use it. Otherwise, if there was an S4 object among the arguments, we try S4 dispatch; otherwise, we use the default code. (Before **R** 4.0.0, an applicable method found was used only if identical to any method determined for prior arguments.) If you want to combine other objects with data frames, it may be necessary to coerce them to data frames first. (Note that this algorithm can result in calling the data frame method if all the arguments are either data frames or vectors, and this will result in the coercion of character vectors to factors.) ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<c>` to combine vectors (and lists) as vectors, `<data.frame>` to combine vectors and matrices as a data frame. ### Examples ``` m <- cbind(1, 1:7) # the '1' (= shorter vector) is recycled m m <- cbind(m, 8:14)[, c(1, 3, 2)] # insert a column m cbind(1:7, diag(3)) # vector is subset -> warning cbind(0, rbind(1, 1:3)) cbind(I = 0, X = rbind(a = 1, b = 1:3)) # use some names xx <- data.frame(I = rep(0,2)) cbind(xx, X = rbind(a = 1, b = 1:3)) # named differently cbind(0, matrix(1, nrow = 0, ncol = 4)) #> Warning (making sense) dim(cbind(0, matrix(1, nrow = 2, ncol = 0))) #-> 2 x 1 ## deparse.level dd <- 10 rbind(1:4, c = 2, "a++" = 10, dd, deparse.level = 0) # middle 2 rownames rbind(1:4, c = 2, "a++" = 10, dd, deparse.level = 1) # 3 rownames (default) rbind(1:4, c = 2, "a++" = 10, dd, deparse.level = 2) # 4 rownames ## cheap row names: b0 <- gl(3,4, labels=letters[1:3]) bf <- setNames(b0, paste0("o", seq_along(b0))) df <- data.frame(a = 1, B = b0, f = gl(4,3)) df. 
<- data.frame(a = 1, B = bf, f = gl(4,3)) new <- data.frame(a = 8, B ="B", f = "1") (df1 <- rbind(df , new)) (df.1 <- rbind(df., new)) stopifnot(identical(df1, rbind(df, new, make.row.names=FALSE)), identical(df1, rbind(df., new, make.row.names=FALSE))) ``` r None `names` The Names of an Object ------------------------------- ### Description Functions to get or set the names of an object. ### Usage ``` names(x) names(x) <- value ``` ### Arguments | | | | --- | --- | | `x` | an **R** object. | | `value` | a character vector of up to the same length as `x`, or `NULL`. | ### Details `names` is a generic accessor function, and `names<-` is a generic replacement function. The default methods get and set the `"names"` attribute of a vector (including a list) or pairlist. For an `<environment>` `env`, `names(env)` gives the names of the corresponding list, i.e., `names(as.list(env, all.names = TRUE))` which are also given by `<ls>(env, all.names = TRUE, sorted = FALSE)`. If the environment is used as a hash table, `names(env)` are its “keys”. If `value` is shorter than `x`, it is extended by character `NA`s to the length of `x`. It is possible to update just part of the names attribute via the general rules: see the examples. This works because the expression there is evaluated as `z <- "names<-"(z, "[<-"(names(z), 3, "c2"))`. The name `""` is special: it is used to indicate that there is no name associated with an element of a (atomic or generic) vector. Subscripting by `""` will match nothing (not even elements which have no name). A name can be character `NA`, but such a name will never be matched and is likely to lead to confusion. Both are <primitive> functions. ### Value For `names`, `NULL` or a character vector of the same length as `x`. (`NULL` is given if the object has no names, including for objects of types which cannot have names.) For an environment, the length is the number of objects in the environment but the order of the names is arbitrary. For `names<-`, the updated object. (Note that the value of `names(x) <- value` is that of the assignment, `value`, not the return value from the left-hand side.) ### Note For vectors, the names are one of the <attributes> with restrictions on the possible values. For pairlists, the names are the tags and converted to and from a character vector. For a one-dimensional array the `names` attribute really is `<dimnames>[[1]]`. Formally classed aka “S4” objects typically have `[slotNames](../../methods/html/slot)()` (and no `names()`). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[slotNames](../../methods/html/slot)`, `<dimnames>`. ### Examples ``` # print the names attribute of the islands data set names(islands) # remove the names attribute names(islands) <- NULL islands rm(islands) # remove the copy made z <- list(a = 1, b = "c", c = 1:3) names(z) # change just the name of the third element. names(z)[3] <- "c2" z z <- 1:3 names(z) ## assign just one name names(z)[2] <- "b" z ```
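The environment behaviour described in ‘Details’ can be checked directly; a small sketch (the environment and its bindings are invented for illustration):

```
## names() of an environment lists its bindings (its "keys"), including
## hidden ones; the order is arbitrary, hence the sort().
e <- new.env()
e$alpha <- 1
e$.hidden <- 2
length(names(e))                                       # 2
identical(sort(names(e)),
          sort(names(as.list(e, all.names = TRUE))))   # TRUE
```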
r None `files2` Manipulation of Directories and File Permissions ---------------------------------------------------------- ### Description These functions provide a low-level interface to the computer's file system. ### Usage ``` dir.exists(paths) dir.create(path, showWarnings = TRUE, recursive = FALSE, mode = "0777") Sys.chmod(paths, mode = "0777", use_umask = TRUE) Sys.umask(mode = NA) ``` ### Arguments | | | | --- | --- | | `path` | a character vector containing a single path name. Tilde expansion (see `<path.expand>`) is done. | | `paths` | character vectors containing file or directory paths. Tilde expansion (see `<path.expand>`) is done. | | `showWarnings` | logical; should the warnings on failure be shown? | | `recursive` | logical. Should elements of the path other than the last be created? If true, like the Unix command `mkdir -p`. | | `mode` | the mode to be used on Unix-alikes: it will be coerced by `[as.octmode](octmode)`. For `Sys.chmod` it is recycled along `paths`. | | `use_umask` | logical: should the mode be restricted by the `umask` setting? | ### Details `dir.create` creates the last element of the path, unless `recursive = TRUE`. Trailing path separators are discarded. The mode will be modified by the `umask` setting in the same way as for the system function `mkdir`. What modes can be set is OS-dependent, and it is unsafe to assume that more than three octal digits will be used. For more details see your OS's documentation on the system call `mkdir`, e.g. `man 2 mkdir` (and not that on the command-line utility of that name). One of the idiosyncrasies of Windows is that directory creation may report success but create a directory with a different name, for example `dir.create("G.S.")` creates ‘"G.S"’. This is undocumented, and what are the precise circumstances is unknown (and might depend on the version of Windows). Also avoid directory names with a trailing space. `Sys.chmod` sets the file permissions of one or more files. It may not be supported on a system (when a warning is issued). See the comments for `dir.create` for how modes are interpreted. Changing mode on a symbolic link is unlikely to work (nor be necessary). For more details see your OS's documentation on the system call `chmod`, e.g. `man 2 chmod` (and not that on the command-line utility of that name). Whether this changes the permission of a symbolic link or its target is OS-dependent (although to change the target is more common, and POSIX does not support modes for symbolic links: BSD-based Unixes do, though). `Sys.umask` sets the `umask` and returns the previous value: as a special case `mode = NA` just returns the current value. It may not be supported (when a warning is issued and `"0"` is returned). For more details see your OS's documentation on the system call `umask`, e.g. `man 2 umask`. How modes are handled depends on the file system, even on Unix-alikes (although their documentation is often written assuming a POSIX file system). So treat documentation cautiously if you are using, say, a FAT/FAT32 or network-mounted file system. See <files> for how file paths with marked encodings are interpreted. ### Value `dir.exists` returns a logical vector of `TRUE` or `FALSE` values (without names). `dir.create` and `Sys.chmod` return invisibly a logical vector indicating if the operation succeeded for each of the files attempted. Using a missing value for a path name will always be regarded as a failure. `dir.create` indicates failure if the directory already exists. 
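A minimal sketch of that return behaviour (the directory name under `tempdir()` is arbitrary; `print()` is needed because the value is returned invisibly):

```
d <- file.path(tempdir(), "demo.dir")
print(dir.create(d))                        # TRUE: newly created
print(dir.create(d, showWarnings = FALSE))  # FALSE: already exists
dir.exists(d)                               # TRUE
unlink(d, recursive = TRUE)                 # clean up
```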
If `showWarnings = TRUE`, `dir.create` will give a warning for an unexpected failure (e.g., not for a missing value nor for an already existing component for `recursive = TRUE`). `Sys.umask` returns the previous value of the `umask`, as a length-one object of class `"<octmode>"`: the visibility flag is off unless `mode` is `NA`. See also the section in the help for `[file.exists](files)` on case-insensitive file systems for the interpretation of `path` and `paths`. ### Author(s) Ross Ihaka, Brian Ripley ### See Also `<file.info>`, `[file.exists](files)`, `<file.path>`, `<list.files>`, `<unlink>`, `<basename>`, `<path.expand>`. ### Examples ``` ## Not run: ## Fix up maximal allowed permissions in a file tree Sys.chmod(list.dirs("."), "777") f <- list.files(".", all.files = TRUE, full.names = TRUE, recursive = TRUE) Sys.chmod(f, (file.info(f)$mode | "664")) ## End(Not run) ``` r None `class` Object Classes ----------------------- ### Description **R** possesses a simple generic function mechanism which can be used for an object-oriented style of programming. Method dispatch takes place based on the class of the first argument to the generic function. ### Usage ``` class(x) class(x) <- value unclass(x) inherits(x, what, which = FALSE) isa(x, what) oldClass(x) oldClass(x) <- value .class2(x) ``` ### Arguments | | | | --- | --- | | `x` | a **R** object | | `what, value` | a character vector naming classes. `value` can also be `NULL`. | | `which` | logical affecting return value: see ‘Details’. | ### Details Here, we describe the so called “S3” classes (and methods). For “S4” classes (and methods), see ‘Formal classes’ below. Many **R** objects have a `class` attribute, a character vector giving the names of the classes from which the object *inherits*. (Functions `oldClass` and `oldClass<-` get and set the attribute, which can also be done directly.) If the object does not have a class attribute, it has an implicit class, notably `"matrix"`, `"array"`, `"function"` or `"numeric"` or the result of `<typeof>(x)` (which is similar to `<mode>(x)`), but for type `"language"` and `<mode>` `"call"`, where the following extra classes exist for the corresponding function `<call>`s: `if`, `while`, `for`, `=`, `<-`, `(`, `{`, `call`. Note that for objects `x` of an implicit (or an S4) class, when a (S3) generic function `foo(x)` is called, method dispatch may use more classes than are returned by `class(x)`, e.g., for a numeric matrix, the `foo.numeric()` method may apply. The exact full `<character>` vector of the classes which `[UseMethod](usemethod)()` uses, is available as `.class2(x)` since **R** version 4.0.0. (This also applies to S4 objects when S3 dispatch is considered, see below.) Beware that using `.class2()` for other reasons than didactical, diagnostical or for debugging may rather be a misuse than smart. `[NULL](null)` objects (of implicit class `"NULL"`) cannot have attributes (hence no `class` attribute) and attempting to assign a class is an error. When a generic function `fun` is applied to an object with class attribute `c("first", "second")`, the system searches for a function called `fun.first` and, if it finds it, applies it to the object. If no such function is found, a function called `fun.second` is tried. If no class name produces a suitable function, the function `fun.default` is used (if it exists). If there is no class attribute, the implicit class is tried, then the default method. The function `class` prints the vector of names of classes an object inherits from. 
Correspondingly, `class<-` sets the classes an object inherits from. Assigning `NULL` removes the class attribute. `unclass` returns (a copy of) its argument with its class attribute removed. (It is not allowed for objects which cannot be copied, namely environments and external pointers.) `inherits` indicates whether its first argument inherits from any of the classes specified in the `what` argument. If `which` is `TRUE` then an integer vector of the same length as `what` is returned. Each element indicates the position in the `class(x)` matched by the element of `what`; zero indicates no match. If `which` is `FALSE` then `TRUE` is returned by `inherits` if any of the names in `what` match with any `class`. `isa` tests whether `x` is an object of class(es) as given in `what` by using `[is](../../methods/html/is)` if `x` is an S4 object, and otherwise giving `TRUE` iff *all* elements of `class(x)` are contained in `what`. All but `inherits` are <primitive> functions. ### Formal classes An additional mechanism of *formal* classes, nicknamed “S4”, is available in package methods which is attached by default. For objects which have a formal class, its name is returned by `class` as a character vector of length one and method dispatch can happen on *several* arguments, instead of only the first. However, S3 method selection attempts to treat objects from an S4 class as if they had the appropriate S3 class attribute, as does `inherits`. Therefore, S3 methods can be defined for S4 classes. See the ‘[Introduction](../../methods/html/introduction)’ and ‘[Methods\_for\_S3](../../methods/html/methods_for_s3)’ help pages for basic information on S4 methods and for the relation between these and S3 methods. The replacement version of the function sets the class to the value provided. For classes that have a formal definition, directly replacing the class this way is strongly deprecated. The expression `[as](../../methods/html/as)(object, value)` is the way to coerce an object to a particular class. The analogue of `inherits` for formal classes is `[is](../../methods/html/is)`. The two functions behave consistently with one exception: S4 classes can have conditional inheritance, with an explicit test. In this case, `is` will test the condition, but `inherits` ignores all conditional superclasses. ### Note Functions `oldClass` and `oldClass<-` behave in the same way as functions of those names in S-PLUS 5/6, *but* in **R** `[UseMethod](usemethod)` dispatches on the class as returned by `class` (with some interpolated classes: see the link) rather than `oldClass`. *However*, [group generic](groupgeneric)s dispatch on the `oldClass` for efficiency, and [internal generic](internalmethods)s only dispatch on objects for which `<is.object>` is true. In older versions of **R**, assigning a zero-length vector with `class` removed the class: it is now an error (whereas it still works for `oldClass`). It is clearer to always assign `NULL` to remove the class. 
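The method search order described under ‘Details’ (`fun.first`, then `fun.second`, then `fun.default`) can be demonstrated with a small sketch; the generic `fun` and its methods are invented for illustration:

```
fun <- function(x) UseMethod("fun")
fun.second  <- function(x) "fun.second"
fun.default <- function(x) "fun.default"
x <- structure(1, class = c("first", "second"))
fun(x)           # no fun.first exists, so fun.second is used
fun(unclass(x))  # implicit class only: falls through to fun.default
```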
### See Also `[UseMethod](usemethod)`, `[NextMethod](usemethod)`, ‘[group generic](groupgeneric)’, ‘[internal generic](internalmethods)’ ### Examples ``` x <- 10 class(x) # "numeric" oldClass(x) # NULL inherits(x, "a") #FALSE class(x) <- c("a", "b") inherits(x,"a") #TRUE inherits(x, "a", TRUE) # 1 inherits(x, c("a", "b", "c"), TRUE) # 1 2 0 class( quote(pi) ) # "name" ## regular calls class( quote(sin(pi*x)) ) # "call" ## special calls class( quote(x <- 1) ) # "<-" class( quote((1 < 2)) ) # "(" class( quote( if(8<3) pi ) ) # "if" .class2(pi) # "double" "numeric" .class2(matrix(1:6, 2,3)) # "matrix" "array" "integer" "numeric" ``` r None `slice.index` Slice Indexes in an Array ---------------------------------------- ### Description Returns a matrix of integers indicating the number of their slice in a given array. ### Usage ``` slice.index(x, MARGIN) ``` ### Arguments | | | | --- | --- | | `x` | an array. If `x` has no dimension attribute, it is considered a one-dimensional array. | | `MARGIN` | an integer vector giving the dimension numbers to slice by. | ### Details If `MARGIN` gives a single dimension, then all elements of slice number `i` with respect to this have value `i`. In general, slice numbers are obtained by numbering all combinations of indices in the dimensions given by `MARGIN` in column-major order. I.e., with *m\_1*, ..., *m\_k* the dimension numbers (elements of `MARGIN`) sliced by and *d\_{m\_1}*, ..., *d\_{m\_k}* the corresponding extents, and *n\_1 = 1*, *n\_2 = d\_{m\_1}*, ..., *n\_k = d\_{m\_1} … d\_{m\_{k-1}}*, the number of the slice where dimension *m\_1* has value *i\_1*, ..., dimension *m\_k* has value *i\_k* is *1 + n\_1 (i\_1 - 1) + … + n\_k (i\_k - 1)*. ### Value An integer array `y` with dimensions corresponding to those of `x`. ### See Also `<row>` and `<col>` for determining row and column indexes; in fact, these are special cases of `slice.index` corresponding to `MARGIN` equal to 1 and 2, respectively when `x` is a matrix. ### Examples ``` x <- array(1 : 24, c(2, 3, 4)) slice.index(x, 2) slice.index(x, c(1, 3)) ## When slicing by dimensions 1 and 3, slice index 5 is obtained for ## dimension 1 has value 1 and dimension 3 has value 3 (see above): which(slice.index(x, c(1, 3)) == 5, arr.ind = TRUE) ``` r None `rle` Run Length Encoding -------------------------- ### Description Compute the lengths and values of runs of equal values in a vector – or the reverse operation. ### Usage ``` rle(x) inverse.rle(x, ...) ## S3 method for class 'rle' print(x, digits = getOption("digits"), prefix = "", ...) ``` ### Arguments | | | | --- | --- | | `x` | a vector (atomic, not a list) for `rle()`; an object of class `"rle"` for `inverse.rle()`. | | `...` | further arguments; ignored here. | | `digits` | number of significant digits for printing, see `<print.default>`. | | `prefix` | character string, prepended to each printed line. | ### Details ‘vector’ is used in the sense of `[is.vector](vector)`. Missing values are regarded as unequal to the previous value, even if that is also missing. `inverse.rle()` is the inverse function of `rle()`, reconstructing `x` from the runs. ### Value `rle()` returns an object of class `"rle"` which is a list with components: | | | | --- | --- | | `lengths` | an integer vector containing the length of each run. | | `values` | a vector of the same length as `lengths` with the corresponding values. | `inverse.rle()` returns an atomic vector. 
### Examples ``` x <- rev(rep(6:10, 1:5)) rle(x) ## lengths [1:5] 5 4 3 2 1 ## values [1:5] 10 9 8 7 6 z <- c(TRUE, TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, TRUE, TRUE) rle(z) rle(as.character(z)) print(rle(z), prefix = "..| ") N <- integer(0) stopifnot(x == inverse.rle(rle(x)), identical(N, inverse.rle(rle(N))), z == inverse.rle(rle(z))) ``` r None `kronecker` Kronecker Products on Arrays ----------------------------------------- ### Description Computes the generalised kronecker product of two arrays, `X` and `Y`. ### Usage ``` kronecker(X, Y, FUN = "*", make.dimnames = FALSE, ...) X %x% Y ``` ### Arguments | | | | --- | --- | | `X` | A vector or array. | | `Y` | A vector or array. | | `FUN` | a function; it may be a quoted string. | | `make.dimnames` | Provide dimnames that are the product of the dimnames of `X` and `Y`. | | `...` | optional arguments to be passed to `FUN`. | ### Details If `X` and `Y` do not have the same number of dimensions, the smaller array is padded with dimensions of size one. The returned array comprises submatrices constructed by taking `X` one term at a time and expanding that term as `FUN(x, Y, ...)`. `%x%` is an alias for `kronecker` (where `FUN` is hardwired to `"*"`). ### Value An array `A` with dimensions `dim(X) * dim(Y)`. ### Author(s) Jonathan Rougier ### References Shayle R. Searle (1982) *Matrix Algebra Useful for Statistics.* John Wiley and Sons. ### See Also `<outer>`, on which `kronecker` is built and `[%\*%](matmult)` for usual matrix multiplication. ### Examples ``` # simple scalar multiplication ( M <- matrix(1:6, ncol = 2) ) kronecker(4, M) # Block diagonal matrix: kronecker(diag(1, 3), M) # ask for dimnames fred <- matrix(1:12, 3, 4, dimnames = list(LETTERS[1:3], LETTERS[4:7])) bill <- c("happy" = 100, "sad" = 1000) kronecker(fred, bill, make.dimnames = TRUE) bill <- outer(bill, c("cat" = 3, "dog" = 4)) kronecker(fred, bill, make.dimnames = TRUE) ``` r None `structure` Attribute Specification ------------------------------------ ### Description `structure` returns the given object with further <attributes> set. ### Usage ``` structure(.Data, ...) ``` ### Arguments | | | | --- | --- | | `.Data` | an object which will have various attributes attached to it. | | `...` | attributes, specified in `tag = value` form, which will be attached to data. | ### Details Adding a class `"factor"` will ensure that numeric codes are given integer storage mode. For historical reasons (these names are used when deparsing), attributes `".Dim"`, `".Dimnames"`, `".Names"`, `".Tsp"` and `".Label"` are renamed to `"dim"`, `"dimnames"`, `"names"`, `"tsp"` and `"levels"`. It is possible to give the same tag more than once, in which case the last value assigned wins. As with other ways of assigning attributes, using `tag = NULL` removes attribute `tag` from `.Data` if it is present. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<attributes>`, `<attr>`. ### Examples ``` structure(1:6, dim = 2:3) ``` r None `noquote` Class for ‘no quote’ Printing of Character Strings ------------------------------------------------------------- ### Description Print character strings without quotes. ### Usage ``` noquote(obj, right = FALSE) ## S3 method for class 'noquote' print(x, quote = FALSE, right = FALSE, ...) ## S3 method for class 'noquote' c(..., recursive = FALSE) ``` ### Arguments | | | | --- | --- | | `obj` | any **R** object, typically a vector of `<character>` strings. 
| | `right` | optional `<logical>` eventually to be passed to `print()`, used by `<print.default>()`, indicating whether or not strings should be right aligned. | | `x` | an object of class `"noquote"`. | | `quote, ...` | further options passed to next methods, such as `<print>`. | | `recursive` | for compatibility with the generic `<c>` function. | ### Details `noquote` returns its argument as an object of class `"noquote"`. There is a method for `c()` and subscript method (`"[.noquote"`) which ensures that the class is not lost by subsetting. The print method (`print.noquote`) prints character strings *without* quotes (`"...."` is printed as `....`). If `right` is specified in a call `print(x, right=*)`, it takes precedence over a possible `right` setting of `x`, e.g., created by `x <- noquote(*, right=TRUE)`. These functions exist both as utilities and as an example of using (S3) `<class>` and object orientation. ### Author(s) Martin Maechler [[email protected]](mailto:[email protected]) ### See Also `[methods](../../utils/html/methods)`, `<class>`, `<print>`. ### Examples ``` letters nql <- noquote(letters) nql nql[1:4] <- "oh" nql[1:12] cmp.logical <- function(log.v) { ## Purpose: compact printing of logicals log.v <- as.logical(log.v) noquote(if(length(log.v) == 0)"()" else c(".","|")[1 + log.v]) } cmp.logical(stats::runif(20) > 0.8) chmat <- as.matrix(format(stackloss)) # a "typical" character matrix ## noquote(*, right=TRUE) so it prints exactly like a data frame chmat <- noquote(chmat, right = TRUE) chmat ``` r None `pushBack` Push Text Back on to a Connection --------------------------------------------- ### Description Functions to push back text lines onto a [connection](connections), and to enquire how many lines are currently pushed back. ### Usage ``` pushBack(data, connection, newLine = TRUE, encoding = c("", "bytes", "UTF-8")) pushBackLength(connection) clearPushBack(connection) ``` ### Arguments | | | | --- | --- | | `data` | a character vector. | | `connection` | A [connection](connections). | | `newLine` | logical. If true, a newline is appended to each string pushed back. | | `encoding` | character string, partially matched. See details. | ### Details Several character strings can be pushed back on one or more occasions. The occasions form a stack, so the first line to be retrieved will be the first string from the last call to `pushBack`. Lines which are pushed back are read prior to the normal input from the connection, by the normal text-reading functions such as `[readLines](readlines)` and `<scan>`. Pushback is only allowed for readable connections in text mode. Not all uses of connections respect pushbacks, in particular the input connection is still wired directly, so for example parsing commands from the console and `<scan>("")` ignore pushbacks on `[stdin](showconnections)`. When character strings with a marked encoding (see `[Encoding](encoding)`) are pushed back they are converted to the current encoding if `encoding = ""`. This may involve representing characters as <U+xxxx> if they cannot be converted. They will be converted to UTF-8 if `encoding = "UTF-8"` or left as-is if `encoding = "bytes"`. ### Value `pushBack` and `clearPushBack()` return nothing, invisibly. `pushBackLength` returns the number of lines currently pushed back. ### See Also `<connections>`, `[readLines](readlines)`. 
### Examples ``` zz <- textConnection(LETTERS) readLines(zz, 2) pushBack(c("aa", "bb"), zz) pushBackLength(zz) readLines(zz, 1) pushBackLength(zz) readLines(zz, 1) readLines(zz, 1) close(zz) ```
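The stack behaviour described in ‘Details’ (the first line retrieved comes from the *last* call to `pushBack`) and the effect of `clearPushBack()` can be seen in a minimal sketch:

```
zz <- textConnection("original line")
pushBack("first push", zz)
pushBack("second push", zz)
readLines(zz, 1) # "second push": the last push is read first
pushBackLength(zz) # 1, as "first push" remains
clearPushBack(zz) # discard the remaining pushback
readLines(zz, 1) # "original line"
close(zz)
```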
r None `jitter` ‘Jitter’ (Add Noise) to Numbers ----------------------------------------- ### Description Add a small amount of noise to a numeric vector. ### Usage ``` jitter(x, factor = 1, amount = NULL) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector to which *jitter* should be added. | | `factor` | numeric. | | `amount` | numeric; if positive, used as *amount* (see below), otherwise, if `= 0` the default is `factor * z/50`. Default (`NULL`): `factor * d/5` where `d` is about the smallest difference between `x` values. | ### Details The result, say `r`, is `r <- x + runif(n, -a, a)` where `n <- length(x)` and `a` is the `amount` argument (if specified). Let `z <- max(x) - min(x)` (assuming the usual case). The amount `a` to be added is either provided as *positive* argument `amount` or otherwise computed from `z`, as follows: If `amount == 0`, we set `a <- factor * z/50` (same as S). If `amount` is `NULL` (*default*), we set `a <- factor * d/5` where *d* is the smallest difference between adjacent unique (apart from fuzz) `x` values. ### Value `jitter(x, ...)` returns a numeric of the same length as `x`, but with an `amount` of noise added in order to break ties. ### Author(s) Werner Stahel and Martin Maechler, ETH Zurich ### References Chambers, J. M., Cleveland, W. S., Kleiner, B. and Tukey, P.A. (1983) *Graphical Methods for Data Analysis.* Wadsworth; figures 2.8, 4.22, 5.4. Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `[rug](../../graphics/html/rug)` which you may want to combine with `jitter`. ### Examples ``` round(jitter(c(rep(1, 3), rep(1.2, 4), rep(3, 3))), 3) ## These two 'fail' with S-plus 3.x: jitter(rep(0, 7)) jitter(rep(10000, 5)) ``` r None `as.Date` Date Conversion Functions to and from Character ---------------------------------------------------------- ### Description Functions to convert between character representations and objects of class `"Date"` representing calendar dates. ### Usage ``` as.Date(x, ...) ## S3 method for class 'character' as.Date(x, format, tryFormats = c("%Y-%m-%d", "%Y/%m/%d"), optional = FALSE, ...) ## S3 method for class 'numeric' as.Date(x, origin, ...) ## S3 method for class 'POSIXct' as.Date(x, tz = "UTC", ...) ## S3 method for class 'Date' format(x, ...) ## S3 method for class 'Date' as.character(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object to be converted. | | `format` | `<character>` string. If not specified, it will try `tryFormats` one by one on the first non-`NA` element, and give an error if none works. Otherwise, the processing is via `<strptime>()` whose help page describes available conversion specifications. | | `tryFormats` | `<character>` vector of `format` strings to try if `format` is not specified. | | `optional` | `<logical>` indicating to return `NA` (instead of signalling an error) if the format guessing does not succeed. | | `origin` | a `Date` object, or something which can be coerced by `as.Date(origin, ...)` to such an object. | | `tz` | a time zone name. | | `...` | further arguments to be passed from or to other methods, including `format` for `as.character` and `as.Date` methods. | ### Details The usual vector re-cycling rules are applied to `x` and `format` so the answer will be of length that of the longer of the vectors. Locale-specific conversions to and from character strings are used where appropriate and available. This affects the names of the days and months. 
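As a minimal illustration of the re-cycling rule mentioned above, `format` can itself be a vector, matched element-wise against `x` (the formats and values here are chosen only for the example):

```
## the two strings encode the same date in different formats:
as.Date(c("2001-02-03", "03/02/2001"), format = c("%Y-%m-%d", "%d/%m/%Y"))
## both elements are "2001-02-03"
```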
The `as.Date` methods accept character strings, factors, logical `NA` and objects of classes `"[POSIXlt](datetimeclasses)"` and `"[POSIXct](datetimeclasses)"`. (The last is converted to days by ignoring the time after midnight in the representation of the time in specified time zone, default UTC.) Also objects of class `"date"` (from package [date](../../date/html/as.date)) and `"dates"` (from package [chron](../../chron/html/chron)). Character strings are processed as far as necessary for the format specified: any trailing characters are ignored. `as.Date` will accept numeric data (the number of days since an epoch), but *only* if `origin` is supplied. The `format` and `as.character` methods ignore any fractional part of the date. ### Value The `format` and `as.character` methods return a character vector representing the date. `NA` dates are returned as `NA_character_`. The `as.Date` methods return an object of class `"[Date](dates)"`. ### Conversion from other Systems Most systems record dates internally as the number of days since some origin, but this is fraught with problems, including * Is the origin day 0 or day 1? As the ‘Examples’ show, Excel manages to use both choices for its two date systems. * If the origin is far enough back, the designers may show their ignorance of calendar systems. For example, Excel's designer thought 1900 was a leap year (claiming to copy the error from earlier DOS spreadsheets), and Matlab's designer chose the non-existent date of ‘January 0, 0000’ (there is no such day), not specifying the calendar. (There is such a year in the ‘Gregorian’ calendar as used in ISO 8601:2004, but that does say that it is only to be used for years before 1582 with the agreement of the parties in information exchange.) The only safe procedure is to check the other systems values for known dates: reports on the Internet (including R-help) are more often wrong than right. ### Note The default formats follow the rules of the ISO 8601 international standard which expresses a day as `"2001-02-03"`. If the date string does not specify the date completely, the returned answer may be system-specific. The most common behaviour is to assume that a missing year, month or day is the current one. If it specifies a date incorrectly, reliable implementations will give an error and the date is reported as `NA`. Unfortunately some common implementations (such as glibc) are unreliable and guess at the intended meaning. Years before 1CE (aka 1AD) will probably not be handled correctly. ### References International Organization for Standardization (2004, 1988, 1997, ...) *ISO 8601. Data elements and interchange formats – Information interchange – Representation of dates and times.* For links to versions available on-line see (at the time of writing) <https://www.qsl.net/g1smd/isopdf.htm>. ### See Also [Date](dates) for details of the date class; `<locales>` to query or set a locale. Your system's help pages on `strftime` and `strptime` to see how to specify their formats. Windows users will find no help page for `strptime`: code based on glibc is used (with corrections), so all the format specifiers described here are supported, but with no alternative number representation nor era available in any locale. ### Examples ``` ## locale-specific version of the date format(Sys.Date(), "%a %b %d") ## read in date info in format 'ddmmmyyyy' ## This will give NA(s) in some locales; setting the C locale ## as in the commented lines will overcome this on most systems. 
## lct <- Sys.getlocale("LC_TIME"); Sys.setlocale("LC_TIME", "C") x <- c("1jan1960", "2jan1960", "31mar1960", "30jul1960") z <- as.Date(x, "%d%b%Y") ## Sys.setlocale("LC_TIME", lct) z ## read in date/time info in format 'm/d/y' dates <- c("02/27/92", "02/27/92", "01/14/92", "02/28/92", "02/01/92") as.Date(dates, "%m/%d/%y") ## date given as number of days since 1900-01-01 (a date in 1989) as.Date(32768, origin = "1900-01-01") ## Excel is said to use 1900-01-01 as day 1 (Windows default) or ## 1904-01-01 as day 0 (Mac default), but this is complicated by Excel ## incorrectly treating 1900 as a leap year. ## So for dates (post-1901) from Windows Excel as.Date(35981, origin = "1899-12-30") # 1998-07-05 ## and Mac Excel as.Date(34519, origin = "1904-01-01") # 1998-07-05 ## (these values come from http://support.microsoft.com/kb/214330) ## Experiment shows that Matlab's origin is 719529 days before ours, ## (it takes the non-existent 0000-01-01 as day 1) ## so Matlab day 734373 can be imported as as.Date(734373, origin = "1970-01-01") - 719529 # 2010-08-23 ## (value from ## http://www.mathworks.de/de/help/matlab/matlab_prog/represent-date-and-times-in-MATLAB.html) ## Time zone effect z <- ISOdate(2010, 04, 13, c(0,12)) # midnight and midday UTC as.Date(z) # in UTC ## these time zone names are common as.Date(z, tz = "NZ") as.Date(z, tz = "HST") # Hawaii ``` r None `col` Column Indexes --------------------- ### Description Returns a matrix of integers indicating their column number in a matrix-like object, or a factor of column labels. ### Usage ``` col(x, as.factor = FALSE) .col(dim) ``` ### Arguments | | | | --- | --- | | `x` | a matrix-like object, that is one with a two-dimensional `dim`. | | `dim` | a matrix dimension, i.e., an integer valued numeric vector of length two (with non-negative entries). | | `as.factor` | a logical value indicating whether the value should be returned as a factor of column labels (created if necessary) rather than as numbers. | ### Value An integer (or factor) matrix with the same dimensions as `x` and whose `ij`-th element is equal to `j` (or the `j`-th column label). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<row>` to get rows; `<slice.index>` for a general way to get slice indices in an array. ### Examples ``` # extract an off-diagonal of a matrix ma <- matrix(1:12, 3, 4) ma[row(ma) == col(ma) + 1] # create an identity 5-by-5 matrix more slowly than diag(n = 5): x <- matrix(0, nrow = 5, ncol = 5) x[row(x) == col(x)] <- 1 (i34 <- .col(3:4)) stopifnot(identical(i34, .col(c(3,4)))) # 'dim' maybe "double" ``` r None `expression` Unevaluated Expressions ------------------------------------- ### Description Creates or tests for objects of mode `"expression"`. ### Usage ``` expression(...) is.expression(x) as.expression(x, ...) ``` ### Arguments | | | | --- | --- | | `...` | `expression`: **R** objects, typically calls, symbols or constants. `as.expression`: arguments to be passed to methods. | | `x` | an arbitrary **R** object. | ### Details ‘Expression’ here is not being used in its colloquial sense, that of mathematical expressions. Those are calls (see `<call>`) in **R**, and an **R** expression vector is a list of calls, symbols etc, for example as returned by `<parse>`. As an object of mode `"expression"` is a list, it can be subsetted by `[`, `[[` or `$`, the latter two extracting individual calls etc. 
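For instance, using the same expression vector as in the ‘Examples’ below:

```
ex <- expression(u, 2, u + 0:9)
ex[2:3] # an expression vector of length two
ex[[3]] # the single call u + 0:9
```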
The replacement forms of these operators can be used to replace or delete elements. `expression` and `is.expression` are <primitive> functions. `expression` is ‘special’: it does not evaluate its arguments. ### Value `expression` returns a vector of type `"expression"` containing its arguments (unevaluated). `is.expression` returns `TRUE` if `x` is an expression object and `FALSE` otherwise. `as.expression` attempts to coerce its argument into an expression object. It is generic, and only the default method is described here. (The default method calls `as.vector(type = "expression")` and so may dispatch methods for `[as.vector](vector)`.) `NULL`, calls, symbols (see `[as.symbol](name)`) and pairlists are returned as the element of a length-one expression vector. Atomic vectors are placed element-by-element into an expression vector (without using any names): lists have their type changed to an expression vector (keeping all attributes). Other types are not currently supported. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<call>`, `<eval>`, `<function>`. Further, `[text](../../graphics/html/text)` and `[legend](../../graphics/html/legend)` for plotting mathematical expressions. ### Examples ``` length(ex1 <- expression(1 + 0:9)) # 1 ex1 eval(ex1) # 1:10 length(ex3 <- expression(u, 2, u + 0:9)) # 3 mode(ex3[3]) # expression mode(ex3[[3]]) # call ## but not all components are 'call's : sapply(ex3, mode) # name numeric call sapply(ex3, typeof) # symbol double language rm(ex3) ``` r None `bitwise` Bitwise Logical Operations ------------------------------------- ### Description Logical operations on integer vectors with elements viewed as sets of bits. ### Usage ``` bitwNot(a) bitwAnd(a, b) bitwOr(a, b) bitwXor(a, b) bitwShiftL(a, n) bitwShiftR(a, n) ``` ### Arguments | | | | --- | --- | | `a, b` | integer vectors; numeric vectors are coerced to integer vectors. | | `n` | non-negative integer vector of values up to 31. | ### Details Each element of an integer vector has 32 bits. Pairwise operations can result in integer `NA`. Shifting is done assuming the values represent unsigned integers. ### Value An integer vector of length the longer of the arguments, or zero length if one is zero-length. The output element is `NA` if an input is `NA` (after coercion) or an invalid shift. ### See Also The logical operators, `[!](logic)`, `[&](logic)`, `[|](logic)`, `[xor](logic)`. Notably these *do* work bitwise for `<raw>` arguments. The classes `"<octmode>"` and `"<hexmode>"` whose implementation of the standard logical operators is based on these functions. Package [bitops](https://CRAN.R-project.org/package=bitops) has similar functions for numeric vectors which differ in the way they treat integers *2^31* or larger. ### Examples ``` bitwNot(0:12) # -1 -2 ... -13 bitwAnd(15L, 7L) # 7 bitwOr (15L, 7L) # 15 bitwXor(15L, 7L) # 8 bitwXor(-1L, 1L) # -2 ## The "same" for 'raw' instead of integer : rr12 <- as.raw(0:12) ; rbind(rr12, !rr12) c(r15 <- as.raw(15), r7 <- as.raw(7)) # 0f 07 r15 & r7 # 07 r15 | r7 # 0f xor(r15, r7) # 08 bitwShiftR(-1, 1:31) # shifts of 2^32-1 = 4294967295 ``` r None `readRenviron` Set Environment Variables from a File ----------------------------------------------------- ### Description Read a file such as ‘.Renviron’ or ‘Renviron.site’ in the format described in the help for [Startup](startup), and set environment variables as defined in the file.
### Usage ``` readRenviron(path) ``` ### Arguments | | | | --- | --- | | `path` | A length-one character vector giving the path to the file. Tilde-expansion is performed where supported. | ### Value Scalar logical indicating if the file was read successfully. Returned invisibly. If the file cannot be opened for reading, a warning is given. ### See Also `[Startup](startup)` for the file format. ### Examples ``` ## Not run: ## re-read a startup file (or read it in a vanilla session) readRenviron("~/.Renviron") ## End(Not run) ``` r None `getLoadedDLLs` Get DLLs Loaded in Current Session --------------------------------------------------- ### Description This function provides a way to get a list of all the DLLs (see `[dyn.load](dynload)`) that are currently loaded in the **R** session. ### Usage ``` getLoadedDLLs() ``` ### Details This queries the internal table that manages the DLLs. ### Value An object of class `"DLLInfoList"` which is a `<list>` with an element corresponding to each DLL that is currently loaded in the session. Each element is an object of class `"DLLInfo"` which has the following entries. | | | | --- | --- | | `name` | the abbreviated name. | | `path` | the fully qualified name of the loaded DLL. | | `dynamicLookup` | a logical value indicating whether R uses only the registration information to resolve symbols or whether it searches the entire symbol table of the DLL. | | `handle` | a reference to the C-level data structure that provides access to the contents of the DLL. This is an object of class `"DLLHandle"`. | Note that the class `DLLInfo` has a method for `$` which can be used to resolve native symbols within that DLL. Therefore, one must access the R-level elements described above using `[[`, e.g. `x[["name"]]` or `x[["handle"]]`. ### Note We are starting to use the `handle` elements in the DLL object to resolve symbols more directly in **R**. ### Author(s) Duncan Temple Lang [[email protected]](mailto:[email protected]). ### See Also `[getDLLRegisteredRoutines](getdllregisteredroutines)`, `[getNativeSymbolInfo](getnativesymbolinfo)` ### Examples ``` getLoadedDLLs() utils::tail(getLoadedDLLs(), 2) # the last 2 loaded ones, still a DLLInfoList ``` r None `ls` List Objects ------------------ ### Description `ls` and `objects` return a vector of character strings giving the names of the objects in the specified environment. When invoked with no argument at the top level prompt, `ls` shows what data sets and functions a user has defined. When invoked with no argument inside a function, `ls` returns the names of the function's local variables: this is useful in conjunction with `browser`. ### Usage ``` ls(name, pos = -1L, envir = as.environment(pos), all.names = FALSE, pattern, sorted = TRUE) objects(name, pos= -1L, envir = as.environment(pos), all.names = FALSE, pattern, sorted = TRUE) ``` ### Arguments | | | | --- | --- | | `name` | which environment to use in listing the available objects. Defaults to the *current* environment. Although called `name` for back compatibility, in fact this argument can specify the environment in any form; see the ‘Details’ section. | | `pos` | an alternative argument to `name` for specifying the environment as a position in the search list. Mostly there for back compatibility. | | `envir` | an alternative argument to `name` for specifying the environment. Mostly there for back compatibility. | | `all.names` | a logical value. If `TRUE`, all object names are returned. If `FALSE`, names which begin with a . are omitted. 
| | `pattern` | an optional [regular expression](regex). Only names matching `pattern` are returned. `[glob2rx](../../utils/html/glob2rx)` can be used to convert wildcard patterns to regular expressions. | | `sorted` | logical indicating if the resulting `<character>` should be sorted alphabetically. Note that this part of `ls()` may take most of the time. | ### Details The `name` argument can specify the environment from which object names are taken in one of several forms: as an integer (the position in the `<search>` list); as the character string name of an element in the search list; or as an explicit `<environment>` (including using `[sys.frame](sys.parent)` to access the currently active function calls). By default, the environment of the call to `ls` or `objects` is used. The `pos` and `envir` arguments are an alternative way to specify an environment, but are primarily there for back compatibility. Note that the *order* of strings for `sorted = TRUE` is locale dependent, see `[Sys.getlocale](locales)`. If `sorted = FALSE` the order is arbitrary, depending on whether the environment is hashed, the order of insertion of objects, .... ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[glob2rx](../../utils/html/glob2rx)` for converting wildcard patterns to regular expressions. `[ls.str](../../utils/html/ls_str)` for a long listing based on `[str](../../utils/html/str)`. `[apropos](../../utils/html/apropos)` (or `[find](../../utils/html/apropos)`) for finding objects in the whole search path; `<grep>` for more details on ‘regular expressions’; `<class>`, `[methods](../../utils/html/methods)`, etc., for object-oriented programming. ### Examples ``` .Ob <- 1 ls(pattern = "O") ls(pattern = "O", all.names = TRUE) # also shows ".[foo]" # shows an empty list because inside myfunc no variables are defined myfunc <- function() {ls()} myfunc() # define a local variable inside myfunc myfunc <- function() {y <- 1; ls()} myfunc() # shows "y" ```
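As mentioned under `pattern`, wildcard patterns can be converted with `glob2rx()`; a small sketch continuing the example above:

```
## the wildcard "my*" becomes the regular expression "^my"
ls(pattern = utils::glob2rx("my*")) # "myfunc"
```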
r None `cut.POSIXt` Convert a Date or Date-Time Object to a Factor ------------------------------------------------------------ ### Description Method for `<cut>` applied to date-time objects. ### Usage ``` ## S3 method for class 'POSIXt' cut(x, breaks, labels = NULL, start.on.monday = TRUE, right = FALSE, ...) ## S3 method for class 'Date' cut(x, breaks, labels = NULL, start.on.monday = TRUE, right = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object inheriting from class `"POSIXt"` or `"Date"`. | | `breaks` | a vector of cut points *or* number giving the number of intervals which `x` is to be cut into *or* an interval specification, one of `"sec"`, `"min"`, `"hour"`, `"day"`, `"DSTday"`, `"week"`, `"month"`, `"quarter"` or `"year"`, optionally preceded by an integer and a space, or followed by `"s"`. (For `"Date"` objects only interval specifications using `"day"`, `"week"`, `"month"`, `"quarter"` and `"year"` are allowed.) | | `labels` | labels for the levels of the resulting category. By default, labels are constructed from the left-hand end of the intervals (which are included for the default value of `right`). If `labels = FALSE`, simple integer codes are returned instead of a factor. | | `start.on.monday` | logical. If `breaks = "weeks"`, should the week start on Mondays or Sundays? | | `right, ...` | arguments to be passed to or from other methods. | ### Details Note that the default for `right` differs from the [default method](cut). Using `include.lowest = TRUE` will include both ends of the range of dates. Using `breaks = "quarter"` will create intervals of 3 calendar months, with the intervals beginning on January 1, April 1, July 1 or October 1 (based upon `min(x)`) as appropriate. A vector of `breaks` will be sorted before use: `labels` should correspond to the sorted vector. ### Value A factor is returned, unless `labels = FALSE`, which returns the integer level codes. Values which fall outside the range of `breaks` are coded as `NA`, as are `NA` values. ### See Also `[seq.POSIXt](seq.posixt)`, `[seq.Date](seq.date)`, `<cut>` ### Examples ``` ## random dates in a 10-week period cut(ISOdate(2001, 1, 1) + 70*86400*stats::runif(100), "weeks") cut(as.Date("2001/1/1") + 70*stats::runif(100), "weeks") # The standards all have midnight as the start of the day, but some # people incorrectly interpret it as the end of the previous day ... tm <- seq(as.POSIXct("2012-06-01 06:00"), by = "6 hours", length.out = 24) aggregate(1:24, list(day = cut(tm, "days")), mean) # and a version with midnight included in the previous day: aggregate(1:24, list(day = cut(tm, "days", right = TRUE)), mean) ``` r None `findInterval` Find Interval Numbers or Indices ------------------------------------------------ ### Description Given a vector of non-decreasing breakpoints in `vec`, find the interval containing each element of `x`; i.e., if `i <- findInterval(x, v)`, for each index `j` in `x` *v[i[j]] ≤ x[j] < v[i[j] + 1]* where *v[0] := - Inf*, *v[N+1] := + Inf*, and `N <- length(v)`. At the two boundaries, the returned index may differ by 1, depending on the optional arguments `rightmost.closed` and `all.inside`. ### Usage ``` findInterval(x, vec, rightmost.closed = FALSE, all.inside = FALSE, left.open = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | numeric. | | `vec` | numeric, sorted (weakly) increasingly, of length `N`, say. | | `rightmost.closed` | logical; if true, the rightmost interval, `vec[N-1] .. vec[N]` is treated as *closed*, see below.
| | `all.inside` | logical; if true, the returned indices are coerced into `1,...,N-1`, i.e., `0` is mapped to `1` and `N` to `N-1`. | | `left.open` | logical; if true, all the intervals are open at left and closed at right; in the formulas below, *≤* should be swapped with *<* (and *>* with *≥*), and `rightmost.closed` means ‘leftmost is closed’. This may be useful, e.g., in survival analysis computations. | ### Details The function `findInterval` finds the index of one vector `x` in another, `vec`, where the latter must be non-decreasing. While this is trivially equivalent to `apply(outer(x, vec, ">="), 1, sum)`, the internal algorithm in fact uses interval search, ensuring *O(n \* log(N))* complexity where `n <- length(x)` (and `N <- length(vec)`). For (almost) sorted `x`, it will be even faster, basically *O(n)*. This is the same computation as for the empirical distribution function, and indeed, `findInterval(t, sort(X))` is *identical* to *n \* Fn(t; X[1],..,X[n])* where *Fn* is the empirical distribution function of *X[1],..,X[n]*. When `rightmost.closed = TRUE`, the result for `x[j] = vec[N]` ( *= max(vec)*), is `N - 1` as for all other values in the last interval. `left.open = TRUE` is occasionally useful, e.g., for survival data. For (anti-)symmetry reasons, it is equivalent to using “mirrored” data, i.e., the following is always true: ``` identical( findInterval( x, v, left.open= TRUE, ...) , N - findInterval(-x, -v[N:1], left.open=FALSE, ...) ) ``` where `N <- length(vec)` as above. ### Value vector of length `length(x)` with values in `0:N` (and `NA`) where `N <- length(vec)`, or values coerced to `1:(N-1)` if and only if `all.inside = TRUE` (equivalently coercing all x values *inside* the intervals). Note that `[NA](na)`s are propagated from `x`, and `[Inf](is.finite)` values are allowed in both `x` and `vec`. ### Author(s) Martin Maechler ### See Also `[approx](../../stats/html/approxfun)(*, method = "constant")` which is a generalization of `findInterval()`, `[ecdf](../../stats/html/ecdf)` for computing the empirical distribution function which is (up to a factor of *n*) also basically the same as `findInterval(.)`. ### Examples ``` x <- 2:18 v <- c(5, 10, 15) # create two bins [5,10) and [10,15) cbind(x, findInterval(x, v)) N <- 100 X <- sort(round(stats::rt(N, df = 2), 2)) tt <- c(-100, seq(-2, 2, length.out = 201), +100) it <- findInterval(tt, X) tt[it < 1 | it >= N] # only first and last are outside range(X) ## 'left.open = TRUE' means "mirroring" : N <- length(v) stopifnot(identical( findInterval( x, v, left.open=TRUE) , N - findInterval(-x, -v[N:1]))) ``` r None `Comparison` Relational Operators ---------------------------------- ### Description Binary operators which allow the comparison of values in atomic vectors. ### Usage ``` x < y x > y x <= y x >= y x == y x != y ``` ### Arguments | | | | --- | --- | | `x, y` | atomic vectors, symbols, calls, or other objects for which methods have been written. | ### Details The binary comparison operators are generic functions: methods can be written for them individually or via the `[Ops](groupgeneric)` group generic function. (See `[Ops](groupgeneric)` for how dispatch is computed.) Comparison of strings in character vectors is lexicographic within the strings using the collating sequence of the locale in use: see `<locales>`. The collating sequence of locales such as en\_US is normally different from C (which should use ASCII) and can be surprising.
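For example (the exact result is locale-dependent, as just described):

```
## typically TRUE in an en_* locale (case-insensitive collation),
## but FALSE in the C locale, where "B" (byte 0x42) sorts before "a" (0x61)
"a" < "B"
```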
Beware of making *any* assumptions about the collation order: e.g. in Estonian `Z` comes between `S` and `T`, and collation is not necessarily character-by-character – in Danish `aa` sorts as a single letter, after `z`. In Welsh `ng` may or may not be a single sorting unit: if it is it follows `g`. Some platforms may not respect the locale and always sort in numerical order of the bytes in an 8-bit locale, or in Unicode code-point order for a UTF-8 locale (and may not sort in the same order for the same language in different character sets). Collation of non-letters (spaces, punctuation signs, hyphens, fractions and so on) is even more problematic. Character strings can be compared with different marked encodings (see `[Encoding](encoding)`): they are translated to UTF-8 before comparison. Raw vectors should not really be considered to have an order, but the numeric order of the byte representation is used. At least one of `x` and `y` must be an atomic vector, but if the other is a list **R** attempts to coerce it to the type of the atomic vector: this will succeed if the list is made up of elements of length one that can be coerced to the correct type. If the two arguments are atomic vectors of different types, one is coerced to the type of the other, the (decreasing) order of precedence being character, complex, numeric, integer, logical and raw. Missing values (`[NA](na)`) and `[NaN](is.finite)` values are regarded as non-comparable even to themselves, so comparisons involving them will always result in `NA`. Missing values can also result when character strings are compared and one is not valid in the current collation locale. Language objects such as symbols and calls are deparsed to character strings before comparison. ### Value A logical vector indicating the result of the element by element comparison. The elements of shorter vectors are recycled as necessary. Objects such as arrays or time-series can be compared this way provided they are conformable. ### S4 methods These operators are members of the S4 `[Compare](../../methods/html/s4groupgeneric)` group generic, and so methods can be written for them individually as well as for the group generic (or the `Ops` group generic), with arguments `c(e1, e2)`. ### Note Do not use `==` and `!=` for tests, such as in `if` expressions, where you must get a single `TRUE` or `FALSE`. Unless you are absolutely sure that nothing unusual can happen, you should use the `<identical>` function instead. For numerical and complex values, remember `==` and `!=` do not allow for the finite representation of fractions, nor for rounding error. Using `<all.equal>` with `identical` or `[isTRUE](logic)` is almost always preferable; see the examples. (This also applies to the other comparison operators.) These operators are sometimes called as functions as e.g. ``<`(x, y)`: see the description of how argument-matching is done in `[Ops](groupgeneric)`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Collation of character strings is a complex topic. For an introduction see <https://en.wikipedia.org/wiki/Collating_sequence>. The *Unicode Collation Algorithm* (<https://unicode.org/reports/tr10/>) is likely to be increasingly influential. Where available **R** by default makes use of ICU (<http://site.icu-project.org/>) for collation (except in a C locale). ### See Also `[Logic](logic)` on how to *combine* results of comparisons, i.e., logical vectors. 
`<factor>` for the behaviour with factor arguments. `[Syntax](syntax)` for operator precedence. `<capabilities>` for whether ICU is available, and `[icuSetCollate](icusetcollate)` to tune the string collation algorithm when it is. ### Examples ``` x <- stats::rnorm(20) x < 1 x[x > 0] x1 <- 0.5 - 0.3 x2 <- 0.3 - 0.1 x1 == x2 # FALSE on most machines isTRUE(all.equal(x1, x2)) # TRUE everywhere # range of most 8-bit charsets, as well as of Latin-1 in Unicode z <- c(32:126, 160:255) x <- if(l10n_info()$MBCS) { intToUtf8(z, multiple = TRUE) } else rawToChar(as.raw(z), multiple = TRUE) ## by number writeLines(strwrap(paste(x, collapse=" "), width = 60)) ## by locale collation writeLines(strwrap(paste(sort(x), collapse=" "), width = 60)) ``` r None `list.files` List the Files in a Directory/Folder -------------------------------------------------- ### Description These functions produce a character vector of the names of files or directories in the named directory. ### Usage ``` list.files(path = ".", pattern = NULL, all.files = FALSE, full.names = FALSE, recursive = FALSE, ignore.case = FALSE, include.dirs = FALSE, no.. = FALSE) dir(path = ".", pattern = NULL, all.files = FALSE, full.names = FALSE, recursive = FALSE, ignore.case = FALSE, include.dirs = FALSE, no.. = FALSE) list.dirs(path = ".", full.names = TRUE, recursive = TRUE) ``` ### Arguments | | | | --- | --- | | `path` | a character vector of full path names; the default corresponds to the working directory, `<getwd>()`. Tilde expansion (see `<path.expand>`) is performed. Missing values will be ignored. Elements with a marked encoding will be converted to the native encoding (and if that fails, considered non-existent). | | `pattern` | an optional [regular expression](regex). Only file names which match the regular expression will be returned. | | `all.files` | a logical value. If `FALSE`, only the names of visible files are returned (following Unix-style visibility, that is files whose name does not start with a dot). If `TRUE`, all file names will be returned. | | `full.names` | a logical value. If `TRUE`, the directory path is prepended to the file names to give a relative file path. If `FALSE`, the file names (rather than paths) are returned. | | `recursive` | logical. Should the listing recurse into directories? | | `ignore.case` | logical. Should pattern-matching be case-insensitive? | | `include.dirs` | logical. Should subdirectory names be included in recursive listings? (They always are in non-recursive ones). | | `no..` | logical. Should both `"."` and `".."` be excluded also from non-recursive listings? | ### Value A character vector containing the names of the files in the specified directories (empty if there were no files). If a path does not exist or is not a directory or is unreadable it is skipped. The files are sorted in alphabetical order, on the full path if `full.names = TRUE`. `list.dirs` implicitly has `all.files = TRUE`, and if `recursive = TRUE`, the answer includes `path` itself (provided it is a readable directory). `dir` is an alias for `list.files`. ### Note File naming conventions are platform dependent. The pattern matching works with the case of file names as returned by the OS. On a POSIX filesystem recursive listings will follow symbolic links to directories. ### Author(s) Ross Ihaka, Brian Ripley ### See Also `<file.info>`, `<file.access>` and `<files>` for many more file handling functions and `<file.choose>` for interactive selection. 
`[glob2rx](../../utils/html/glob2rx)` to convert wildcards (as used by system file commands and shells) to regular expressions. `[Sys.glob](sys.glob)` for wildcard expansion on file paths. `<basename>` and `dirname`, useful for splitting paths into non-directory (aka ‘filename’) and directory parts. ### Examples ``` list.files(R.home()) ## Only files starting with a-l or r ## Note that a-l is locale-dependent, but using case-insensitive ## matching makes it unambiguous in English locales dir("../..", pattern = "^[a-lr]", full.names = TRUE, ignore.case = TRUE) list.dirs(R.home("doc")) list.dirs(R.home("doc"), full.names = FALSE) ``` r None `call` Function Calls ---------------------- ### Description Create or test for objects of `<mode>` `"call"` (or `"("`, see Details). ### Usage ``` call(name, ...) is.call(x) as.call(x) ``` ### Arguments | | | | --- | --- | | `name` | a non-empty character string naming the function to be called. | | `...` | arguments to be part of the call. | | `x` | an arbitrary **R** object. | ### Details `call` returns an unevaluated function call, that is, an unevaluated expression which consists of the named function applied to the given arguments (`name` must be a string which gives the name of a function to be called). Note that although the call is unevaluated, the arguments `...` are evaluated. `call` is a primitive, so the first argument is taken as `name` and the remaining arguments as arguments for the constructed call: if the first argument is named the name must partially match `name`. `is.call` is used to determine whether `x` is a call (i.e., of mode `"call"` or `"("`). Note that * `is.call(x)` is strictly equivalent to `typeof(x) == "language"`. * `<is.language>()` is also true for calls (but also for `[symbol](../../grdevices/html/plotmath)`s and `<expression>`s where `is.call()` is false). `as.call(x)`: Objects of mode `"list"` can be coerced to mode `"call"`. The first element of the list becomes the function part of the call, so should be a function or the name of one (as a symbol; a character string will not do). If you think of using `as.call(<string>)`, consider using `[str2lang](parse)(*)` which is an efficient version of `<parse>(text=*)`. Note that `<call>()` and `[as.call](call)()`, when applicable, are much preferable to these `<parse>()` based approaches. All three are <primitive> functions. `as.call` is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). ### Warning `call` should not be used to attempt to evade restrictions on the use of `.Internal` and other non-API calls. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<do.call>` for calling a function by name and argument list; `[Recall](recall)` for recursive calling of functions; further `<is.language>`, `<expression>`, `<function>`. Producing `<call>`s etc from character: `[str2lang](parse)` and `<parse>`. ### Examples ``` is.call(call) #-> FALSE: Functions are NOT calls ## set up a function call to round with argument 10.5 cl <- call("round", 10.5) is.call(cl) # TRUE cl identical(quote(round(10.5)), # <- less functional, but the same cl) # TRUE ## such a call can also be evaluated. 
eval(cl) # [1] 10 class(cl) # "call" typeof(cl)# "language" is.call(cl) && is.language(cl) # always TRUE for "call"s A <- 10.5 call("round", A) # round(10.5) call("round", quote(A)) # round(A) f <- "round" call(f, quote(A)) # round(A) ## if we want to supply a function we need to use as.call or similar f <- round ## Not run: call(f, quote(A)) # error: first arg must be character (g <- as.call(list(f, quote(A)))) eval(g) ## alternatively but less transparently g <- list(f, quote(A)) mode(g) <- "call" g eval(g) ## see also the examples in the help for do.call ``` r None `character` Character Vectors ------------------------------ ### Description Create or test for objects of type `"character"`. ### Usage ``` character(length = 0) as.character(x, ...) is.character(x) ``` ### Arguments | | | | --- | --- | | `length` | A non-negative integer specifying the desired length. Double values will be coerced to integer: supplying an argument of length other than one is an error. | | `x` | object to be coerced or tested. | | `...` | further arguments passed to or from other methods. | ### Details `as.character` and `is.character` are generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). Further, for `as.character` the default method calls `[as.vector](vector)`, so dispatch is first on methods for `as.character` and then for methods for `as.vector`. `as.character` represents real and complex numbers to 15 significant digits (technically the compiler's setting of the ISO C constant `DBL_DIG`, which will be 15 on machines supporting IEC60559 arithmetic according to the C99 standard). This ensures that all the digits in the result will be reliable (and not the result of representation error), but does mean that conversion to character and back to numeric may change the number. If you want to convert numbers to character with the maximum possible precision, use `<format>`. ### Value `character` creates a character vector of the specified length. The elements of the vector are all equal to `""`. `as.character` attempts to coerce its argument to character type; like `[as.vector](vector)` it strips attributes including names. For lists and pairlists (including [language objects](is.language) such as calls) it deparses the elements individually, except that it extracts the first element of length-one character vectors. `is.character` returns `TRUE` or `FALSE` depending on whether its argument is of character type or not. ### Note `as.character` breaks lines in language objects at 500 characters, and inserts newlines. Prior to 2.15.0 lines were truncated. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<options>`: option `scipen` affects the conversion of numbers. `<paste>`, `<substr>` and `<strsplit>` for character concatenation and splitting, `<chartr>` for character translation and casefolding (e.g., upper to lower case) and `[sub](grep)`, `<grep>` etc for string matching and substitutions. Note that `help.search(keyword = "character")` gives even more links. `<deparse>`, which is normally preferable to `as.character` for [language objects](is.language). `[Quotes](quotes)` on how to specify `character` / string constants, including *raw* ones. 
### Examples ``` form <- y ~ a + b + c as.character(form) ## length 3 deparse(form) ## like the input a0 <- 11/999 # has a repeating decimal representation (a1 <- as.character(a0)) format(a0, digits = 16) # shows one more digit a2 <- as.numeric(a1) a2 - a0 # normally around -1e-17 as.character(a2) # normally different from a1 print(c(a0, a2), digits = 16) ```
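A brief sketch of the constructor and the name-stripping described under ‘Value’, not shown in the examples above:

```
character(2) # c("", ""): the elements are empty strings
is.character(character(2)) # TRUE
as.character(c(a = 1, b = 2)) # "1" "2": names are stripped
```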
r None `timezones` Time Zones ----------------------- ### Description Information about time zones in **R**. `Sys.timezone` returns the name of the current time zone. ### Usage ``` Sys.timezone(location = TRUE) OlsonNames(tzdir = NULL) ``` ### Arguments | | | | --- | --- | | `location` | logical: defunct: ignored, with a warning for false values. | | `tzdir` | The time-zone database to be used: the default is to try known locations until one is found. | ### Details Time zones are a system-specific topic, but these days almost all **R** platforms use similar underlying code, used by Linux, macOS, Solaris, AIX and FreeBSD, and installed with **R** on Windows. (Unfortunately there are many system-specific errors in the implementations.) It is possible to use the **R** sources' version of the code on Unix-alikes as well as on Windows: this is the default for macOS and recommended for Solaris. It should be possible to set the current time zone via the environment variable TZ: see the section on ‘Time zone names’ for suitable values. `Sys.timezone()` will return the value of TZ if set initially (and on some OSes it is always set), otherwise it will try to retrieve from the OS a value which if set for TZ would give the initial time zone. (‘Initially’ means before any time-zone functions are used: if TZ is being set to override the OS setting or if the ‘try’ does not get this right, it should be set before the **R** process is started or (probably early enough) in file `[.Rprofile](startup)`). If TZ is set but invalid, most platforms default to UTC, the time zone colloquially known as GMT (see <https://en.wikipedia.org/wiki/Coordinated_Universal_Time>). (Some but not all platforms will give a warning for invalid values.) If it is unset or empty the *system* time zone is used (the one returned by `Sys.timezone`). Time zones did not come into use until the middle of the nineteenth century and were not widely adopted until the twentieth, and *daylight saving time* (DST, also known as *summer time*) was first introduced in the early twentieth century, most widely in 1916. Over the last 100 years places have changed their affiliation between major time zones, have opted out of (or in to) DST in various years or adopted DST rule changes late or not at all. (The UK experimented with DST throughout 1971, only.) In a few countries (one is the Irish Republic) it is the summer time which is the ‘standard’ time and a different name is used in winter. And there can be multiple changes during a year, for example for Ramadan. A quite common system implementation of `POSIXct` is as signed 32-bit integers and so only goes back to the end of 1901: on such systems **R** assumes that dates prior to that are in the same time zone as they were in 1902. Most of the world had not adopted time zones by 1902 (so used local ‘mean time’ based on longitude) but for a few places there had been time-zone changes before then. 64-bit representations are becoming common; unfortunately on some 64-bit OSes the database information is 32-bit and so only available for the range 1901–2038, and incompletely for the end years. As from **R** 3.5.0, when a time zone location is first found in a session, its value is cached in object `.sys.timezone` in the base environment. ### Value `Sys.timezone` returns an OS-specific character string, possibly `NA` or an empty string (which on some OSes means UTC). This will be a location such as `"Europe/London"` if one can be ascertained. 
A time zone region may be known by several names: for example "Europe/London" is also known as GB, GB-Eire, Europe/Belfast, Europe/Guernsey, Europe/Isle\_of\_Man and Europe/Jersey. A few regions are also known by a summary of their time zone, e.g. PST8PDT is an alias for America/Los\_Angeles. `OlsonNames` returns a character vector; see the examples for typical cases. It may have an attribute `"Version"`, something like "2020a". ### Time zone names The name `"UTC"` and its synonym `"GMT"` are accepted on all platforms. Where OSes describe their valid time zones can be obscure. The help for the C function `tzset` can be helpful, but it can also be inaccurate. There is a cumbersome POSIX specification (listed under environment variable TZ at <https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html#tag_08>), which is often at least partially supported, but there are other more user-friendly ways to specify time zones. Almost all **R** platforms make use of a time-zone database originally compiled by Arthur David Olson and now managed by IANA, in which the preferred way to refer to a time zone is by a location (typically of a city), e.g., `Europe/London`, `America/Los_Angeles`, `Pacific/Easter` within a ‘time zone region’. Some traditional designations are also allowed, such as `EST5EDT` or `GB`. (Beware that some of these designations may not be what you expect: in particular `EST` is a time zone used in Canada *without* daylight saving time, and not `EST5EDT` nor (Australian) Eastern Standard Time.) The designation can also be an optional colon prepended to the path to a file giving compiled zone information (and the examples above are all files in a system-specific location). See <https://data.iana.org/time-zones/tz-link.html> for more details and references. By convention, regions with a unique time-zone history since 1970 have specific names in the database, but those with different earlier histories may not. Each time zone has one or two (the second for DST) *abbreviations* used when formatting times. The abbreviations used have changed over the years: for example France used PMT (‘Paris Mean Time’) from 1891 to 1911 then WET/WEST up to 1940 and CET/CEST from 1946. (In almost all time zones the abbreviations have been stable since 1970.) The POSIX standard allows only one or two abbreviations per time zone, so you may see the current abbreviation(s) used for older times. For some time zones abbreviations are like -03 and +0845: this is done when there is no official abbreviation. (Negative values are behind (West of) UTC, as for the `"%z"` format for `[strftime](strptime)`.) The function `OlsonNames` returns the time-zone names known to the currently selected Olson/IANA database. The system-specific location in the file system varies, e.g. ‘/usr/share/zoneinfo’ (Linux, macOS, FreeBSD), ‘/usr/share/lib/zoneinfo’ (Solaris, AIX), .... It is likely that there is a file named something like ‘zone1970.tab’ or (older) ‘zone.tab’ under that directory listing the locations known as time-zone names (but not for example `EST5EDT`). See also <https://en.wikipedia.org/wiki/Zone.tab>. Where **R** was configured with option --with-internal-tzcode (the default on Windows: recommended on Solaris), the database at `file.path(R.home("share"), "zoneinfo")` is used by default: file ‘VERSION’ in that directory states the version.
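A small sketch of inspecting that database's version; the ‘VERSION’ file is only present where the **R**-internal database is installed, so the check below is illustrative rather than part of the API:

```
tzdir <- file.path(R.home("share"), "zoneinfo")
if (file.exists(file.path(tzdir, "VERSION")))
  readLines(file.path(tzdir, "VERSION")) # a version string such as "2022a"
```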
That option is also the default on macOS but there whichever is more recent of the system database at ‘/var/db/timezone/zoneinfo’ and that distributed with **R** is used by default. Environment variable TZDIR can be used to point to a different ‘zoneinfo’ database: value `"internal"` indicates the database from the **R** sources and `"macOS"` indicates the system database. (Setting either of those values would not be recognized by other software using TZDIR.) Setting TZDIR is also supported by the native services on some OSes, e.g. Linux using `glibc` except in secure modes. Time zones given by name (*via* environment variable TZ, in `tz` arguments to functions such as `[as.POSIXlt](as.posixlt)` and perhaps the system time zone) are loaded from the currently selected ‘zoneinfo’ database. Most platforms support time zones of the form Etc/GMT+n and Etc/GMT-n (possibly also without prefix Etc/), which assume a fixed offset from UTC (hence no DST). Contrary to some expectations (but consistent with names such as PST8PDT), negative offsets are times ahead of (east of) UTC, positive offsets are times behind (west of) UTC. Immediately prior to the advent of legislated time zones, most people used time based on their longitude (or that of a nearby town), known as ‘Local Mean Time’ and abbreviated as LMT in the databases: in many countries that was codified with a specific name before the switch to a standard time. For example, Paris codified its LMT as ‘Paris Mean Time’ in 1891 (to be used throughout mainland France) and switched to GMT+0 in 1911. Some systems (notably Linux) have a `tzselect` command which allows the interactive selection of a supported time zone name. On systems using `systemd` (notably Linux), the OS command `timedatectl list-timezones` will list all available time zone names. ### Warning There is a system-specific upper limit on the number of bytes in (abbreviated) time-zone names which can be as low as 6 (as required by POSIX). Some OSes allow the setting of time zones with names which exceed their limit, and that can crash the **R** session. `OlsonNames` tries to find an Olson database in known locations. It might not succeed (when it returns an empty vector with a warning) and even if it does it might not locate the database used by the date-time code linked into **R**. Fortunately names are added rarely and most databases are pretty complete. ### How the system time zone is found This section is of background interest for users of a Unix-alike, but may help if an `NA` value is returned unexpectedly. Commercial Unixen such as Solaris and AIX set TZ, so the value when **R** is started is used. All other common platforms (Linux, macOS, \*BSD) use similar schemes, either derived from `tzcode` (currently distributed from <https://www.iana.org/time-zones>) or independently coded (`glibc`, `musl-libc`). Such systems read the time-zone information from a file ‘localtime’, usually under ‘/etc’ (but possibly under ‘/usr/local/etc’ or ‘/usr/local/etc/zoneinfo’). As the usual Linux manual page for `localtime` says ‘Because the time zone identifier is extracted from the symlink target name of ‘/etc/localtime’, this file may not be a normal file or hardlink.’ Nevertheless, some Linux distributions (including the one from which that quote was taken) or sysadmins have chosen to copy a time-zone file to ‘localtime’. For a non-symlink, the ultimate fallback is to compare that file to all files in the time-zone database. 
Some Linux platforms provide two other mechanisms which are tried in turn before looking at ‘/etc/localtime’. * ‘Modern’ Linux systems use `systemd` which provides mechanisms to set and retrieve the time zone (amongst other things). There is a command `timedatectl` to give details. (Unfortunately RHEL/Centos 6.x are not ‘modern’.) * Debian-derived systems since *ca* 2007 have supplied a file ‘/etc/timezone’. Its format is undocumented but empirically it contains a single line of text naming the time zone. In each case a sanity check is performed that the time-zone name is the name of a file in the time-zone database. (The systems probably use the time-zone file (symlinked to) ‘/etc/localtime’, but the `Sys.timezone` code does not check that it is the same as the named file in the database. This is deliberate as they may be from different dates.) ### Note Since 2007 there has been considerable disruption over changes to the timings of the DST transitions, originally aimed at energy conservation. These often have short notice and time-zone databases may not be up to date. (Morocco in 2013 announced a change to the end of DST at *a day's* notice, and in 2015 North Korea gave imprecise information about a change a week in advance.) On platforms with case-insensitive file systems, time zone names will be case-insensitive. They may or may not be on other platforms and so, for example, `"gmt"` is valid on some platforms and not on others. Note that except where replaced, the operation of time zones is an OS service, and even where replaced a third-party database is used and can be updated (see the section on ‘Time zone names’). Incorrect results will never be an **R** issue, so please ensure that you have the courtesy not to blame **R** for them. ### See Also `[Sys.time](sys.time)`, `[as.POSIXlt](as.posixlt)`. <https://en.wikipedia.org/wiki/Time_zone> and <https://data.iana.org/time-zones/tz-link.html> for extensive sets of links. <https://data.iana.org/time-zones/theory.html> for the ‘rules’ of the Olson/IANA database. ### Examples ``` Sys.timezone() str(OlsonNames()) ## typically close to 600 names, ## typically some acronyms/aliases such as "UTC", "NZ", "MET", "Eire", ..., but ## mostly pairs (and triplets) such as "Pacific/Auckland" table(sl <- grepl("/", OlsonNames())) OlsonNames()[ !sl ] # the simple ones head(Osl <- strsplit(OlsonNames()[sl], "/")) (tOS1 <- table(vapply(Osl, `[[`, "", 1))) # Continents, countries, ... table(lengths(Osl)) # most are pairs, some triplets str(Osl[lengths(Osl) >= 3]) # "America" South and North ... ``` r None `sys.parent` Functions to Access the Function Call Stack --------------------------------------------------------- ### Description These functions provide access to `<environment>`s (‘frames’ in S terminology) associated with functions further up the calling stack. ### Usage ``` sys.call(which = 0) sys.frame(which = 0) sys.nframe() sys.function(which = 0) sys.parent(n = 1) sys.calls() sys.frames() sys.parents() sys.on.exit() sys.status() parent.frame(n = 1) ``` ### Arguments | | | | --- | --- | | `which` | the frame number if non-negative, the number of frames to go back if negative. | | `n` | the number of generations to go back. (See the ‘Details’ section.) | ### Details `[.GlobalEnv](environment)` is given number 0 in the list of frames. Each subsequent function evaluation increases the frame stack by 1.
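A minimal sketch of this numbering:

```
f <- function() sys.nframe()
f() # 1: f's evaluation frame is directly above .GlobalEnv (frame 0)
g <- function() f()
g() # 2: g evaluates in frame 1 and f in frame 2
```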
The call, function definition and the environment for evaluation of that function are returned by `sys.call`, `sys.function` and `sys.frame` with the appropriate index. `sys.call`, `sys.function` and `sys.frame` accept integer values for the argument `which`. Non-negative values of `which` are frame numbers starting from `[.GlobalEnv](environment)` whereas negative values are counted back from the frame number of the current evaluation. The parent frame of a function evaluation is the environment in which the function was called. It is not necessarily numbered one less than the frame number of the current evaluation, nor is it the environment within which the function was defined. `sys.parent` returns the number of the parent frame if `n` is 1 (the default), the grandparent if `n` is 2, and so on. See also the ‘Note’. `sys.nframe` returns an integer, the number of the current frame as described in the first paragraph. `sys.calls` and `sys.frames` give a pairlist of all the active calls and frames, respectively, and `sys.parents` returns an integer vector of indices of the parent frames of each of those frames. Notice that even though the `sys.`*xxx* functions (except `sys.status`) are interpreted, their contexts are not counted nor are they reported. There is no access to them. `sys.status()` returns a list with components `sys.calls`, `sys.parents` and `sys.frames`, the results of calls to those three functions (which will include the call to `sys.status`: see the first example). `sys.on.exit()` returns the expression stored for use by `<on.exit>` in the function currently being evaluated. (Note that this differs from S, which returns a list of expressions for the current frame and its parents.) `parent.frame(n)` is a convenient shorthand for `sys.frame(sys.parent(n))` (implemented slightly more efficiently). ### Value `sys.call` returns a call, `sys.function` a function definition, and `sys.frame` and `parent.frame` return an environment. For the other functions, see the ‘Details’ section. ### Note Strictly, `sys.parent` and `parent.frame` refer to the *context* of the parent interpreted function. So internal functions (which may or may not set contexts and so may or may not appear on the call stack) may not be counted, and S3 methods can also do surprising things. As an effect of lazy evaluation, these functions look at the call stack at the time they are evaluated, not at the time they are called. Passing calls to them as function arguments is unlikely to be a good idea, but these functions still look at the call stack and count frames from the frame of the function evaluation from which they were called. Hence, when these functions are called to provide default values for function arguments, they are evaluated in the evaluation of the called function and they count frames accordingly (see e.g. the `envir` argument of `<eval>`). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (Not `parent.frame`.) ### See Also `<eval>` for a usage of `sys.frame` and `parent.frame`. ### Examples ``` require(utils) ## Note: the first two examples will give different results ## if run by example(). 
ff <- function(x) gg(x)
gg <- function(y) sys.status()
str(ff(1))

gg <- function(y) {
  ggg <- function() {
    cat("current frame is", sys.nframe(), "\n")
    cat("parents are", sys.parents(), "\n")
    print(sys.function(0)) # ggg
    print(sys.function(2)) # gg
  }
  if(y > 0) gg(y-1) else ggg()
}
gg(3)

t1 <- function() {
  aa <- "here"
  t2 <- function() {
    ## in frame 2 here
    cat("current frame is", sys.nframe(), "\n")
    str(sys.calls()) ## list with two components t1() and t2()
    cat("parents are frame numbers", sys.parents(), "\n") ## 0 1
    print(ls(envir = sys.frame(-1))) ## [1] "aa" "t2"
    invisible()
  }
  t2()
}
t1()

test.sys.on.exit <- function() {
  on.exit(print(1))
  ex <- sys.on.exit()
  str(ex)
  cat("exiting...\n")
}
test.sys.on.exit() ## gives 'language print(1)', prints 1 on exit

## An example where the parent is not the next frame up the stack
## since method dispatch uses a frame.
as.double.foo <- function(x) {
  str(sys.calls())
  print(sys.frames())
  print(sys.parents())
  print(sys.frame(-1)); print(parent.frame())
  x
}
t2 <- function(x) as.double(x)
a <- structure(pi, class = "foo")
t2(a)
```

r None
`svd` Singular Value Decomposition of a Matrix
-----------------------------------------------

### Description

Compute the singular-value decomposition of a rectangular matrix.

### Usage

```
svd(x, nu = min(n, p), nv = min(n, p), LINPACK = FALSE)

La.svd(x, nu = min(n, p), nv = min(n, p))
```

### Arguments

| | |
| --- | --- |
| `x` | a numeric or complex matrix whose SVD decomposition is to be computed. Logical matrices are coerced to numeric. |
| `nu` | the number of left singular vectors to be computed. This must be between `0` and `n = nrow(x)`. |
| `nv` | the number of right singular vectors to be computed. This must be between `0` and `p = ncol(x)`. |
| `LINPACK` | logical. Defunct and an error. |

### Details

The singular value decomposition plays an important role in many statistical techniques. `svd` and `La.svd` provide two interfaces which differ in their return values.

Computing the singular vectors is the slow part for large matrices. The computation will be more efficient if both `nu <= min(n, p)` and `nv <= min(n, p)`, and even more so if both are zero.

Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code (most often `1`): these can only be interpreted by detailed study of the FORTRAN code but mean that the algorithm failed to converge.

### Value

The SVD decomposition of the matrix as computed by LAPACK, *\bold{X = U D V'},* where *\bold{U}* and *\bold{V}* are orthogonal, *\bold{V'}* means *V transposed* (and conjugated for complex input), and *\bold{D}* is a diagonal matrix with the (non-negative) singular values *D[i,i]* in decreasing order. Equivalently, *\bold{D = U' X V}*, which is verified in the examples.

The returned value is a list with components

| | |
| --- | --- |
| `d` | a vector containing the singular values of `x`, of length `min(n, p)`, sorted decreasingly. |
| `u` | a matrix whose columns contain the left singular vectors of `x`, present if `nu > 0`. Dimension `c(n, nu)`. |
| `v` | a matrix whose columns contain the right singular vectors of `x`, present if `nv > 0`. Dimension `c(p, nv)`. |

Recall that the singular vectors are only defined up to sign (a constant of modulus one in the complex case). If a left singular vector has its sign changed, changing the sign of the corresponding right vector gives an equivalent decomposition.

For `La.svd` the return value replaces `v` by `vt`, the (conjugated if complex) transpose of `v`.
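As a quick check of the `vt` convention (a minimal sketch; any small real matrix will do):

```
X <- matrix(rnorm(12), 3, 4)
s <- La.svd(X)
## X = U D vt, with the components exactly as returned by La.svd()
all.equal(X, s$u %*% diag(s$d) %*% s$vt)
```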
### Source The main functions used are the LAPACK routines `DGESDD` and `ZGESDD`. LAPACK is from <https://www.netlib.org/lapack/> and its guide is listed in the references. ### References Anderson. E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM. Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>. The [‘Singular-value decomposition’](https://en.wikipedia.org/wiki/Singular-value_decomposition) Wikipedia article. Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<eigen>`, `<qr>`. ### Examples ``` hilbert <- function(n) { i <- 1:n; 1 / outer(i - 1, i, "+") } X <- hilbert(9)[, 1:6] (s <- svd(X)) D <- diag(s$d) s$u %*% D %*% t(s$v) # X = U D V' t(s$u) %*% X %*% s$v # D = U' X V ```
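When only the singular values are wanted, the advice in ‘Details’ to skip the singular vectors can be followed directly; a small sketch reusing `hilbert()` from the examples above:

```
svd(hilbert(9)[, 1:6], nu = 0, nv = 0)$d # values only: no u, no v computed
```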
r None `invisible` Change the Print Mode to Invisible ----------------------------------------------- ### Description Return a (temporarily) invisible copy of an object. ### Usage ``` invisible(x) ``` ### Arguments | | | | --- | --- | | `x` | an arbitrary **R** object. | ### Details This function can be useful when it is desired to have functions return values which can be assigned, but which do not print when they are not assigned. This is a <primitive> function. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[withVisible](withvisible)`, `[return](function)`, `<function>`. ### Examples ``` # These functions both return their argument f1 <- function(x) x f2 <- function(x) invisible(x) f1(1) # prints f2(1) # does not ``` r None `rapply` Recursively Apply a Function to a List ------------------------------------------------ ### Description `rapply` is a recursive version of `<lapply>` with flexibility in *how* the result is structured (`how = ".."`). ### Usage ``` rapply(object, f, classes = "ANY", deflt = NULL, how = c("unlist", "replace", "list"), ...) ``` ### Arguments | | | | --- | --- | | `object` | a `<list>` or `<expression>`, i.e., “list-like”. | | `f` | a `<function>` of one “principal” argument, passing further arguments via `...`. | | `classes` | character vector of `<class>` names, or `"ANY"` to match any class. | | `deflt` | The default result (not used if `how = "replace"`). | | `how` | character string partially matching the three possibilities given: see ‘Details’. | | `...` | additional arguments passed to the call to `f`. | ### Details This function has two basic modes. If `how = "replace"`, each element of `object` which is not itself list-like and has a class included in `classes` is replaced by the result of applying `f` to the element. Otherwise, with mode `how = "list"` or `how = "unlist"`, conceptually `object` is copied, all non-list elements which have a class included in `classes` are replaced by the result of applying `f` to the element and all others are replaced by `deflt`. Finally, if `how = "unlist"`, `unlist(recursive = TRUE)` is called on the result. The semantics differ in detail from `<lapply>`: in particular the arguments are evaluated before calling the C code. In **R** 3.5.x and earlier, `object` was required to be a list, which was *not* the case for its list-like components. ### Value If `how = "unlist"`, a vector, otherwise “list-like” of similar structure as `object`. ### References Chambers, J. A. (1998) *Programming with Data*. Springer. (`rapply` is only described briefly there.) ### See Also `<lapply>`, `[dendrapply](../../stats/html/dendrapply)`. ### Examples ``` X <- list(list(a = pi, b = list(c = 1L)), d = "a test") # the "identity operation": rapply(X, function(x) x, how = "replace") -> X.; stopifnot(identical(X, X.)) rapply(X, sqrt, classes = "numeric", how = "replace") rapply(X, deparse, control = "all") # passing extras. 
argument of deparse() rapply(X, nchar, classes = "character", deflt = NA_integer_, how = "list") rapply(X, nchar, classes = "character", deflt = NA_integer_, how = "unlist") rapply(X, nchar, classes = "character", how = "unlist") rapply(X, log, classes = "numeric", how = "replace", base = 2) ## with expression() / list(): E <- expression(list(a = pi, b = expression(c = C1 * C2)), d = "a test") LE <- list(expression(a = pi, b = expression(c = C1 * C2)), d = "a test") rapply(E, nchar, how="replace") # "expression(c = C1 * C2)" are 23 chars rapply(E, nchar, classes = "character", deflt = NA_integer_, how = "unlist") rapply(LE, as.character) # a "pi" | b1 "expression" | b2 "C1 * C2" .. rapply(LE, nchar) # (see above) stopifnot(exprs = { identical(E , rapply(E , identity, how = "replace")) identical(LE, rapply(LE, identity, how = "replace")) }) ``` r None `reg.finalizer` Finalization of Objects ---------------------------------------- ### Description Registers an **R** function to be called upon garbage collection of object or (optionally) at the end of an **R** session. ### Usage ``` reg.finalizer(e, f, onexit = FALSE) ``` ### Arguments | | | | --- | --- | | `e` | Object to finalize. Must be an environment or an external pointer. | | `f` | Function to call on finalization. Must accept a single argument, which will be the object to finalize. | | `onexit` | logical: should the finalizer be run if the object is still uncollected at the end of the **R** session? | ### Details The main purpose of this function is to allow objects that refer to external items (a temporary file, say) to perform cleanup actions when they are no longer referenced from within **R**. This only makes sense for objects that are never copied on assignment, hence the restriction to environments and external pointers. *Inter alia*, it provides a way to program code to be run at the end of an **R** session without manipulating `[.Last](quit)`. For use in a package, it is often a good idea to set a finalizer on an object in the namespace: then it will be called at the end of the session, or soon after the namespace is unloaded if that is done during the session. ### Value `NULL`. ### Note **R**'s interpreter is not re-entrant and the finalizer could be run in the middle of a computation. So there are many functions which it is potentially unsafe to call from `f`: one example which caused trouble is `<options>`. Finalizers are scheduled at garbage collection but only run at a relatively safe time thereafter. ### See Also `<gc>` and `[Memory](memory)` for garbage collection and memory management. ### Examples ``` f <- function(e) print("cleaning....") g <- function(x){ e <- environment(); reg.finalizer(e, f) } g() invisible(gc()) # trigger cleanup ``` r None `list2env` From A List, Build or Add To an Environment ------------------------------------------------------- ### Description From a *named* `<list> x`, create an `<environment>` containing all list components as objects, or “multi-assign” from `x` into a pre-existing environment. ### Usage ``` list2env(x, envir = NULL, parent = parent.frame(), hash = (length(x) > 100), size = max(29L, length(x))) ``` ### Arguments | | | | --- | --- | | `x` | a `<list>`, where `<names>(x)` must not contain empty (`""`) elements. | | `envir` | an `<environment>` or `NULL`. | | `parent` | (for the case `envir = NULL`): a parent frame aka enclosing environment, see `[new.env](environment)`. 
| | `hash` | (for the case `envir = NULL`): logical indicating if the created environment should use hashing, see `[new.env](environment)`. | | `size` | (in the case `envir = NULL, hash = TRUE`): hash size, see `[new.env](environment)`. | ### Details This will be very slow for large inputs unless hashing is used on the environment. Environments must have uniquely named entries, but named lists need not: where the list has duplicate names it is the *last* element with the name that is used. Empty names throw an error. ### Value An `<environment>`, either newly created (as by `[new.env](environment)`) if the `envir` argument was `NULL`, otherwise the updated environment `envir`. Since environments are never duplicated, the argument `envir` is also changed. ### Author(s) Martin Maechler ### See Also `<environment>`, `[new.env](environment)`, `<as.environment>`; further, `<assign>`. The (semantical) “inverse”: `[as.list.environment](list)`. ### Examples ``` L <- list(a = 1, b = 2:4, p = pi, ff = gl(3, 4, labels = LETTERS[1:3])) e <- list2env(L) ls(e) stopifnot(ls(e) == sort(names(L)), identical(L$b, e$b)) # "$" working for environments as for lists ## consistency, when we do the inverse: ll <- as.list(e) # -> dispatching to the as.list.environment() method rbind(names(L), names(ll)) # not in the same order, typically, # but the same content: stopifnot(identical(L [sort.list(names(L ))], ll[sort.list(names(ll))])) ## now add to e -- can be seen as a fast "multi-assign": list2env(list(abc = LETTERS, note = "just an example", df = data.frame(x = rnorm(20), y = rbinom(20, 1, prob = 0.2))), envir = e) utils::ls.str(e) ``` r None `is.single` Is an Object of Single Precision Type? --------------------------------------------------- ### Description `is.single` reports an error. There are no single precision values in R. ### Usage ``` is.single(x) ``` ### Arguments | | | | --- | --- | | `x` | object to be tested. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. r None `Sys.which` Find Full Paths to Executables ------------------------------------------- ### Description This is an interface to the system command `which`, or to an emulation on Windows. ### Usage ``` Sys.which(names) ``` ### Arguments | | | | --- | --- | | `names` | Character vector of names or paths of possible executables. | ### Details The system command `which` reports on the full path names of an executable (including an executable script) as would be executed by a shell, accepting either absolute paths or looking on the path. On Windows an ‘executable’ is a file with extension ‘.exe’, ‘.com’, ‘.cmd’ or ‘.bat’. Such files need not actually be executable, but they are what `<system>` tries. On a Unix-alike the full path to `which` (usually ‘/usr/bin/which’) is found when **R** is installed. ### Value A character vector of the same length as `names`, named by `names`. The elements are either the full path to the executable or some indication that no executable of that name was found. Typically the indication is `""`, but this does depend on the OS (and the known exceptions are changed to `""`). Missing values in `names` have missing return values. On Windows the paths will be short paths (8+3 components, no spaces) with `\` as the path delimiter. ### Note Except on Windows this calls the system command `which`: since that is not part of e.g. the POSIX standards, exactly what it does is OS-dependent. 
It will usually do tilde-expansion and it may make use of `csh` aliases.

### Examples

```
## the first two are likely to exist everywhere
## texi2dvi exists on most Unix-alikes and under MiKTeX
Sys.which(c("ftp", "ping", "texi2dvi", "this-does-not-exist"))
```

r None
`l10n_info` Localization Information
-------------------------------------

### Description

Report on localization information.

### Usage

```
l10n_info()
```

### Details

‘A Latin-1 locale’ includes supersets (for printable characters) such as Windows codepage 1252 but not Latin-9 (ISO 8859-15).

On **Windows** (where the resulting list contains `codepage` and `system.codepage` components additionally), common codepages are 1252 (Western European), 1250 (Central European), 1251 (Cyrillic), 1253 (Greek), 1254 (Turkish), 1255 (Hebrew), 1256 (Arabic), 1257 (Baltic), 1258 (Vietnamese), 874 (Thai), 932 (Japanese), 936 (Simplified Chinese), 949 (Korean) and 950 (Traditional Chinese). Codepage 28605 is Latin-9 and 65001 is UTF-8 (where supported). **R** does not allow the C locale, and uses 1252 as the default codepage.

### Value

A list with three logical elements and further OS-specific elements:

| | |
| --- | --- |
| `MBCS` | Is a multi-byte character set in use? |
| `UTF-8` | Is this known to be a UTF-8 locale? |
| `Latin-1` | Is this known to be a Latin-1 locale? |

*Not* on Windows:

| | |
| --- | --- |
| `codeset` | character. The encoding name as reported by the OS, possibly `""`. (Added in **R** 4.1.0. Encoding names are OS-specific.) |

Only on Windows:

| | |
| --- | --- |
| `codepage` | integer: the Windows codepage corresponding to the locale **R** is using (and not necessarily that Windows is using). |
| `system.codepage` | integer: the Windows system/ANSI codepage (the codepage Windows is using). Added in **R** 4.1.0. |

### See Also

`[Sys.getlocale](locales)`, `[localeconv](sys.localeconv)`

### Examples

```
l10n_info()
```

r None
`colnames` Row and Column Names
--------------------------------

### Description

Retrieve or set the row or column names of a matrix-like object.

### Usage

```
rownames(x, do.NULL = TRUE, prefix = "row")
rownames(x) <- value

colnames(x, do.NULL = TRUE, prefix = "col")
colnames(x) <- value
```

### Arguments

| | |
| --- | --- |
| `x` | a matrix-like **R** object, with at least two dimensions for `colnames`. |
| `do.NULL` | logical. If `FALSE` and names are `NULL`, names are created. |
| `prefix` | for created names. |
| `value` | a valid value for that component of `<dimnames>(x)`. For a matrix or array this is either `NULL` or a character vector of non-zero length equal to the appropriate dimension. |

### Details

The extractor functions try to do something sensible for any matrix-like object `x`. If the object has `<dimnames>` the first component is used as the row names, and the second component (if any) is used for the column names. For a data frame, `rownames` and `colnames` eventually call `<row.names>` and `<names>` respectively, but the latter are preferred.

If `do.NULL` is `FALSE`, a character vector (of length `[NROW](nrow)(x)` or `[NCOL](nrow)(x)`) is returned in any case, prepending `prefix` to simple numbers, if there are no dimnames or the corresponding component of the dimnames is `NULL`.

The replacement methods for arrays/matrices coerce vector and factor values of `value` to character, but do not dispatch methods for `as.character`.
For a data frame, `value` for `rownames` should be a character vector of non-duplicated and non-missing names (this is enforced), and for `colnames` a character vector of (preferably) unique syntactically-valid names. In both cases, `value` will be coerced by `[as.character](character)`, and setting `colnames` will convert the row names to character. ### Note If the replacement versions are called on a matrix without any existing dimnames, they will add suitable dimnames. But constructions such as ``` rownames(x)[3] <- "c" ``` may not work unless `x` already has dimnames, since this will create a length-3 `value` from the `NULL` value of `rownames(x)`. ### See Also `<dimnames>`, `[case.names](../../stats/html/case.names)`, `[variable.names](../../stats/html/case.names)`. ### Examples ``` m0 <- matrix(NA, 4, 0) rownames(m0) m2 <- cbind(1, 1:4) colnames(m2, do.NULL = FALSE) colnames(m2) <- c("x","Y") rownames(m2) <- rownames(m2, do.NULL = FALSE, prefix = "Obs.") m2 ``` r None `dev` Lists of Open/Active Graphics Devices -------------------------------------------- ### Description A pairlist of the names of open graphics devices is stored in `.Devices`. The name of the active device (see `[dev.cur](../../grdevices/html/dev)`) is stored in `.Device`. Both are symbols and so appear in the base namespace. ### Value `.Device` is a length-one character vector. `.Devices` is a [pairlist](list) of length-one character vectors. The first entry is always `"null device"`, and there are as many entries as the maximal number of graphics devices which have been simultaneously active. If a device has been removed, its entry will be `""` until the device number is reused. Devices may add attributes to the character vector: for example devices which write to a file may record its path in attribute `"filepath"`. r None `Ops.Date` Operators on the Date Class --------------------------------------- ### Description Operators for the `"[Date](dates)"` class. There is an `[Ops](groupgeneric)` method and specific methods for `+` and `-` for the `[Date](dates)` class. ### Usage ``` date + x x + date date - x date1 lop date2 ``` ### Arguments | | | | --- | --- | | `date` | date objects | | `date1, date2` | date objects or character vectors. (Character vectors are converted by `[as.Date](as.date)`.) | | `x` | a numeric vector (in days) *or* an object of class `"<difftime>"`, rounded to the nearest whole day. | | `lop` | One of `==`, `!=`, `<`, `<=`, `>` or `>=`. | ### Details `x` does not need to be integer if specified as a numeric vector, but see the comments about fractional days in the help for `[Dates](dates)`. ### Examples ``` (z <- Sys.Date()) z + 10 z < c("2009-06-01", "2010-01-01", "2015-01-01") ``` r None `system2` Invoke a System Command ---------------------------------- ### Description `system2` invokes the OS command specified by `command`. ### Usage ``` system2(command, args = character(), stdout = "", stderr = "", stdin = "", input = NULL, env = character(), wait = TRUE, minimized = FALSE, invisible = TRUE, timeout = 0) ``` ### Arguments | | | | --- | --- | | `command` | the system command to be invoked, as a character string. | | `args` | a character vector of arguments to `command`. | | `stdout, stderr` | where output to ‘stdout’ or ‘stderr’ should be sent. Possible values are `""`, to the **R** console (the default), `NULL` or `FALSE` (discard output), `TRUE` (capture the output in a character vector) or a character string naming a file. | | `stdin` | should input be diverted? 
`""` means the default, alternatively a character string naming a file. Ignored if `input` is supplied. | | `input` | if a character vector is supplied, this is copied one string per line to a temporary file, and the standard input of `command` is redirected to the file. | | `env` | character vector of name=value strings to set environment variables. | | `wait` | a logical (not `NA`) indicating whether the **R** interpreter should wait for the command to finish, or run it asynchronously. This will be ignored (and the interpreter will always wait) if `stdout = TRUE` or `stderr = TRUE`. When running the command asynchronously, no output will be displayed on the `Rgui` console in Windows (it will be dropped, instead). | | `timeout` | timeout in seconds, ignored if 0. This is a limit for the elapsed time running `command` in a separate process. Fractions of seconds are ignored. | | `minimized, invisible` | arguments that are accepted on Windows but ignored on this platform, with a warning. | ### Details Unlike `<system>`, `command` is always quoted by `[shQuote](shquote)`, so it must be a single command without arguments. For details of how `command` is found see `<system>`. On Windows, `env` is only supported for commands such as `R` and `make` which accept environment variables on their command line. Some Unix commands (such as some implementations of `ls`) change their output if they consider it to be piped or redirected: `stdout = TRUE` uses a pipe whereas `stdout = "some_file_name"` uses redirection. Because of the way it is implemented, on a Unix-alike `stderr = TRUE` implies `stdout = TRUE`: a warning is given if this is not what was specified. When `timeout` is non-zero, the command is terminated after the given number of seconds. The termination works for typical commands, but is not guaranteed: it is possible to write a program that would keep running after the time is out. Timeouts can only be set with `wait = TRUE`. Timeouts cannot be used with interactive commands: the command is run with standard input redirected from `/dev/null` and it must not modify terminal settings. As long as tty `tostop` option is disabled, which it usually is by default, the executed command may write to standard output and standard error. ### Value If `stdout = TRUE` or `stderr = TRUE`, a character vector giving the output of the command, one line per character string. (Output lines of more than 8095 bytes will be split.) If the command could not be run an **R** error is generated. If `command` runs but gives a non-zero exit status this will be reported with a warning and in the attribute `"status"` of the result: an attribute `"errmsg"` may also be available. In other cases, the return value is an error code (`0` for success), given the <invisible> attribute (so needs to be printed explicitly). If the command could not be run for any reason, the value is `127` and a warning is issued (as from **R** 3.5.0). Otherwise if `wait = TRUE` the value is the exit status returned by the command, and if `wait = FALSE` it is `0` (the conventional success value). If the command times out, a warning is issued and the exit status is `124`. ### Note `system2` is a more portable and flexible interface than `<system>`. It allows redirection of output without needing to invoke a shell on Windows, a portable way to set environment variables for the execution of `command`, and finer control over the redirection of `stdout` and `stderr`. 
Conversely, `system` (and `shell` on Windows) allows the invocation of arbitrary command lines. There is no guarantee that if `stdout` and `stderr` are both `TRUE` or the same file that the two streams will be interleaved in order. This depends on both the buffering used by the command and the OS. ### See Also `<system>`.
r None
`sys.source` Parse and Evaluate Expressions from a File
--------------------------------------------------------

### Description

Parses expressions in the given file, and then successively evaluates them in the specified environment.

### Usage

```
sys.source(file, envir = baseenv(), chdir = FALSE,
           keep.source = getOption("keep.source.pkgs"),
           keep.parse.data = getOption("keep.parse.data.pkgs"),
           toplevel.env = as.environment(envir))
```

### Arguments

| | |
| --- | --- |
| `file` | a character string naming the file to be read from. |
| `envir` | an **R** object specifying the environment in which the expressions are to be evaluated. May also be a list or an integer. The default, `baseenv()`, corresponds to evaluation in the base environment. This is probably not what you want; you should typically supply an explicit `envir` argument. |
| `chdir` | logical; if `TRUE`, the **R** working directory is changed to the directory containing `file` for evaluating. |
| `keep.source` | logical. If `TRUE`, functions keep their source including comments, see `<options>(keep.source = *)` for more details. |
| `keep.parse.data` | logical. If `TRUE` and `keep.source` is also `TRUE`, functions keep parse data with their source, see `<options>(keep.parse.data = *)` for more details. |
| `toplevel.env` | an **R** environment to be used as top level while evaluating the expressions. This argument is useful for frameworks running package tests; the default should be used in other cases. |

### Details

For large files, `keep.source = FALSE` may save quite a bit of memory. Disabling only parse data via `keep.parse.data = FALSE` can already save a lot.

In order for the code being evaluated to use the correct environment (for example, in global assignments), source code in packages should call `[topenv](ns-topenv)()`, which will return the namespace, if any, the environment set up by `sys.source`, or the global environment if a saved image is being used.

### See Also

`<source>`, and `<library>` which uses `sys.source`.

### Examples

```
## a simple way to put some objects in an environment
## high on the search path
tmp <- tempfile()
writeLines("aaa <- pi", tmp)
env <- attach(NULL, name = "myenv")
sys.source(tmp, env)
unlink(tmp)
search()
aaa
detach("myenv")
```

r None
`strwrap` Wrap Character Strings to Format Paragraphs
------------------------------------------------------

### Description

Each character string in the input is first split into paragraphs (or lines containing whitespace only). The paragraphs are then formatted by breaking lines at word boundaries. The target columns for wrapping lines and the indentation of the first and all subsequent lines of a paragraph can be controlled independently.

### Usage

```
strwrap(x, width = 0.9 * getOption("width"), indent = 0,
        exdent = 0, prefix = "", simplify = TRUE, initial = prefix)
```

### Arguments

| | |
| --- | --- |
| `x` | a character vector, or an object which can be converted to a character vector by `[as.character](character)`. |
| `width` | a positive integer giving the target column for wrapping lines in the output. |
| `indent` | a non-negative integer giving the indentation of the first line in a paragraph. |
| `exdent` | a non-negative integer specifying the indentation of subsequent lines in paragraphs. |
| `prefix, initial` | a character string to be used as prefix for each line except the first, for which `initial` is used. |
| `simplify` | a logical. If `TRUE`, the result is a single character vector of line text; otherwise, it is a list of the same length as `x`, the elements of which are character vectors of line text obtained from the corresponding element of `x`. (Hence, the result in the former case is obtained by unlisting that of the latter.) |
### Details

Whitespace (space, tab or newline characters) in the input is destroyed. Double spaces after periods, question and exclamation marks (taken as representing sentence ends) are preserved. Currently, possible sentence ends at line breaks are not considered specially.

Indentation is relative to the number of characters in the prefix string.

### Value

A character vector (if `simplify` is `TRUE`), or a list of such character vectors, with declared input encodings preserved.

### Examples

```
## Read in file 'THANKS'.
x <- paste(readLines(file.path(R.home("doc"), "THANKS")), collapse = "\n")
## Split into paragraphs and remove the first three ones
x <- unlist(strsplit(x, "\n[ \t\n]*\n"))[-(1:3)]
## Join the rest
x <- paste(x, collapse = "\n\n")
## Now for some fun:
writeLines(strwrap(x, width = 60))
writeLines(strwrap(x, width = 60, indent = 5))
writeLines(strwrap(x, width = 60, exdent = 5))
writeLines(strwrap(x, prefix = "THANKS> "))

## Note that messages are wrapped AT the target column indicated by
## 'width' (and not beyond it).
## From an R-devel posting by J. Hosking <[email protected]>.
x <- paste(sapply(sample(10, 100, replace = TRUE),
           function(x) substring("aaaaaaaaaa", 1, x)), collapse = " ")
sapply(10:40,
       function(m) c(target = m, actual = max(nchar(strwrap(x, m)))))
```

r None
`isSymmetric` Test if a Matrix or other Object is Symmetric (Hermitian)
------------------------------------------------------------------------

### Description

Generic function to test if `object` is symmetric or not. Currently only a matrix method is implemented, where a `<complex>` matrix `Z` must be “Hermitian” for `isSymmetric(Z)` to be true.

### Usage

```
isSymmetric(object, ...)
## S3 method for class 'matrix'
isSymmetric(object, tol = 100 * .Machine$double.eps, tol1 = 8 * tol, ...)
```

### Arguments

| | |
| --- | --- |
| `object` | any **R** object; a `<matrix>` for the matrix method. |
| `tol` | numeric scalar >= 0. Smaller differences are not considered, see `[all.equal.numeric](all.equal)`. |
| `tol1` | numeric scalar >= 0. `isSymmetric.matrix()` ‘pre-tests’ the first and last few rows for fast detection of ‘obviously’ asymmetric cases with this tolerance. Setting it to length zero will skip the pre-tests. |
| `...` | further arguments passed to methods; the matrix method passes these to `<all.equal>`. If the row and column names of `object` are allowed to differ for the symmetry check do use `check.attributes = FALSE`! |

### Details

The `<matrix>` method is used inside `<eigen>` by default to test symmetry of matrices *up to rounding error*, using `<all.equal>`. It might not be appropriate in all situations.

Note that a matrix `m` is only symmetric if its `rownames` and `colnames` are identical. Consider using `<unname>(m)`.

### Value

logical indicating if `object` is symmetric or not.

### See Also

`<eigen>` which calls `isSymmetric` when its `symmetric` argument is missing.
### Examples

```
isSymmetric(D3 <- diag(3)) # -> TRUE

D3[2, 1] <- 1e-100
D3
isSymmetric(D3)          # TRUE
isSymmetric(D3, tol = 0) # FALSE for zero-tolerance

## Complex Matrices - Hermitian or not
Z <- sqrt(matrix(-1:2 + 0i, 2)); Z <- t(Conj(Z)) %*% Z
Z
isSymmetric(Z)      # TRUE
isSymmetric(Z + 1)  # TRUE
isSymmetric(Z + 1i) # FALSE -- a Hermitian matrix has a *real* diagonal

colnames(D3) <- c("X", "Y", "Z")
isSymmetric(D3)                          # FALSE (as row and column names differ)
isSymmetric(D3, check.attributes=FALSE)  # TRUE  (as names are not checked)
```

r None
`detach` Detach Objects from the Search Path
---------------------------------------------

### Description

Detach a database, i.e., remove it from the `<search>()` path of available **R** objects. Usually this is either a `<data.frame>` which has been `<attach>`ed or a package which was attached by `<library>`.

### Usage

```
detach(name, pos = 2L, unload = FALSE, character.only = FALSE,
       force = FALSE)
```

### Arguments

| | |
| --- | --- |
| `name` | The object to detach. Defaults to `search()[pos]`. This can be an unquoted name or a character string but *not* a character vector. If a number is supplied this is taken as `pos`. |
| `pos` | Index position in `<search>()` of the database to detach. When `name` is a number, `pos = name` is used. |
| `unload` | A logical value indicating whether or not to attempt to unload the namespace when a package is being detached. If the package has a namespace and `unload` is `TRUE`, then `detach` will attempt to unload the namespace *via* `[unloadNamespace](ns-load)`: if the namespace is imported by another namespace or `unload` is `FALSE`, no unloading will occur. |
| `character.only` | a logical indicating whether `name` can be assumed to be a character string. |
| `force` | logical: should a package be detached even though other attached packages depend on it? |

### Details

This is most commonly used with a single number argument referring to a position on the search list, and can also be used with an unquoted or quoted name of an item on the search list such as `package:tools`.

If a package has a namespace, detaching it does not by default unload the namespace (and may not even with `unload = TRUE`), and detaching will not in general unload any dynamically loaded compiled code (DLLs); see `[getLoadedDLLs](getloadeddlls)` and `[library.dynam.unload](library.dynam)`. Further, registered S3 methods from the namespace will not be removed, and because S3 methods are not tagged to their source on registration, it is in general not possible to safely un-register the methods associated with a given package. If you use `<library>` on a package whose namespace is loaded, it attaches the exports of the already loaded namespace. So detaching and re-attaching a package may not refresh some or all components of the package, and is inadvisable. The most reliable way to completely detach a package is to restart **R**.

### Value

The return value is <invisible>. It is `NULL` when a package is detached, otherwise the environment which was returned by `<attach>` when the object was attached (incorporating any changes since it was attached).

### Good practice

`detach()` without an argument removes the first item on the search path after the workspace. It is all too easy to call it too many or too few times, or to not notice that the search path has changed since an `<attach>` call. Use of `attach`/`detach` is best avoided in functions (see the help for `<attach>`) and in interactive use and scripts it is prudent to detach by name.
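A sketch of that pattern, locating the exact entry on the search path before detaching (the package is illustrative):

```
library(splines)
(nm <- grep("^package:splines$", search(), value = TRUE))
detach(nm, character.only = TRUE)
```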
### Note

You can detach neither the workspace (position 1) nor the base package (the last item in the search list); attempting to do so will throw an error.

Unloading some namespaces has undesirable side effects: e.g. unloading grid closes all graphics devices, and on some systems tcltk cannot be reloaded once it has been unloaded and may crash **R** if this is attempted.

### References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.

### See Also

`<attach>`, `<library>`, `<search>`, `[objects](ls)`, `[unloadNamespace](ns-load)`, `[library.dynam.unload](library.dynam)`.

### Examples

```
require(splines) # package
detach(package:splines)
## or also
library(splines)
pkg <- "package:splines"
detach(pkg, character.only = TRUE)

## careful: do not do this unless 'splines' is not already attached.
library(splines)
detach(2) # 'pos' used for 'name'

## an example of the name argument to attach
## and of detaching a database named by a character vector
attach_and_detach <- function(db, pos = 2) {
  name <- deparse1(substitute(db))
  attach(db, pos = pos, name = name)
  print(search()[pos])
  detach(name, character.only = TRUE)
}
attach_and_detach(women, pos = 3)
```

r None
`tilde` Tilde Operator
-----------------------

### Description

Tilde is used to separate the left- and right-hand sides in a model formula.

### Usage

```
y ~ model
```

### Arguments

| | |
| --- | --- |
| `y, model` | symbolic expressions. |

### Details

The left-hand side is optional, and one-sided formulae are used in some contexts.

A formula has <mode> `<call>`. It can be subsetted by `[[`: the components are `~`, the left-hand side (if present) and the right-hand side *in that order*.

### References

Chambers, J. M. and Hastie, T. J. (1992) *Statistical models.* Chapter 2 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.

### See Also

`[formula](../../stats/html/formula)`

r None
`prmatrix` Print Matrices, Old-style
-------------------------------------

### Description

An earlier method for printing matrices, provided for S compatibility.

### Usage

```
prmatrix(x, rowlab =, collab =, quote = TRUE, right = FALSE,
         na.print = NULL, ...)
```

### Arguments

| | |
| --- | --- |
| `x` | numeric or character matrix. |
| `rowlab, collab` | (optional) character vectors giving row or column names respectively. By default, these are taken from `<dimnames>(x)`. |
| `quote` | logical; if `TRUE` and `x` is of mode `"character"`, *quotes* (") are used. |
| `right` | if `TRUE` and `x` is of mode `"character"`, the output columns are *right*-justified. |
| `na.print` | how `NA`s are printed. If this is non-null, its value is used to represent `NA`. |
| `...` | arguments for `print` methods. |

### Details

`prmatrix` is an earlier form of `print.matrix`, and is very similar to the S function of the same name.

### Value

Invisibly returns its argument, `x`.

### References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.

### See Also

`<print.default>`, and other `<print>` methods.
### Examples ``` prmatrix(m6 <- diag(6), rowlab = rep("", 6), collab = rep("", 6)) chm <- matrix(scan(system.file("help", "AnIndex", package = "splines"), what = ""), , 2, byrow = TRUE) chm # uses print.matrix() prmatrix(chm, collab = paste("Column", 1:3), right = TRUE, quote = FALSE) ``` r None `Random` Random Number Generation ---------------------------------- ### Description `.Random.seed` is an integer vector, containing the random number generator (RNG) **state** for random number generation in **R**. It can be saved and restored, but should not be altered by the user. `RNGkind` is a more friendly interface to query or set the kind of RNG in use. `RNGversion` can be used to set the random generators as they were in an earlier **R** version (for reproducibility). `set.seed` is the recommended way to specify seeds. ### Usage ``` .Random.seed <- c(rng.kind, n1, n2, ...) RNGkind(kind = NULL, normal.kind = NULL, sample.kind = NULL) RNGversion(vstr) set.seed(seed, kind = NULL, normal.kind = NULL, sample.kind = NULL) ``` ### Arguments | | | | --- | --- | | `kind` | character or `NULL`. If `kind` is a character string, set **R**'s RNG to the kind desired. Use `"default"` to return to the **R** default. See ‘Details’ for the interpretation of `NULL`. | | `normal.kind` | character string or `NULL`. If it is a character string, set the method of Normal generation. Use `"default"` to return to the **R** default. `NULL` makes no change. | | `sample.kind` | character string or `NULL`. If it is a character string, set the method of discrete uniform generation (used in `<sample>`, for instance). Use `"default"` to return to the **R** default. `NULL` makes no change. | | `seed` | a single value, interpreted as an integer, or `NULL` (see ‘Details’). | | `vstr` | a character string containing a version number, e.g., `"1.6.2"`. The default RNG configuration of the current **R** version is used if `vstr` is greater than the current version. | | `rng.kind` | integer code in `0:k` for the above `kind`. | | `n1, n2, ...` | integers. See the details for how many are required (which depends on `rng.kind`). | ### Details The currently available RNG kinds are given below. `kind` is partially matched to this list. The default is `"Mersenne-Twister"`. `"Wichmann-Hill"` The seed, `.Random.seed[-1] == r[1:3]` is an integer vector of length 3, where each `r[i]` is in `1:(p[i] - 1)`, where `p` is the length 3 vector of primes, `p = (30269, 30307, 30323)`. The Wichmann–Hill generator has a cycle length of *6.9536e12* (= `prod(p-1)/4`, see *Applied Statistics* (1984) **33**, 123 which corrects the original article). `"Marsaglia-Multicarry"`: A *multiply-with-carry* RNG is used, as recommended by George Marsaglia in his post to the mailing list ‘sci.stat.math’. It has a period of more than *2^60* and has passed all tests (according to Marsaglia). The seed is two integers (all values allowed). `"Super-Duper"`: Marsaglia's famous Super-Duper from the 70's. This is the original version which does *not* pass the MTUPLE test of the Diehard battery. It has a period of *about 4.6\*10^18* for most initial seeds. The seed is two integers (all values allowed for the first seed: the second must be odd). We use the implementation by Reeds *et al* (1982–84). The two seeds are the Tausworthe and congruence long integers, respectively. A one-to-one mapping to S's `.Random.seed[1:12]` is possible but we will not publish one, not least as this generator is **not** exactly the same as that in recent versions of S-PLUS. 
`"Mersenne-Twister"`: From Matsumoto and Nishimura (1998); code updated in 2002. A twisted GFSR with period *2^19937 - 1* and equidistribution in 623 consecutive dimensions (over the whole period). The ‘seed’ is a 624-dimensional set of 32-bit integers plus a current position in that set. R uses its own initialization method due to B. D. Ripley and is not affected by the initialization issue in the 1998 code of Matsumoto and Nishimura addressed in a 2002 update. `"Knuth-TAOCP-2002"`: A 32-bit integer GFSR using lagged Fibonacci sequences with subtraction. That is, the recurrence used is *X[j] = (X[j-100] - X[j-37]) mod 2^30* and the ‘seed’ is the set of the 100 last numbers (actually recorded as 101 numbers, the last being a cyclic shift of the buffer). The period is around *2^129*. `"Knuth-TAOCP"`: An earlier version from Knuth (1997). The 2002 version was not backwards compatible with the earlier version: the initialization of the GFSR from the seed was altered. **R** did not allow you to choose consecutive seeds, the reported ‘weakness’, and already scrambled the seeds. Initialization of this generator is done in interpreted **R** code and so takes a short but noticeable time. `"L'Ecuyer-CMRG"`: A ‘combined multiple-recursive generator’ from L'Ecuyer (1999), each element of which is a feedback multiplicative generator with three integer elements: thus the seed is a (signed) integer vector of length 6. The period is around *2^191*. The 6 elements of the seed are internally regarded as 32-bit unsigned integers. Neither the first three nor the last three should be all zero, and they are limited to less than `4294967087` and `4294944443` respectively. This is not particularly interesting of itself, but provides the basis for the multiple streams used in package parallel. `"user-supplied"`: Use a user-supplied generator. See `[Random.user](random-user)` for details. `normal.kind` can be `"Kinderman-Ramage"`, `"Buggy Kinderman-Ramage"` (not for `set.seed`), `"Ahrens-Dieter"`, `"Box-Muller"`, `"Inversion"` (the default), or `"user-supplied"`. (For inversion, see the reference in `[qnorm](../../stats/html/normal)`.) The Kinderman-Ramage generator used in versions prior to 1.7.0 (now called `"Buggy"`) had several approximation errors and should only be used for reproduction of old results. The `"Box-Muller"` generator is stateful as pairs of normals are generated and returned sequentially. The state is reset whenever it is selected (even if it is the current normal generator) and when `kind` is changed. `sample.kind` can be `"Rounding"` or `"Rejection"`, or partial matches to these. The former was the default in versions prior to 3.6.0: it made `<sample>` noticeably non-uniform on large populations, and should only be used for reproduction of old results. See [PR#17494](https://bugs.R-project.org/bugzilla3/show_bug.cgi?id=17494) for a discussion. `set.seed` uses a single integer argument to set as many seeds as are required. It is intended as a simple way to get quite different seeds by specifying small integer arguments, and also as a way to get valid seed sets for the more complicated methods (especially `"Mersenne-Twister"` and `"Knuth-TAOCP"`). There is no guarantee that different values of `seed` will seed the RNG differently, although any exceptions would be extremely rare. If called with `seed = NULL` it re-initializes (see ‘Note’) as if no seed had yet been set. 
The use of `kind = NULL`, `normal.kind = NULL` or `sample.kind = NULL` in `RNGkind` or `set.seed` selects the currently-used generator (including that used in the previous session if the workspace has been restored): if no generator has been used it selects `"default"`. ### Value `.Random.seed` is an `<integer>` vector whose first element *codes* the kind of RNG and normal generator. The lowest two decimal digits are in `0:(k-1)` where `k` is the number of available RNGs. The hundreds represent the type of normal generator (starting at `0`), and the ten thousands represent the type of discrete uniform sampler. In the underlying C, `.Random.seed[-1]` is `unsigned`; therefore in **R** `.Random.seed[-1]` can be negative, due to the representation of an unsigned integer by a signed integer. `RNGkind` returns a three-element character vector of the RNG, normal and sample kinds selected *before* the call, invisibly if either argument is not `NULL`. A type starts a session as the default, and is selected either by a call to `RNGkind` or by setting `.Random.seed` in the workspace. (NB: prior to **R** 3.6.0 the first two kinds were returned in a two-element character vector.) `RNGversion` returns the same information as `RNGkind` about the defaults in a specific **R** version. `set.seed` returns `NULL`, invisibly. ### Note Initially, there is no seed; a new one is created from the current time and the process ID when one is required. Hence different sessions will give different simulation results, by default. However, the seed might be restored from a previous session if a previously saved workspace is restored. `.Random.seed` saves the seed set for the uniform random-number generator, at least for the system generators. It does not necessarily save the state of other generators, and in particular does not save the state of the Box–Muller normal generator. If you want to reproduce work later, call `set.seed` (preferably with explicit values for `kind` and `normal.kind`) rather than set `.Random.seed`. The object `.Random.seed` is only looked for in the user's workspace. Do not rely on randomness of low-order bits from RNGs. Most of the supplied uniform generators return 32-bit integer values that are converted to doubles, so they take at most *2^32* distinct values and long runs will return duplicated values (Wichmann-Hill is the exception, and all give at least 30 varying bits.) ### Author(s) of RNGkind: Martin Maechler. Current implementation, B. D. Ripley with modifications by Duncan Murdoch. ### References Ahrens, J. H. and Dieter, U. (1973). Extensions of Forsythe's method for random sampling from the normal distribution. *Mathematics of Computation*, **27**, 927–937. Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. (`set.seed`, storing in `.Random.seed`.) Box, G. E. P. and Muller, M. E. (1958). A note on the generation of normal random deviates. *Annals of Mathematical Statistics*, **29**, 610–611. doi: [10.1214/aoms/1177706645](https://doi.org/10.1214/aoms/1177706645). De Matteis, A. and Pagnutti, S. (1993). Long-range Correlation Analysis of the Wichmann-Hill Random Number Generator. *Statistics and Computing*, **3**, 67–70. doi: [10.1007/BF00153065](https://doi.org/10.1007/BF00153065). Kinderman, A. J. and Ramage, J. G. (1976). Computer generation of normal random variables. *Journal of the American Statistical Association*, **71**, 893–896. doi: [10.2307/2286857](https://doi.org/10.2307/2286857). Knuth, D. E. (1997). 
*The Art of Computer Programming*. Volume 2, third edition. Source code at <https://www-cs-faculty.stanford.edu/~knuth/taocp.html>. Knuth, D. E. (2002). *The Art of Computer Programming*. Volume 2, third edition, ninth printing. L'Ecuyer, P. (1999). Good parameters and implementations for combined multiple recursive random number generators. *Operations Research*, **47**, 159–164. doi: [10.1287/opre.47.1.159](https://doi.org/10.1287/opre.47.1.159). Marsaglia, G. (1997). *A random number generator for C*. Discussion paper, posting on Usenet newsgroup `sci.stat.math` on September 29, 1997. Marsaglia, G. and Zaman, A. (1994). Some portable very-long-period random number generators. *Computers in Physics*, **8**, 117–121. doi: [10.1063/1.168514](https://doi.org/10.1063/1.168514). Matsumoto, M. and Nishimura, T. (1998). Mersenne Twister: A 623-dimensionally equidistributed uniform pseudo-random number generator, *ACM Transactions on Modeling and Computer Simulation*, **8**, 3–30. Source code formerly at `http://www.math.keio.ac.jp/~matumoto/emt.html`. Now see <http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/VERSIONS/C-LANG/c-lang.html>. Reeds, J., Hubert, S. and Abrahams, M. (1982–4). C implementation of SuperDuper, University of California at Berkeley. (Personal communication from Jim Reeds to Ross Ihaka.) Wichmann, B. A. and Hill, I. D. (1982). Algorithm AS 183: An Efficient and Portable Pseudo-random Number Generator. *Applied Statistics*, **31**, 188–190; Remarks: **34**, 198 and **35**, 89. doi: [10.2307/2347988](https://doi.org/10.2307/2347988). ### See Also `<sample>` for random sampling with and without replacement. [Distributions](../../stats/html/distributions) for functions for random-variate generation from standard distributions. ### Examples ``` require(stats) ## Seed the current RNG, i.e., set the RNG status set.seed(42); u1 <- runif(30) set.seed(42); u2 <- runif(30) # the same because of identical RNG status: stopifnot(identical(u1, u2)) ## the default random seed is 626 integers, so only print a few runif(1); .Random.seed[1:6]; runif(1); .Random.seed[1:6] ## If there is no seed, a "random" new one is created: rm(.Random.seed); runif(1); .Random.seed[1:6] ok <- RNGkind() RNGkind("Wich") # (partial string matching on 'kind') ## This shows how 'runif(.)' works for Wichmann-Hill, ## using only R functions: p.WH <- c(30269, 30307, 30323) a.WH <- c( 171, 172, 170) next.WHseed <- function(i.seed = .Random.seed[-1]) { (a.WH * i.seed) %% p.WH } my.runif1 <- function(i.seed = .Random.seed) { ns <- next.WHseed(i.seed[-1]); sum(ns / p.WH) %% 1 } set.seed(1998-12-04)# (when the next lines were added to the souRce) rs <- .Random.seed (WHs <- next.WHseed(rs[-1])) u <- runif(1) stopifnot( next.WHseed(rs[-1]) == .Random.seed[-1], all.equal(u, my.runif1(rs)) ) ## ---- .Random.seed RNGkind("Super") # matches "Super-Duper" RNGkind() .Random.seed # new, corresponding to Super-Duper ## Reset: RNGkind(ok[1]) RNGversion(getRversion()) # the default version for this R version ## ---- sum(duplicated(runif(1e6))) # around 110 for default generator ## and we would expect about almost sure duplicates beyond about qbirthday(1 - 1e-6, classes = 2e9) # 235,000 ```
r None
`textconnections` Text Connections
-----------------------------------

### Description

Input and output text connections.

### Usage

```
textConnection(object, open = "r", local = FALSE,
               name = deparse(substitute(object)),
               encoding = c("", "bytes", "UTF-8"))

textConnectionValue(con)
```

### Arguments

| | |
| --- | --- |
| `object` | character. A description of the [connection](connections). For an input this is an **R** character vector object, and for an output connection the name for the **R** character vector to receive the output, or `NULL` (for none). |
| `open` | character string. Either `"r"` (or equivalently `""`) for an input connection or `"w"` or `"a"` for an output connection. |
| `local` | logical. Used only for output connections. If `TRUE`, output is assigned to a variable in the calling environment. Otherwise the global environment is used. |
| `name` | a `<character>` string specifying the connection name. |
| `encoding` | character string, partially matched. Used only for input connections. How marked strings in `object` should be handled: converted to the current locale, used byte-by-byte or translated to UTF-8. |
| `con` | An output text connection. |

### Details

An input text connection is opened and the character vector is copied at the time the connection object is created, and `close` destroys the copy. `object` should be the name of a character vector: however, short expressions will be accepted provided they <deparse> to less than 60 bytes.

An output text connection is opened and creates an **R** character vector of the given name in the user's workspace or in the calling environment, depending on the value of the `local` argument. This object will at all times hold the completed lines of output to the connection, and `[isIncomplete](connections)` will indicate if there is an incomplete final line. Closing the connection will output the final line, complete or not. (A line is complete once it has been terminated by end-of-line, represented by `"\n"` in **R**.) The output character vector has locked bindings (see `[lockBinding](bindenv)`) until `close` is called on the connection. The character vector can also be retrieved *via* `textConnectionValue`, which is the only way to do so if `object = NULL`.

If the current locale is detected as Latin-1 or UTF-8, non-ASCII elements of the character vector will be marked accordingly (see `Encoding`).

Opening a text connection with `mode = "a"` will attempt to append to an existing character vector with the given name in the user's workspace or the calling environment. If none is found (even if an object exists of the right name but the wrong type) a new character vector will be created, with a warning.

You cannot `seek` on a text connection, and `seek` will always return zero as the position.

Text connections have slightly unusual semantics: they are always open, and throwing away an input text connection without closing it (so it gets garbage-collected) does not give a warning.

### Value

For `textConnection`, a connection object of class `"textConnection"` which inherits from class `"connection"`.

For `textConnectionValue`, a character vector.

### Note

As output text connections keep the character vector up to date line-by-line, they are relatively expensive to use, and it is often better to use an anonymous `[file](connections)()` connection to collect output.
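A sketch of that alternative, using an anonymous `file()` connection which is rewound before reading back:

```
con <- file(open = "w+") # anonymous temporary file
writeLines(c("line 1", "line 2"), con)
seek(con, 0)             # rewind to the start before reading
readLines(con)
close(con)
```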
On (rare) platforms where `vsnprintf` does not return the needed length of output there is a 100,000 character limit on the length of line for output connections: longer lines will be truncated with a warning. ### References Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language.* Springer. [S has input text connections only.] ### See Also `<connections>`, `[showConnections](showconnections)`, `[pushBack](pushback)`, `[capture.output](../../utils/html/capture.output)`. ### Examples ``` zz <- textConnection(LETTERS) readLines(zz, 2) scan(zz, "", 4) pushBack(c("aa", "bb"), zz) scan(zz, "", 4) close(zz) zz <- textConnection("foo", "w") writeLines(c("testit1", "testit2"), zz) cat("testit3 ", file = zz) isIncomplete(zz) cat("testit4\n", file = zz) isIncomplete(zz) close(zz) foo # capture R output: use part of example from help(lm) zz <- textConnection("foo", "w") ctl <- c(4.17, 5.58, 5.18, 6.11, 4.5, 4.61, 5.17, 4.53, 5.33, 5.14) trt <- c(4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69) group <- gl(2, 10, 20, labels = c("Ctl", "Trt")) weight <- c(ctl, trt) sink(zz) anova(lm.D9 <- lm(weight ~ group)) cat("\nSummary of Residuals:\n\n") summary(resid(lm.D9)) sink() close(zz) cat(foo, sep = "\n") ``` r None `Quotes` Quotes ---------------- ### Description Descriptions of the various uses of quoting in **R**. ### Details Three types of quotes are part of the syntax of **R**: single and double quotation marks and the backtick (or back quote, `). In addition, backslash is used to escape the following character inside character constants. ### Character constants Single and double quotes delimit character constants. They can be used interchangeably but double quotes are preferred (and character constants are printed using double quotes), so single quotes are normally only used to delimit character constants containing double quotes. Backslash is used to start an escape sequence inside character constants. Escaping a character not in the following table is an error. Single quotes need to be escaped by backslash in single-quoted strings, and double quotes in double-quoted strings. | | | | --- | --- | | \n | newline | | \r | carriage return | | \t | tab | | \b | backspace | | \a | alert (bell) | | \f | form feed | | \v | vertical tab | | \\ | backslash \ | | \' | ASCII apostrophe ' | | \" | ASCII quotation mark " | | \` | ASCII grave accent (backtick) ` | | \nnn | character with given octal code (1, 2 or 3 digits) | | \xnn | character with given hex code (1 or 2 hex digits) | | \unnnn | Unicode character with given code (1--4 hex digits) | | \Unnnnnnnn | Unicode character with given code (1--8 hex digits) | | | Alternative forms for the last two are \u{nnnn} and \U{nnnnnnnn}. All except the Unicode escape sequences are also supported when reading character strings by `<scan>` and `[read.table](../../utils/html/read.table)` if `allowEscapes = TRUE`. Unicode escapes can be used to enter Unicode characters not in the current locale's charset (when the string will be stored internally in UTF-8). As from **R** 4.1.0 the largest allowed \U value is \U10FFFF, the maximum Unicode point. The parser does not allow the use of both octal/hex and Unicode escapes in a single string. These forms will also be used by `<print.default>` when outputting non-printable characters (including backslash). Embedded nuls are not allowed in character strings, so using escapes (such as \0) for a nul will result in the string being truncated at that point (usually with a warning). 
Raw character constants are also available using a syntax similar to the one used in C++: `r"(...)"` with `...` any character sequence, except that it must not contain the closing sequence )". The delimiter pairs `[]` and `{}` can also be used, and `R` can be used in place of `r`. For additional flexibility, a number of dashes can be placed between the opening quote and the opening delimiter, as long as the same number of dashes appear between the closing delimiter and the closing quote. ### Names and Identifiers Identifiers consist of a sequence of letters, digits, the period (`.`) and the underscore. They must not start with a digit nor underscore, nor with a period followed by a digit. [Reserved](reserved) words are not valid identifiers. The definition of a *letter* depends on the current locale, but only ASCII digits are considered to be digits. Such identifiers are also known as *syntactic names* and may be used directly in **R** code. Almost always, other names can be used provided they are quoted. The preferred quote is the backtick (`), and `<deparse>` will normally use it, but under many circumstances single or double quotes can be used (as a character constant will often be converted to a name). One place where backticks may be essential is to delimit variable names in formulae: see `[formula](../../stats/html/formula)`. ### Note UTF-16 surrogate pairs in \unnnn\uoooo form will be converted to a single Unicode point, so for example \uD834\uDD1E gives the single character \U1D11E. However, unpaired values in the surrogate range such as in the string `"abc\uD834de"` will be converted to a non-standard-conformant UTF-8 string (as is done by most other software): this may change in future. ### See Also `[Syntax](syntax)` for other aspects of the syntax. `[sQuote](squote)` for quoting English text. `[shQuote](shquote)` for quoting OS commands. The ‘R Language Definition’ manual. ### Examples ``` 'single quotes can be used more-or-less interchangeably' "with double quotes to create character vectors" ## Single quotes inside single-quoted strings need backslash-escaping. ## Ditto double quotes inside double-quoted strings. ## identical('"It\'s alive!", he screamed.', "\"It's alive!\", he screamed.") # same ## Backslashes need doubling, or they have a special meaning. x <- "In ALGOL, you could do logical AND with /\\." print(x) # shows it as above ("input-like") writeLines(x) # shows it as you like it ;-) ## Single backslashes followed by a letter are used to denote ## special characters like tab(ulator)s and newlines: x <- "long\tlines can be\nbroken with newlines" writeLines(x) # see also ?strwrap ## Backticks are used for non-standard variable names. ## (See make.names and ?Reserved for what counts as ## non-standard.) `x y` <- 1:5 `x y` d <- data.frame(`1st column` = rchisq(5, 2), check.names = FALSE) d$`1st column` ## Backslashes followed by up to three numbers are interpreted as ## octal notation for ASCII characters. "\110\145\154\154\157\40\127\157\162\154\144\41" ## \x followed by up to two numbers is interpreted as ## hexadecimal notation for ASCII characters. (hw1 <- "\x48\x65\x6c\x6c\x6f\x20\x57\x6f\x72\x6c\x64\x21") ## Mixing octal and hexadecimal in the same string is OK (hw2 <- "\110\x65\154\x6c\157\x20\127\x6f\162\x6c\144\x21") ## \u is also hexadecimal, but supports up to 4 digits, ## using Unicode specification. In the previous example, ## you can simply replace \x with \u. 
(hw3 <- "\u48\u65\u6c\u6c\u6f\u20\u57\u6f\u72\u6c\u64\u21") ## The last three are all identical to hw <- "Hello World!" stopifnot(identical(hw, hw1), identical(hw1, hw2), identical(hw2, hw3)) ## Using Unicode makes more sense for non-latin characters. (nn <- "\u0126\u0119\u1114\u022d\u2001\u03e2\u0954\u0f3f\u13d3\u147b\u203c") ## Mixing \x and \u throws a _parse_ error (which is not catchable!) ## Not run: "\x48\u65\x6c\u6c\x6f\u20\x57\u6f\x72\u6c\x64\u21" ## End(Not run) ## --> Error: mixing Unicode and octal/hex escapes ..... ## \U works like \u, but supports up to six hex digits. ## So we can replace \u with \U in the previous example. n2 <- "\U0126\U0119\U1114\U022d\U2001\U03e2\U0954\U0f3f\U13d3\U147b\U203c" stopifnot(identical(nn, n2)) ## Under systems supporting multi-byte locales (and not Windows), ## \U also supports the rarer characters outside the usual 16^4 range. ## See the R language manual, ## https://cran.r-project.org/doc/manuals/r-release/R-lang.html#Literal-constants ## and bug 16098 https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=16098 ## This character may or not be printable (the platform decides) ## and if it is, may not have a glyph in the font used. "\U1d4d7" # On Windows this used to give the incorrect value of "\Ud4d7" ## nul characters (for terminating strings in C) are not allowed (parse errors) ## Not run: "foo\0bar" # Error: nul character not allowed (line 1) "foo\u0000bar" # same error ## End(Not run) ## A Windows path written as a raw string constant: r"(c:\Program files\R)" ## More raw strings: r"{(\1\2)}" r"(use both "double" and 'single' quotes)" r"---(\1--)-)---" ``` r None `chol2inv` Inverse from Choleski (or QR) Decomposition ------------------------------------------------------- ### Description Invert a symmetric, positive definite square matrix from its Choleski decomposition. Equivalently, compute *(X'X)^(-1)* from the (*R* part) of the QR decomposition of *X*. ### Usage ``` chol2inv(x, size = NCOL(x), LINPACK = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a matrix. The first `size` columns of the upper triangle contain the Choleski decomposition of the matrix to be inverted. | | `size` | the number of columns of `x` containing the Choleski decomposition. | | `LINPACK` | logical. Defunct and gives an error. | ### Value The inverse of the matrix whose Choleski decomposition was given. Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code. ### Source This is an interface to the LAPACK routine `DPOTRI`. LAPACK is from <https://www.netlib.org/lapack/> and its guide is listed in the references. ### References Anderson. E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM. Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>. Dongarra, J. J., Bunch, J. R., Moler, C. B. and Stewart, G. W. (1978) *LINPACK Users Guide*. Philadelphia: SIAM Publications. ### See Also `<chol>`, `<solve>`. ### Examples ``` cma <- chol(ma <- cbind(1, 1:3, c(1,3,7))) ma %*% chol2inv(cma) ``` r None `strrep` Repeat the Elements of a Character Vector --------------------------------------------------- ### Description Repeat the character strings in a character vector a given number of times (i.e., concatenate the respective numbers of copies of the strings). 
### Usage

```
strrep(x, times)
```

### Arguments | | | | --- | --- | | `x` | a character vector, or an object which can be coerced to a character vector using `as.character`. | | `times` | an integer vector giving the (non-negative) numbers of times to repeat the respective elements of `x`. | ### Details The elements of `x` and `times` will be recycled as necessary (if one has no elements, an empty character vector is returned). Missing elements in `x` or `times` result in missing elements of the return value. ### Value A character vector with the elements of the given character vector repeated the given numbers of times. ### Examples

```
strrep("ABC", 2)
strrep(c("A", "B", "C"), 1 : 3)
## Create vectors with the given numbers of spaces:
strrep(" ", 1 : 5)
```

r None `srcfile` References to Source Files and Code ---------------------------------------------- ### Description These functions are for working with source files and more generally with “source references” (`"srcref"`), i.e., references to source code. The resulting data is used for printing and source level debugging, and is typically available in interactive **R** sessions, namely when `<options>(keep.source = TRUE)`. ### Usage

```
srcfile(filename, encoding = getOption("encoding"), Enc = "unknown")
srcfilecopy(filename, lines, timestamp = Sys.time(), isFile = FALSE)
srcfilealias(filename, srcfile)

getSrcLines(srcfile, first, last)

srcref(srcfile, lloc)

## S3 method for class 'srcfile'
print(x, ...)
## S3 method for class 'srcfile'
summary(object, ...)
## S3 method for class 'srcfile'
open(con, line, ...)
## S3 method for class 'srcfile'
close(con, ...)

## S3 method for class 'srcref'
print(x, useSource = TRUE, ...)
## S3 method for class 'srcref'
summary(object, useSource = FALSE, ...)
## S3 method for class 'srcref'
as.character(x, useSource = TRUE, to = x, ...)

.isOpen(srcfile)
```

### Arguments | | | | --- | --- | | `filename` | The name of a file. | | `encoding` | The character encoding to assume for the file. | | `Enc` | The encoding with which to make strings: see the `encoding` argument of `<parse>`. | | `lines` | A character vector of source lines. Other **R** objects will be coerced to character. | | `timestamp` | The timestamp to use on a copy of a file. | | `isFile` | Is this `srcfilecopy` known to come from a file system file? | | `srcfile` | A `srcfile` object. | | `first, last, line` | Line numbers. | | `lloc` | A vector of four, six or eight values giving a source location; see ‘Details’. | | `x, object, con` | An object of the appropriate class. | | `useSource` | Whether to read the `srcfile` to obtain the text of a `srcref`. | | `to` | An optional second `srcref` object to mark the end of the character range. | | `...` | Additional arguments to the methods; these will be ignored. | ### Details These functions and classes handle source code references. The `srcfile` function produces an object of class `srcfile`, which contains the name and directory of a source code file, along with its timestamp, for use in source level debugging (not yet implemented) and source echoing. The encoding of the file is saved; see `[file](connections)` for a discussion of encodings, and `[iconvlist](iconv)` for a list of allowable encodings on your platform. The `srcfilecopy` function produces an object of the descendant class `srcfilecopy`, which saves the source lines in a character vector. It copies the value of the `isFile` argument, to help debuggers identify whether this text comes from a real file in the file system.
The `srcfilealias` function produces an object of the descendant class `srcfilealias`, which gives an alternate name to another srcfile. This is produced by the parser when a `#line` directive is used. The `getSrcLines` function reads the specified lines from `srcfile`. The `srcref` function produces an object of class `srcref`, which describes a range of characters in a `srcfile`. The `lloc` value gives the following values: ``` c(first_line, first_byte, last_line, last_byte, first_column, last_column, first_parsed, last_parsed) ``` Bytes (elements 2, 4) and columns (elements 5, 6) may be different due to multibyte characters. If only four values are given, the columns and bytes are assumed to match. Lines (elements 1, 3) and parsed lines (elements 7, 8) may differ if a `#line` directive is used in code: the former will respect the directive, the latter will just count lines. If only 4 or 6 elements are given, the parsed lines will be assumed to match the lines. Methods are defined for `print`, `summary`, `open`, and `close` for classes `srcfile` and `srcfilecopy`. The `open` method opens its internal `[file](connections)` connection at a particular line; if it was already open, it will be repositioned to that line. Methods are defined for `print`, `summary` and `as.character` for class `srcref`. The `as.character` method will read the associated source file to obtain the text corresponding to the reference. If the `to` argument is given, it should be a second `srcref` that follows the first, in the same file; they will be treated as one reference to the whole range. The exact behaviour depends on the class of the source file. If the source file inherits from class `srcfilecopy`, the lines are taken from the saved copy using the “parsed” line counts. If not, an attempt is made to read the file, and the original line numbers of the `srcref` record (i.e., elements 1 and 3) are used. If an error occurs (e.g., the file no longer exists), text like `<srcref: "file" chars 1:1 to 2:10>` will be returned instead, indicating the `line:column` ranges of the first and last character. The `summary` method defaults to this type of display. Lists of `srcref` objects may be attached to expressions as the `"srcref"` attribute. (The list of `srcref` objects should be the same length as the expression.) By default, expressions are printed by `<print.default>` using the associated `srcref`. To see deparsed code instead, call `<print>` with argument `useSource = FALSE`. If a `srcref` object is printed with `useSource = FALSE`, the `<srcref: ...>` record will be printed. `.isOpen` is intended for internal use: it checks whether the connection associated with a `srcfile` object is open. ### Value `srcfile` returns a `srcfile` object. `srcfilecopy` returns a `srcfilecopy` object. `getSrcLines` returns a character vector of source code lines. `srcref` returns a `srcref` object. ### Author(s) Duncan Murdoch ### See Also `[getSrcFilename](../../utils/html/sourceutils)` for extracting information from a source reference, or `[removeSource](../../utils/html/removesource)` to remove it from a (non-primitive) function (aka ‘closure’). ### Examples ``` # has timestamp src <- srcfile(system.file("DESCRIPTION", package = "base")) summary(src) getSrcLines(src, 1, 4) ref <- srcref(src, c(1, 1, 2, 1000)) ref print(ref, useSource = FALSE) ```
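A further sketch (not part of the original examples) of how `srcref` objects produced by `parse()` tie back to a `srcfilecopy`:

```
exprs <- parse(text = c("f <- function(x)", "  x + 1"),
               keep.source = TRUE)
refs <- attr(exprs, "srcref")    # one srcref per top-level expression
refs[[1]]                        # printing shows the original source text
sf <- attr(refs[[1]], "srcfile") # the underlying srcfilecopy object
getSrcLines(sf, 1, 2)            # retrieve the stored source lines
```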
r None `Primitive` Look Up a Primitive Function ----------------------------------------- ### Description `.Primitive` looks up by name a ‘primitive’ (internally implemented) function. ### Usage ``` .Primitive(name) ``` ### Arguments | | | | --- | --- | | `name` | name of the **R** function. | ### Details The advantage of `.Primitive` over `[.Internal](internal)` functions is the potential efficiency of argument passing, and that positional matching can be used where desirable, e.g. in `<switch>`. For more details, see the ‘R Internals’ manual. All primitive functions are in the base namespace. This function is almost never used: ``name`` or, more carefully, `<get>(name, envir = baseenv())` work equally well and do not depend on knowing which functions are primitive (which does change as **R** evolves). ### See Also `[is.primitive](is.function)` showing that primitive functions come in two types (`<typeof>`), `[.Internal](internal)`. ### Examples ``` mysqrt <- .Primitive("sqrt") c .Internal # this one *must* be primitive! `if` # need backticks ``` r None `Sys.sleep` Suspend Execution for a Time Interval -------------------------------------------------- ### Description Suspend execution of **R** expressions for a specified time interval. ### Usage ``` Sys.sleep(time) ``` ### Arguments | | | | --- | --- | | `time` | The time interval to suspend execution for, in seconds. | ### Details Using this function allows **R** to temporarily be given very low priority and hence not to interfere with more important foreground tasks. A typical use is to allow a process launched from **R** to set itself up and read its input files before **R** execution is resumed. The intention is that this function suspends execution of **R** expressions but wakes the process up often enough to respond to GUI events, typically every half second. It can be interrupted (e.g. by Ctrl-C or Esc at the **R** console). There is no guarantee that the process will sleep for the whole of the specified interval (sleep might be interrupted), and it may well take slightly longer in real time to resume execution. `time` must be non-negative (and not `NA` nor `NaN`): `Inf` is allowed (and might be appropriate if the intention is to wait indefinitely for an interrupt). The resolution of the time interval is system-dependent, but will normally be 20ms or better. (On modern Unix-alikes it will be better than 1ms.) ### Value Invisible `NULL`. ### Note Despite its name, this is not currently implemented using the `sleep` system call (although on Windows it does make use of `Sleep`). ### Examples ``` testit <- function(x) { p1 <- proc.time() Sys.sleep(x) proc.time() - p1 # The cpu usage should be negligible } testit(3.7) ``` r None `colSums` Form Row and Column Sums and Means --------------------------------------------- ### Description Form row and column sums and means for numeric arrays (or data frames). ### Usage ``` colSums (x, na.rm = FALSE, dims = 1) rowSums (x, na.rm = FALSE, dims = 1) colMeans(x, na.rm = FALSE, dims = 1) rowMeans(x, na.rm = FALSE, dims = 1) .colSums(x, m, n, na.rm = FALSE) .rowSums(x, m, n, na.rm = FALSE) .colMeans(x, m, n, na.rm = FALSE) .rowMeans(x, m, n, na.rm = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | an array of two or more dimensions, containing numeric, complex, integer or logical values, or a numeric data frame. For `.colSums()` etc, a numeric, integer or logical matrix (or vector of length `m * n`). | | `na.rm` | logical. 
Should missing values (including `NaN`) be omitted from the calculations? | | `dims` | integer: Which dimensions are regarded as ‘rows’ or ‘columns’ to sum over. For `row*`, the sum or mean is over dimensions `dims+1, ...`; for `col*` it is over dimensions `1:dims`. | | `m, n` | the dimensions of the matrix `x` for `.colSums()` etc. | ### Details These functions are equivalent to use of `<apply>` with `FUN = mean` or `FUN = sum` with appropriate margins, but are a lot faster. As they are written for speed, they blur over some of the subtleties of `NaN` and `NA`. If `na.rm = FALSE` and either `NaN` or `NA` appears in a sum, the result will be one of `NaN` or `NA`, but which might be platform-dependent. Notice that omission of missing values is done on a per-column or per-row basis, so column means may not be over the same set of rows, and vice versa. To use only complete rows or columns, first select them with `[na.omit](../../stats/html/na.fail)` or `[complete.cases](../../stats/html/complete.cases)` (possibly on the transpose of `x`). The versions with an initial dot in the name (`.colSums()` etc) are ‘bare-bones’ versions for use in programming: they apply only to numeric (like) matrices and do not name the result. ### Value A numeric or complex array of suitable size, or a vector if the result is one-dimensional. For the first four functions the `dimnames` (or `names` for a vector result) are taken from the original array. If there are no values in a range to be summed over (after removing missing values with `na.rm = TRUE`), that component of the output is set to `0` (`*Sums`) or `NaN` (`*Means`), consistent with `<sum>` and `<mean>`. ### See Also `<apply>`, `<rowsum>` ### Examples ``` ## Compute row and column sums for a matrix: x <- cbind(x1 = 3, x2 = c(4:1, 2:5)) rowSums(x); colSums(x) dimnames(x)[[1]] <- letters[1:8] rowSums(x); colSums(x); rowMeans(x); colMeans(x) x[] <- as.integer(x) rowSums(x); colSums(x) x[] <- x < 3 rowSums(x); colSums(x) x <- cbind(x1 = 3, x2 = c(4:1, 2:5)) x[3, ] <- NA; x[4, 2] <- NA rowSums(x); colSums(x); rowMeans(x); colMeans(x) rowSums(x, na.rm = TRUE); colSums(x, na.rm = TRUE) rowMeans(x, na.rm = TRUE); colMeans(x, na.rm = TRUE) ## an array dim(UCBAdmissions) rowSums(UCBAdmissions); rowSums(UCBAdmissions, dims = 2) colSums(UCBAdmissions); colSums(UCBAdmissions, dims = 2) ## complex case x <- cbind(x1 = 3 + 2i, x2 = c(4:1, 2:5) - 5i) x[3, ] <- NA; x[4, 2] <- NA rowSums(x); colSums(x); rowMeans(x); colMeans(x) rowSums(x, na.rm = TRUE); colSums(x, na.rm = TRUE) rowMeans(x, na.rm = TRUE); colMeans(x, na.rm = TRUE) ``` r None `Platform` Platform Specific Variables --------------------------------------- ### Description `.Platform` is a list with some details of the platform under which **R** was built. This provides means to write OS-portable **R** code. ### Usage ``` .Platform ``` ### Value A list with at least the following components: | | | | --- | --- | | `OS.type` | character string, giving the **O**perating **S**ystem (family) of the computer. One of `"unix"` or `"windows"`. | | `file.sep` | character string, giving the **file** **sep**arator used on your platform: `"/"` on both Unix-alikes *and* on Windows (but not on the former port to Classic Mac OS). | | `dynlib.ext` | character string, giving the file name **ext**ension of **dyn**amically loadable **lib**raries, e.g., `".dll"` on Windows and `".so"` or `".sl"` on Unix-alikes. 
(Note for macOS users: these are shared objects as loaded by `[dyn.load](dynload)` and not dylibs: see `[dyn.load](dynload)`.) | | `GUI` | character string, giving the type of GUI in use, or `"unknown"` if no GUI can be assumed. Possible values are for Unix-alikes the values given via the -g command-line flag (`"X11"`, `"Tk"`), `"AQUA"` (running under `R.app` on macOS), `"Rgui"` and `"RTerm"` (Windows) and perhaps others under alternative front-ends or embedded **R**. | | `endian` | character string, `"big"` or `"little"`, giving the ‘endianness’ of the processor in use. This is relevant when it is necessary to know the order to read/write bytes of e.g. an integer or double from/to a [connection](connections): see `[readBin](readbin)`. | | `pkgType` | character string, the preferred setting for `<options>("pkgType")`. Values `"source"`, `"mac.binary"` and `"win.binary"` are currently in use. This should **not** be used to identify the OS. | | `path.sep` | character string, giving the **path** **sep**arator, used on your platform, e.g., `":"` on Unix-alikes and `";"` on Windows. Used to separate paths in environment variables such as `PATH` and `TEXINPUTS`. | | `r_arch` | character string, possibly `""`. The name of an architecture-specific directory used in this build of **R**. | ### AQUA `.Platform$GUI` is set to `"AQUA"` under the macOS GUI, `R.app`. This has a number of consequences: * ‘/usr/local/bin’ is *appended* to the PATH environment variable. * the default graphics device is set to `quartz`. * selects native (rather than Tk) widgets for the `graphics = TRUE` options of `[menu](../../utils/html/menu)` and `[select.list](../../utils/html/select.list)`. * HTML help is displayed in the internal browser. * the spreadsheet-like data editor/viewer uses a Quartz version rather than the X11 one. ### See Also `[R.version](version)` and `[Sys.info](sys.info)` give more details about the OS. In particular, `R.version$platform` is the canonical name of the platform under which **R** was compiled. `[.Machine](zmachine)` for details of the arithmetic used, and `<system>` for invoking platform-specific system commands. `<capabilities>` and `[extSoftVersion](extsoftversion)` (and links there) for availability of capabilities partly *external* to **R** but used from **R** functions. ### Examples ``` ## Note: this can be done in a system-independent way by dir.exists() if(.Platform$OS.type == "unix") { system.test <- function(...) system(paste("test", ...)) == 0L dir.exists2 <- function(dir) sapply(dir, function(d) system.test("-d", d)) dir.exists2(c(R.home(), "/tmp", "~", "/NO")) # > T T T F } ``` r None `data.class` Object Classes ---------------------------- ### Description Determine the class of an arbitrary **R** object. ### Usage ``` data.class(x) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object. | ### Value character string giving the *class* of `x`. The class is the (first element) of the `<class>` attribute if this is non-`NULL`, or inferred from the object's `dim` attribute if this is non-`NULL`, or `mode(x)`. Simply speaking, `data.class(x)` returns what is typically useful for method dispatching. (Or, what the basic creator functions already and maybe eventually all will attach as a class attribute.) ### Note For compatibility reasons, there is one exception to the rule above: When `x` is `<integer>`, the result of `data.class(x)` is `"numeric"` even when `x` is classed. 
### See Also `<class>` ### Examples

```
x <- LETTERS
data.class(factor(x))            # has a class attribute
data.class(matrix(x, ncol = 13)) # has a dim attribute
data.class(list(x))              # the same as mode(x)
data.class(x)                    # the same as mode(x)
stopifnot(data.class(1:2) == "numeric") # compatibility "rule"
```

r None `attach` Attach Set of R Objects to Search Path ------------------------------------------------ ### Description The database is attached to the **R** search path. This means that the database is searched by **R** when evaluating a variable, so objects in the database can be accessed by simply giving their names. ### Usage

```
attach(what, pos = 2L, name = deparse1(substitute(what), backtick=FALSE),
       warn.conflicts = TRUE)
```

### Arguments | | | | --- | --- | | `what` | ‘database’. This can be a `data.frame` or a `list` or an **R** data file created with `<save>` or `NULL` or an environment. See also ‘Details’. | | `pos` | integer specifying position in `<search>()` where to attach. | | `name` | name to use for the attached database. Names starting with `package:` are reserved for `<library>`. | | `warn.conflicts` | logical. If `TRUE`, warnings are printed about `<conflicts>` from attaching the database, unless that database contains an object `.conflicts.OK`. A conflict is a function masking a function, or a non-function masking a non-function. | ### Details When evaluating a variable or function name **R** searches for that name in the databases listed by `<search>`. The first name of the appropriate type is used. By attaching a data frame (or list) to the search path it is possible to refer to the variables in the data frame by their names alone, rather than as components of the data frame (e.g., in the example below, `height` rather than `women$height`). By default the database is attached in position 2 in the search path, immediately after the user's workspace and before all previously attached packages and previously attached databases. This can be altered to attach later in the search path with the `pos` option, but you cannot attach at `pos = 1`. The database is not actually attached. Rather, a new environment is created on the search path and the elements of a list (including columns of a data frame) or objects in a save file or an environment are *copied* into the new environment. If you use `[<<-](assignops)` or `<assign>` to assign to an attached database, you only alter the attached copy, not the original object. (Normal assignment will place a modified version in the user's workspace: see the examples.) For this reason `attach` can lead to confusion. One useful ‘trick’ is to use `what = NULL` (or equivalently a length-zero list) to create a new environment on the search path into which objects can be assigned by `<assign>` or `<load>` or `<sys.source>`. Names starting `"package:"` are reserved for `<library>` and should not be used by end users. Attached files are by default given the name `file:what`. The `name` argument given for the attached environment will be used by `<search>` and can be used as the argument to `<as.environment>`. There are hooks to attach user-defined table objects of class `"UserDefinedDatabase"`, supported by the Omegahat package RObjectTables. ### Value The `<environment>` is returned invisibly with a `"name"` attribute. ### Good practice `attach` has the side effect of altering the search path and this can easily lead to the wrong object of a particular name being found. People do often forget to `<detach>` databases.
In interactive use, `<with>` is usually preferable to the use of `attach`/`detach`, unless `what` is a `<save>()`-produced file in which case `attach()` is a (safety) wrapper for `<load>()`. In programming, functions should not change the search path unless that is their purpose. Often `<with>` can be used within a function. If not, good practice is to * Always use a distinctive `name` argument, and * To immediately follow the `attach` call by an `<on.exit>` call to `detach` using the distinctive name. This ensures that the search path is left unchanged even if the function is interrupted or if code after the `attach` call changes the search path. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<library>`, `<detach>`, `<search>`, `[objects](ls)`, `<environment>`, `<with>`. ### Examples ``` require(utils) summary(women$height) # refers to variable 'height' in the data frame attach(women) summary(height) # The same variable now available by name height <- height*2.54 # Don't do this. It creates a new variable # in the user's workspace find("height") summary(height) # The new variable in the workspace rm(height) summary(height) # The original variable. height <<- height*25.4 # Change the copy in the attached environment find("height") summary(height) # The changed copy detach("women") summary(women$height) # unchanged ## Not run: ## create an environment on the search path and populate it sys.source("myfuns.R", envir = attach(NULL, name = "myfuns")) ## End(Not run) ``` r None `do.call` Execute a Function Call ---------------------------------- ### Description `do.call` constructs and executes a function call from a name or a function and a list of arguments to be passed to it. ### Usage ``` do.call(what, args, quote = FALSE, envir = parent.frame()) ``` ### Arguments | | | | --- | --- | | `what` | either a function or a non-empty character string naming the function to be called. | | `args` | a *list* of arguments to the function call. The `names` attribute of `args` gives the argument names. | | `quote` | a logical value indicating whether to quote the arguments. | | `envir` | an environment within which to evaluate the call. This will be most useful if `what` is a character string and the arguments are symbols or quoted expressions. | ### Details If `quote` is `FALSE`, the default, then the arguments are evaluated (in the calling environment, not in `envir`). If `quote` is `TRUE` then each argument is quoted (see `[quote](substitute)`) so that the effect of argument evaluation is to remove the quotes – leaving the original arguments unevaluated when the call is constructed. The behavior of some functions, such as `<substitute>`, will not be the same for functions evaluated using `do.call` as if they were evaluated from the interpreter. The precise semantics are currently undefined and subject to change. ### Value The result of the (evaluated) function call. ### Warning This should not be used to attempt to evade restrictions on the use of `.Internal` and other non-API calls. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<call>` which creates an unevaluated call. 
### Examples ``` do.call("complex", list(imaginary = 1:3)) ## if we already have a list (e.g., a data frame) ## we need c() to add further arguments tmp <- expand.grid(letters[1:2], 1:3, c("+", "-")) do.call("paste", c(tmp, sep = "")) do.call(paste, list(as.name("A"), as.name("B")), quote = TRUE) ## examples of where objects will be found. A <- 2 f <- function(x) print(x^2) env <- new.env() assign("A", 10, envir = env) assign("f", f, envir = env) f <- function(x) print(x) f(A) # 2 do.call("f", list(A)) # 2 do.call("f", list(A), envir = env) # 4 do.call( f, list(A), envir = env) # 2 do.call("f", list(quote(A)), envir = env) # 100 do.call( f, list(quote(A)), envir = env) # 10 do.call("f", list(as.name("A")), envir = env) # 100 eval(call("f", A)) # 2 eval(call("f", quote(A))) # 2 eval(call("f", A), envir = env) # 4 eval(call("f", quote(A)), envir = env) # 100 ``` r None `match.call` Argument Matching ------------------------------- ### Description `match.call` returns a call in which all of the specified arguments are specified by their full names. ### Usage ``` match.call(definition = sys.function(sys.parent()), call = sys.call(sys.parent()), expand.dots = TRUE, envir = parent.frame(2L)) ``` ### Arguments | | | | --- | --- | | `definition` | a function, by default the function from which `match.call` is called. See details. | | `call` | an unevaluated call to the function specified by `definition`, as generated by `<call>`. | | `expand.dots` | logical. Should arguments matching `...` in the call be included or left as a `...` argument? | | `envir` | an environment, from which the `...` in `call` are retrieved, if any. | ### Details ‘function’ on this help page means an interpreted function (also known as a ‘closure’): `match.call` does not support primitive functions (where argument matching is normally positional). `match.call` is most commonly used in two circumstances: * To record the call for later re-use: for example most model-fitting functions record the call as element `call` of the list they return. Here the default `expand.dots = TRUE` is appropriate. * To pass most of the call to another function, often `model.frame`. Here the common idiom is that `expand.dots = FALSE` is used, and the `...` element of the matched call is removed. An alternative is to explicitly select the arguments to be passed on, as is done in `lm`. Calling `match.call` outside a function without specifying `definition` is an error. ### Value An object of class `call`. ### References Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. ### See Also `[sys.call](sys.parent)()` is similar, but does *not* expand the argument names; `<call>`, `<pmatch>`, `<match.arg>`, `<match.fun>`. ### Examples ``` match.call(get, call("get", "abc", i = FALSE, p = 3)) ## -> get(x = "abc", pos = 3, inherits = FALSE) fun <- function(x, lower = 0, upper = 1) { structure((x - lower) / (upper - lower), CALL = match.call()) } fun(4 * atan(1), u = pi) ```
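As an illustration of the `expand.dots = FALSE` idiom described under ‘Details’, a minimal sketch (the function `mini` is hypothetical; it mirrors the pattern used by `lm`):

```
mini <- function(formula, data, ...) {
  mf <- match.call(expand.dots = FALSE)
  keep <- match(c("formula", "data"), names(mf), 0L)
  mf <- mf[c(1L, keep)]     # keep the call name and the named arguments
  mf[[1L]] <- quote(stats::model.frame)
  eval(mf, parent.frame())  # evaluate the rebuilt call
}
head(mini(mpg ~ wt, data = mtcars, junk = "ignored")) # '...' is dropped
```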
r None `AsIs` Inhibit Interpretation/Conversion of Objects ---------------------------------------------------- ### Description Change the class of an object to indicate that it should be treated ‘as is’. ### Usage ``` I(x) ``` ### Arguments | | | | --- | --- | | `x` | an object | ### Details Function `I` has two main uses. * In function `<data.frame>`. Protecting an object by enclosing it in `I()` in a call to `data.frame` inhibits the conversion of character vectors to factors and the dropping of names, and ensures that matrices are inserted as single columns. `I` can also be used to protect objects which are to be added to a data frame, or converted to a data frame *via* `<as.data.frame>`. It achieves this by prepending the class `"AsIs"` to the object's classes. Class `"AsIs"` has a few of its own methods, including for `[`, `as.data.frame`, `print` and `format`. * In function `[formula](../../stats/html/formula)`. There it is used to inhibit the interpretation of operators such as `"+"`, `"-"`, `"*"` and `"^"` as formula operators, so they are used as arithmetical operators. This is interpreted as a symbol by `terms.formula`. ### Value A copy of the object with class `"AsIs"` prepended to the class(es). ### References Chambers, J. M. (1992) *Linear models*. Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<data.frame>`, `[formula](../../stats/html/formula)` r None `readChar` Transfer Character Strings To and From Connections -------------------------------------------------------------- ### Description Transfer character strings to and from connections, without assuming they are null-terminated on the connection. ### Usage ``` readChar(con, nchars, useBytes = FALSE) writeChar(object, con, nchars = nchar(object, type = "chars"), eos = "", useBytes = FALSE) ``` ### Arguments | | | | --- | --- | | `con` | A [connection](connections) object, or a character string naming a file, or a raw vector. | | `nchars` | integer vector, giving the lengths in characters of (unterminated) character strings to be read or written. Elements must be >= 0 and not `NA`. | | `useBytes` | logical: For `readChar`, should `nchars` be regarded as a number of bytes not characters in a multi-byte locale? For `writeChar`, see `[writeLines](writelines)`. | | `object` | A character vector to be written to the connection, at least as long as `nchars`. | | `eos` | ‘end of string’: character string. The terminator to be written after each string, followed by an ASCII `nul`; use `NULL` for no terminator at all. | ### Details These functions complement `[readBin](readbin)` and `[writeBin](readbin)` which read and write C-style zero-terminated character strings. They are for strings of known length, and can optionally write an end-of-string mark. They are intended only for character strings valid in the current locale. These functions are intended to be used with binary-mode connections. If `con` is a character string, the functions call `[file](connections)` to obtain a binary-mode file connection which is opened for the duration of the function call. If the connection is open it is read/written from its current position. If it is not open, it is opened for the duration of the call in an appropriate mode (binary read or write) and then closed again. An open connection must be in binary mode. If `readChar` is called with `con` a raw vector, the data in the vector is used as input. 
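For example, a small sketch (not from the original examples) of reading fixed-width fields from an in-memory raw vector:

```
r <- charToRaw("abcde")
readChar(r, nchars = c(2, 3)) # reads "ab", then "cde", from the raw vector
```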
If `writeChar` is called with `con` a raw vector, it is just an indication that a raw vector should be returned. Character strings containing ASCII `nul`(s) will be read correctly by `readChar` but truncated at the first `nul` with a warning. If the character length requested for `readChar` is longer than the data available on the connection, what is available is returned. For `writeChar` if too many characters are requested the output is zero-padded, with a warning. Missing strings are written as `NA`. ### Value For `readChar`, a character vector of length the number of items read (which might be less than `length(nchars)`). For `writeChar`, a raw vector (if `con` is a raw vector) or invisibly `NULL`. ### Note Earlier versions of **R** allowed embedded nul bytes within character strings, but not **R** >= 2.8.0. `readChar` was commonly used to read fixed-size zero-padded byte fields for which `readBin` was unsuitable. `readChar` can still be used for such fields if there are no embedded nuls: otherwise `readBin(what = "raw")` provides an alternative. `nchars` will be interpreted in bytes not characters in a non-UTF-8 multi-byte locale, with a warning. There is little validity checking of UTF-8 reads. Using these functions on a text-mode connection may work but should not be mixed with text-mode access to the connection, especially if the connection was opened with an `encoding` argument. ### See Also The ‘R Data Import/Export’ manual. `<connections>`, `[readLines](readlines)`, `[writeLines](writelines)`, `[readBin](readbin)` ### Examples

```
## test fixed-length strings
zzfil <- tempfile("testchar")
zz <- file(zzfil, "wb")
x <- c("a", "this will be truncated", "abc")
nc <- c(3, 10, 3)
writeChar(x, zz, nc, eos = NULL)
writeChar(x, zz, eos = "\r\n")
close(zz)
zz <- file(zzfil, "rb")
readChar(zz, nc)
readChar(zz, nchar(x)+3) # need to read the terminator explicitly
close(zz)
unlink(zzfil)
```

r None `Memory-limits` Memory Limits in R ----------------------------------- ### Description **R** holds objects it is using in virtual memory. This help file documents the current design limitations on large objects: these differ between 32-bit and 64-bit builds of **R**. ### Details Currently **R** runs on 32- and 64-bit operating systems, and most 64-bit OSes (including Linux, Solaris, Windows and macOS) can run either 32- or 64-bit builds of **R**. The memory limits depend mainly on the build, but for a 32-bit build of **R** on Windows they also depend on the underlying OS version. **R** holds all objects in virtual memory, and there are limits based on the amount of memory that can be used by all objects:

* There may be limits on the size of the heap and the number of cons cells allowed – see `[Memory](memory)` – but these are usually not imposed.
* There is a limit on the (user) address space of a single process such as the **R** executable. This is system-specific, and can depend on the executable.
* The environment may impose limitations on the resources available to a single process: Windows' versions of **R** do so directly.

Error messages beginning `cannot allocate vector of size` indicate a failure to obtain memory, either because the size exceeded the address-space limit for a process or, more likely, because the system was unable to provide the memory. Note that on a 32-bit build there may well be enough free memory available, but not a large enough contiguous block of address space into which to map it. There are also limits on individual objects.
The storage space cannot exceed the address limit, and if you try to exceed that limit, the error message begins `cannot allocate vector of length`. The number of bytes in a character string is limited to *2^31 - 1 ~ 2\*10^9*, which is also the limit on each dimension of an array. ### Unix The address-space limit is system-specific: 32-bit OSes impose a limit of no more than 4Gb: it is often 3Gb. Running 32-bit executables on a 64-bit OS will have similar limits: 64-bit executables will have an essentially infinite system-specific limit (e.g., 128Tb for Linux on x86\_64 cpus). See the OS/shell's help on commands such as `limit` or `ulimit` for how to impose limitations on the resources available to a single process. For example a `bash` user could use

```
ulimit -t 600 -v 4000000
```

whereas a `csh` user might use

```
limit cputime 10m
limit vmemoryuse 4096m
```

to limit a process to 10 minutes of CPU time and (around) 4Gb of virtual memory. (There are other options to set the RAM in use, but they are not generally honoured.) ### Windows The address-space limit is 2Gb under 32-bit Windows unless the OS's default has been changed to allow more (up to 3Gb). See <https://docs.microsoft.com/en-gb/windows/desktop/Memory/physical-address-extension> and <https://docs.microsoft.com/en-gb/windows/desktop/Memory/4-gigabyte-tuning>. Under most 64-bit versions of Windows the limit for a 32-bit build of **R** is 4Gb: for the oldest ones it is 2Gb. The limit for a 64-bit build of **R** (imposed by the OS) is 8Tb. It is not normally possible to allocate as much as 2Gb to a single vector in a 32-bit build of **R** even on 64-bit Windows because of preallocations by Windows in the middle of the address space. Under Windows, **R** imposes limits on the total memory allocation available to a single session as the OS provides no way to do so: see `[memory.size](../../utils/html/memory.size)` and `[memory.limit](../../utils/html/memory.size)`. ### See Also `[object.size](../../utils/html/object.size)(a)` for the (approximate) size of **R** object `a`. r None `Cstack_info` Report Information on C Stack Size and Usage ----------------------------------------------------------- ### Description Report information on the C stack size and usage (if available). ### Usage

```
Cstack_info()
```

### Details On most platforms, C stack information is recorded when **R** is initialized and used for stack-checking. If this information is unavailable, the `size` will be returned as `NA`, and stack-checking is not performed. The information on the stack base address is thought to be accurate on Windows, Linux (using `glibc`), macOS and FreeBSD but a heuristic is used on other platforms. Because this might be slightly inaccurate, the current usage could be estimated as negative. (The heuristic is not used on embedded uses of **R** on platforms where the stack base information is not thought to be accurate.) The ‘evaluation depth’ is the number of nested **R** expressions currently under evaluation: this has a limit controlled by `<options>("expressions")`. ### Value An integer vector. This has named elements | | | | --- | --- | | `size` | The size of the stack (in bytes), or `NA` if unknown. | | `current` | The estimated current usage (in bytes), possibly `NA`. | | `direction` | `1` (stack grows down, the usual case) or `-1` (stack grows up). | | `eval_depth` | The current evaluation depth (including two calls for the call to `Cstack_info`).
| ### Examples ``` Cstack_info() ``` r None `curlGetHeaders` Retrieve Headers from URLs -------------------------------------------- ### Description Retrieve the headers for a URL for a supported protocol such as `http://`, `ftp://`, `https://` and `ftps://`. An optional function not supported on all platforms. ### Usage ``` curlGetHeaders(url, redirect = TRUE, verify = TRUE, timeout = 0L, TLS = "") ``` ### Arguments | | | | --- | --- | | `url` | character string specifying the URL. | | `redirect` | logical: should redirections be followed? | | `verify` | logical: should certificates be verified as valid and applying to that host? | | `timeout` | integer: the maximum time in seconds the request is allowed to take. Non-positive and invalid values are ignored (including the default). (Added in **R** 4.1.0.) | | `TLS` | character: the minimum version of the TLS protocol to be used for `https://` URLs: the default (`""`) is no restriction beyond that of the underlying `libcurl` (usually 1.0). Other valid values are `"1.1"`, `"1.2"` (both for `libcurl` 7.34.0 and later) and `"1.3"` (7.52.0 and later), if supported by the underlying version of `libcurl` and the SSL library it uses. | ### Details This reports what `curl -I -L` or `curl -I` would report. For a `ftp://` URL the ‘headers’ are a record of the conversation between client and server before data transfer. Only 500 header lines will be reported: there is a limit of 20 redirections so this should suffice (and even 20 would indicate problems). If argument `timeout` is not set to a positive integer this uses `[getOption](options)("timeout")` which defaults to 60 seconds. As the request cannot be interrupted you may want to consider a shorter value. To see all the details of the interaction with the server(s) set `<options>(internet.info = 1)`. HTTP[S] servers are allowed to refuse requests to read the headers and some do: this will result in a `status` of `405`. For possible issues with secure URLs (especially on Windows) see `[download.file](../../utils/html/download.file)`. There is a security risk in not verifying certificates, but as only the headers are captured it is slight. Usually looking at the URL in a browser will reveal what the problem is (and it may well be machine-specific). ### Value A character vector with integer attribute `"status"` (the last-received ‘status’ code). If redirection occurs this will include the headers for all the URLs visited. For the interpretation of ‘status’ codes see <https://en.wikipedia.org/wiki/List_of_HTTP_status_codes> and <https://en.wikipedia.org/wiki/List_of_FTP_server_return_codes>. A successful FTP connection will usually have status 250, 257 or 350. ### See Also `<capabilities>("libcurl")` to see if this is supported. `[libcurlVersion](libcurlversion)` for the version of `libcurl` in use. `<options>` `HTTPUserAgent` and `timeout` are used. ### Examples ``` ## needs Internet access, results vary curlGetHeaders("http://bugs.r-project.org") ## this redirects to https:// curlGetHeaders("https://httpbin.org/status/404") ## returns status curlGetHeaders("ftp://cran.r-project.org") ``` r None `qr` The QR Decomposition of a Matrix -------------------------------------- ### Description `qr` computes the QR decomposition of a matrix. ### Usage ``` qr(x, ...) ## Default S3 method: qr(x, tol = 1e-07 , LAPACK = FALSE, ...) qr.coef(qr, y) qr.qy(qr, y) qr.qty(qr, y) qr.resid(qr, y) qr.fitted(qr, y, k = qr$rank) qr.solve(a, b, tol = 1e-7) ## S3 method for class 'qr' solve(a, b, ...) 
is.qr(x) as.qr(x) ``` ### Arguments | | | | --- | --- | | `x` | a numeric or complex matrix whose QR decomposition is to be computed. Logical matrices are coerced to numeric. | | `tol` | the tolerance for detecting linear dependencies in the columns of `x`. Only used if `LAPACK` is false and `x` is real. | | `qr` | a QR decomposition of the type computed by `qr`. | | `y, b` | a vector or matrix of right-hand sides of equations. | | `a` | a QR decomposition or (`qr.solve` only) a rectangular matrix. | | `k` | effective rank. | | `LAPACK` | logical. For real `x`, if true use LAPACK otherwise use LINPACK (the default). | | `...` | further arguments passed to or from other methods | ### Details The QR decomposition plays an important role in many statistical techniques. In particular it can be used to solve the equation *\bold{Ax} = \bold{b}* for given matrix *\bold{A}*, and vector *\bold{b}*. It is useful for computing regression coefficients and in applying the Newton-Raphson algorithm. The functions `qr.coef`, `qr.resid`, and `qr.fitted` return the coefficients, residuals and fitted values obtained when fitting `y` to the matrix with QR decomposition `qr`. (If pivoting is used, some of the coefficients will be `NA`.) `qr.qy` and `qr.qty` return `Q %*% y` and `t(Q) %*% y`, where `Q` is the (complete) *\bold{Q}* matrix. All the above functions keep `dimnames` (and `names`) of `x` and `y` if there are any. `solve.qr` is the method for `<solve>` for `qr` objects. `qr.solve` solves systems of equations via the QR decomposition: if `a` is a QR decomposition it is the same as `solve.qr`, but if `a` is a rectangular matrix the QR decomposition is computed first. Either will handle over- and under-determined systems, providing a least-squares fit if appropriate. `is.qr` returns `TRUE` if `x` is a `<list>` and `[inherits](class)` from `"qr"`. It is not possible to coerce objects to mode `"qr"`. Objects either are QR decompositions or they are not. The LINPACK interface is restricted to matrices `x` with less than *2^31* elements. `qr.fitted` and `qr.resid` only support the LINPACK interface. Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code. ### Value The QR decomposition of the matrix as computed by LINPACK(\*) or LAPACK. The components in the returned value correspond directly to the values returned by DQRDC(2)/DGEQP3/ZGEQP3. | | | | --- | --- | | `qr` | a matrix with the same dimensions as `x`. The upper triangle contains the *\bold{R}* of the decomposition and the lower triangle contains information on the *\bold{Q}* of the decomposition (stored in compact form). Note that the storage used by DQRDC and DGEQP3 differs. | | `qraux` | a vector of length `ncol(x)` which contains additional information on *\bold{Q}*. | | `rank` | the rank of `x` as computed by the decomposition(\*): always full rank in the LAPACK case. | | `pivot` | information on the pivoting strategy used during the decomposition. | Non-complex QR objects computed by LAPACK have the attribute `"useLAPACK"` with value `TRUE`. ### `*)` `dqrdc2` instead of LINPACK's DQRDC In the (default) LINPACK case (`LAPACK = FALSE`), `qr()` uses a *modified* version of LINPACK's DQRDC, called ‘`dqrdc2`’. It differs by using the tolerance `tol` for a pivoting strategy which moves columns with near-zero 2-norm to the right-hand edge of the x matrix. 
This strategy means that sequential one degree-of-freedom effects can be computed in a natural way. ### Note To compute the determinant of a matrix (do you *really* need it?), the QR decomposition is much more efficient than using Eigen values (`<eigen>`). See `<det>`. Using LAPACK (including in the complex case) uses column pivoting and does not attempt to detect rank-deficient matrices. ### Source For `qr`, the LINPACK routine `DQRDC` (but modified to `dqrdc2`(\*)) and the LAPACK routines `DGEQP3` and `ZGEQP3`. Further LINPACK and LAPACK routines are used for `qr.coef`, `qr.qy` and `qr.qty`. LAPACK and LINPACK are from <https://www.netlib.org/lapack/> and <https://www.netlib.org/linpack/> and their guides are listed in the references. ### References Anderson, E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM. Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>. Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Dongarra, J. J., Bunch, J. R., Moler, C. B. and Stewart, G. W. (1978) *LINPACK Users Guide.* Philadelphia: SIAM Publications. ### See Also `[qr.Q](qraux)`, `[qr.R](qraux)`, `[qr.X](qraux)` for reconstruction of the matrices. `[lm.fit](../../stats/html/lmfit)`, `[lsfit](../../stats/html/lsfit)`, `<eigen>`, `<svd>`. `<det>` (using `qr`) to compute the determinant of a matrix. ### Examples

```
hilbert <- function(n) { i <- 1:n; 1 / outer(i - 1, i, "+") }
h9 <- hilbert(9); h9
qr(h9)$rank           #--> only 7
qrh9 <- qr(h9, tol = 1e-10)
qrh9$rank             #--> 9
##-- Solve linear equation system  H %*% x = y :
y <- 1:9/10
x <- qr.solve(h9, y, tol = 1e-10) # or equivalently :
x <- qr.coef(qrh9, y) #-- is == but much better than
#-- solve(h9) %*% y
h9 %*% x              # = y

## overdetermined system
A <- matrix(runif(12), 4)
b <- 1:4
qr.solve(A, b) # or solve(qr(A), b)
solve(qr(A, LAPACK = TRUE), b)
# this is a least-squares solution, cf. lm(b ~ 0 + A)

## underdetermined system
A <- matrix(runif(12), 3)
b <- 1:3
qr.solve(A, b)
solve(qr(A, LAPACK = TRUE), b)
# solutions will have one zero, not necessarily the same one
```
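A further sketch (not from the original examples) checking the fitted/residual helpers described under ‘Details’ against `lm()`:

```
X <- cbind(1, mtcars$wt) # design matrix with an intercept column
y <- mtcars$mpg
qx <- qr(X)
fit <- lm(mpg ~ wt, data = mtcars)
stopifnot(all.equal(unname(qr.coef(qx, y)),   unname(coef(fit))),
          all.equal(unname(qr.fitted(qx, y)), unname(fitted(fit))),
          all.equal(qr.fitted(qx, y) + qr.resid(qx, y), y))
```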
r None `Special` Special Functions of Mathematics ------------------------------------------- ### Description Special mathematical functions related to the beta and gamma functions. ### Usage ``` beta(a, b) lbeta(a, b) gamma(x) lgamma(x) psigamma(x, deriv = 0) digamma(x) trigamma(x) choose(n, k) lchoose(n, k) factorial(x) lfactorial(x) ``` ### Arguments | | | | --- | --- | | `a, b` | non-negative numeric vectors. | | `x, n` | numeric vectors. | | `k, deriv` | integer vectors. | ### Details The functions `beta` and `lbeta` return the beta function and the natural logarithm of the beta function, *B(a,b) = Γ(a)Γ(b)/Γ(a+b).* The formal definition is *integral\_0^1 t^(a-1) (1-t)^(b-1) dt* (Abramowitz and Stegun section 6.2.1, page 258). Note that it is only defined in **R** for non-negative `a` and `b`, and is infinite if either is zero. The functions `gamma` and `lgamma` return the gamma function *Γ(x)* and the natural logarithm of *the absolute value of* the gamma function. The gamma function is defined by (Abramowitz and Stegun section 6.1.1, page 255) *Γ(x) = integral\_0^Inf t^(x-1) exp(-t) dt* for all real `x` except zero and negative integers (when `NaN` is returned). There will be a warning on possible loss of precision for values which are too close (within about *1e-8*) to a negative integer less than -10. `factorial(x)` (*x!* for non-negative integer `x`) is defined to be `gamma(x+1)` and `lfactorial` to be `lgamma(x+1)`. The functions `digamma` and `trigamma` return the first and second derivatives of the logarithm of the gamma function. `psigamma(x, deriv)` (`deriv >= 0`) computes the `deriv`-th derivative of *ψ(x)*. *digamma(x) = ψ(x) = d/dx{ln Γ(x)} = Γ'(x) / Γ(x)* *ψ* and its derivatives, the `psigamma()` functions, are often called the ‘polygamma’ functions, e.g. in Abramowitz and Stegun (section 6.4.1, page 260); and higher derivatives (`deriv = 2:4`) have occasionally been called ‘tetragamma’, ‘pentagamma’, and ‘hexagamma’. The functions `choose` and `lchoose` return binomial coefficients and the logarithms of their absolute values. Note that `choose(n, k)` is defined for all real numbers *n* and integer *k*. For *k ≥ 1* it is defined as *n(n-1)…(n-k+1) / k!*, as *1* for *k = 0* and as *0* for negative *k*. Non-integer values of `k` are rounded to an integer, with a warning. `choose(*, k)` uses direct arithmetic (instead of `[l]gamma` calls) for small `k`, for speed and accuracy reasons. Note the function `[combn](../../utils/html/combn)` (package utils) for enumeration of all possible combinations. The `gamma`, `lgamma`, `digamma` and `trigamma` functions are [internal generic](internalmethods) <primitive> functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic. ### Source `gamma`, `lgamma`, `beta` and `lbeta` are based on C translations of Fortran subroutines by W. Fullerton of Los Alamos Scientific Laboratory (now available as part of SLATEC). `digamma`, `trigamma` and `psigamma` are based on Amos, D. E. (1983). A portable Fortran subroutine for derivatives of the psi function, Algorithm 610, *ACM Transactions on Mathematical Software* **9(4)**, 494–502. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (For `gamma` and `lgamma`.) Abramowitz, M. and Stegun, I. A. (1972) *Handbook of Mathematical Functions*. New York: Dover. <https://en.wikipedia.org/wiki/Abramowitz_and_Stegun> provides links to the full text which is in public domain. 
Chapter 6: Gamma and Related Functions. ### See Also `[Arithmetic](arithmetic)` for simple, `[sqrt](mathfun)` for miscellaneous mathematical functions and `[Bessel](bessel)` for the real Bessel functions. For the incomplete gamma function see `[pgamma](../../stats/html/gammadist)`. ### Examples ``` require(graphics) choose(5, 2) for (n in 0:10) print(choose(n, k = 0:n)) factorial(100) lfactorial(10000) ## gamma has 1st order poles at 0, -1, -2, ... ## this will generate loss of precision warnings, so turn off op <- options("warn") options(warn = -1) x <- sort(c(seq(-3, 4, length.out = 201), outer(0:-3, (-1:1)*1e-6, "+"))) plot(x, gamma(x), ylim = c(-20,20), col = "red", type = "l", lwd = 2, main = expression(Gamma(x))) abline(h = 0, v = -3:0, lty = 3, col = "midnightblue") options(op) x <- seq(0.1, 4, length.out = 201); dx <- diff(x)[1] par(mfrow = c(2, 3)) for (ch in c("", "l","di","tri","tetra","penta")) { is.deriv <- nchar(ch) >= 2 nm <- paste0(ch, "gamma") if (is.deriv) { dy <- diff(y) / dx # finite difference der <- which(ch == c("di","tri","tetra","penta")) - 1 nm2 <- paste0("psigamma(*, deriv = ", der,")") nm <- if(der >= 2) nm2 else paste(nm, nm2, sep = " ==\n") y <- psigamma(x, deriv = der) } else { y <- get(nm)(x) } plot(x, y, type = "l", main = nm, col = "red") abline(h = 0, col = "lightgray") if (is.deriv) lines(x[-1], dy, col = "blue", lty = 2) } par(mfrow = c(1, 1)) ## "Extended" Pascal triangle: fN <- function(n) formatC(n, width=2) for (n in -4:10) { cat(fN(n),":", fN(choose(n, k = -2:max(3, n+2)))) cat("\n") } ## R code version of choose() [simplistic; warning for k < 0]: mychoose <- function(r, k) ifelse(k <= 0, (k == 0), sapply(k, function(k) prod(r:(r-k+1))) / factorial(k)) k <- -1:6 cbind(k = k, choose(1/2, k), mychoose(1/2, k)) ## Binomial theorem for n = 1/2 ; ## sqrt(1+x) = (1+x)^(1/2) = sum_{k=0}^Inf choose(1/2, k) * x^k : k <- 0:10 # 10 is sufficient for ~ 9 digit precision: sqrt(1.25) sum(choose(1/2, k)* .25^k) ``` r None `save` Save R Objects ---------------------- ### Description `save` writes an external representation of **R** objects to the specified file. The objects can be read back from the file at a later date by using the function `<load>` or `<attach>` (or `[data](../../utils/html/data)` in some cases). `save.image()` is just a short-cut for ‘save my current workspace’, i.e., `save(list = ls(all.names = TRUE), file = ".RData", envir = .GlobalEnv)`. It is also what happens with `[q](quit)("yes")`. ### Usage ``` save(..., list = character(), file = stop("'file' must be specified"), ascii = FALSE, version = NULL, envir = parent.frame(), compress = isTRUE(!ascii), compression_level, eval.promises = TRUE, precheck = TRUE) save.image(file = ".RData", version = NULL, ascii = FALSE, compress = !ascii, safe = TRUE) ``` ### Arguments | | | | --- | --- | | `...` | the names of the objects to be saved (as symbols or character strings). | | `list` | A character vector containing the names of objects to be saved. | | `file` | a (writable binary-mode) [connection](connections) or the name of the file where the data will be saved (when [tilde expansion](path.expand) is done). Must be a file name for `save.image` or `version = 1`. | | `ascii` | if `TRUE`, an ASCII representation of the data is written. The default value of `ascii` is `FALSE` which leads to a binary file being written. If `NA` and `version >= 2`, a different ASCII representation is used which writes double/complex numbers as binary fractions. | | `version` | the workspace format version to use. 
`NULL` specifies the current default format (3). Version 1 was the default from **R** 0.99.0 to **R** 1.3.1 and version 2 from **R** 1.4.0 to 3.5.0. Version 3 is supported from **R** 3.5.0. | | `envir` | environment to search for objects to be saved. | | `compress` | logical or character string specifying whether saving to a named file is to use compression. `TRUE` corresponds to `gzip` compression, and character strings `"gzip"`, `"bzip2"` or `"xz"` specify the type of compression. Ignored when `file` is a connection and for workspace format version 1. | | `compression_level` | integer: the level of compression to be used. Defaults to `6` for `gzip` compression and to `9` for `bzip2` or `xz` compression. See the help for `[file](connections)` for possible values and their merits. | | `eval.promises` | logical: should objects which are promises be forced before saving? | | `precheck` | logical: should the existence of the objects be checked before starting to save (and in particular before opening the file/connection)? Does not apply to version 1 saves. | | `safe` | logical. If `TRUE`, a temporary file is used for creating the saved workspace. The temporary file is renamed to `file` if the save succeeds. This preserves an existing workspace `file` if the save fails, but at the cost of using extra disk space during the save. | ### Details The names of the objects specified either as symbols (or character strings) in `...` or as a character vector in `list` are used to look up the objects from environment `envir`. By default [promises](delayedassign) are evaluated, but if `eval.promises = FALSE` promises are saved (together with their evaluation environments). (Promises embedded in objects are always saved unevaluated.) All **R** platforms use the XDR (bigendian) representation of C ints and doubles in binary save-d files, and these are portable across all **R** platforms. ASCII saves used to be useful for moving data between platforms but are now mainly of historical interest. They can be more compact than binary saves where compression is not used, but are almost always slower to both read and write: binary saves compress much better than ASCII ones. Further, decimal ASCII saves may not restore double/complex values exactly, and what value is restored may depend on the **R** platform. Default values for the `ascii`, `compress`, `safe` and `version` arguments can be modified with the `"save.defaults"` option (used both by `save` and `save.image`), see also the ‘Examples’ section. If a `"save.image.defaults"` option is set it is used in preference to `"save.defaults"` for function `save.image` (which allows this to have different defaults). In addition, `compression_level` can be part of the `"save.defaults"` option. A connection that is not already open will be opened in mode `"wb"`. Supplying a connection which is open and not in binary mode gives an error. ### Compression Large files can be reduced considerably in size by compression. A particular 46MB **R** object was saved as 35MB without compression in 2 seconds, 22MB with `gzip` compression in 8 secs, 19MB with `bzip2` compression in 13 secs and 9.4MB with `xz` compression in 40 secs. The load times were 1.3, 2.8, 5.5 and 5.7 seconds respectively. These results are indicative, but the relative performances do depend on the actual file: `xz` compressed unusually well here. It is possible to compress later (with `gzip`, `bzip2` or `xz`) a file saved with `compress = FALSE`: the effect is the same as saving with compression. 
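As a rough illustration of choosing a compression type (a small sketch; the file names and the object are invented, and the exact sizes depend on the data):

```r
## xz usually compresses best but is slowest; gzip is the default for TRUE
x <- rep(1:10, 1e5)                            # a highly compressible object
save(x, file = "x_gz.rda", compress = "gzip")
save(x, file = "x_xz.rda", compress = "xz")
c(gzip = file.size("x_gz.rda"), xz = file.size("x_xz.rda"))
unlink(c("x_gz.rda", "x_xz.rda"))
```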
Also, a saved file can be uncompressed and re-compressed under a different compression scheme (and see `[resaveRdaFiles](../../tools/html/checkrdafiles)` for a way to do so from within **R**). ### Parallel compression The fact that `file` can be a connection can be exploited to make use of an external parallel compression utility such as `pigz` (<https://zlib.net/pigz/>) or `pbzip2` (<https://launchpad.net/pbzip2>) *via* a `[pipe](connections)` connection. For example, using 8 threads, ``` con <- pipe("pigz -p8 > fname.gz", "wb") save(myObj, file = con); close(con) con <- pipe("pbzip2 -p8 -9 > fname.bz2", "wb") save(myObj, file = con); close(con) con <- pipe("xz -T8 -6 -e > fname.xz", "wb") save(myObj, file = con); close(con) ``` where the last requires `xz` 5.1.1 or later built with support for multiple threads (and parallel compression is only effective for large objects: at level 6 it will compress in serialized chunks of 12MB). ### Warnings The `...` arguments only give the *names* of the objects to be saved: they are searched for in the environment given by the `envir` argument, and the actual objects given as arguments need not be those found. Saved **R** objects are binary files, even those saved with `ascii = TRUE`, so ensure that they are transferred without conversion of end-of-line markers and of 8-bit characters. The lines are delimited by LF on all platforms. Although the default version was not changed between **R** 1.4.0 and **R** 3.4.4 nor since **R** 3.5.0, this does not mean that saved files are necessarily backwards compatible. You will be able to load a saved image into an earlier version of **R** which supports its version unless use is made of later additions (for example for version 2, raw vectors, external pointers and some S4 objects). One such ‘later addition’ was [long vectors](longvectors), introduced in **R** 3.0.0 and loadable only on 64-bit platforms. Loading files saved with `ascii = NA` requires a C99-compliant C function `sscanf`: this is a problem on Windows, first worked around in **R** 3.1.2: version-2 files in that format should be readable in earlier versions of **R** on all other platforms. ### Note For saving single **R** objects, `[saveRDS](readrds)()` is mostly preferable to `save()`, notably because of the *functional* nature of `[readRDS](readrds)()`, as opposed to `<load>()`. The most common reason for failure is lack of write permission in the current directory. For `save.image` and for saving at the end of a session this will be shown by messages like ``` Error in gzfile(file, "wb") : unable to open connection In addition: Warning message: In gzfile(file, "wb") : cannot open compressed file '.RDataTmp', probable reason 'Permission denied' ``` ### See Also `<dput>`, `<dump>`, `<load>`, `[data](../../utils/html/data)`. For other interfaces to the underlying serialization format, see `<serialize>` and `[saveRDS](readrds)`. ### Examples ``` x <- stats::runif(20) y <- list(a = 1, b = TRUE, c = "oops") save(x, y, file = "xy.RData") save.image() # creating ".RData" in current working directory unlink("xy.RData") # set save defaults using option: options(save.defaults = list(ascii = TRUE, safe = FALSE)) save.image() # creating ".RData" if(interactive()) withAutoprint({ file.info(".RData") readLines(".RData", n = 7) # first 7 lines; first starts w/ "RDA".. 
}) unlink(".RData") ``` r None `file.access` Ascertain File Accessibility ------------------------------------------- ### Description Utility function to access information about files on the user's file systems. ### Usage ``` file.access(names, mode = 0) ``` ### Arguments | | | | --- | --- | | `names` | character vector containing file names. Tilde-expansion will be done: see `<path.expand>`. | | `mode` | integer specifying access mode required: see ‘Details’. | ### Details The `mode` value can be the exclusive or of the following values 0 test for existence. 1 test for execute permission. 2 test for write permission. 4 test for read permission. Permission will be computed for real user ID and real group ID (rather than the effective IDs). Please note that it is not a good idea to use this function to test before trying to open a file. On a multi-tasking system, it is possible that the accessibility of a file will change between the time you call `file.access()` and the time you try to open the file. It is better to wrap file open attempts in `<try>`. ### Value An integer vector with values `0` for success and `-1` for failure. ### Note This is intended as a replacement for the S-PLUS function `access`, a wrapper for the C function of the same name, which explains the return value encoding. Note that the return value is **false** for **success**. ### See Also `<file.info>` for more details on permissions, `[Sys.chmod](files2)` to change permissions, and `<try>` for a ‘test it and see’ approach. `[file\_test](../../utils/html/filetest)` for shell-style file tests. ### Examples ``` fa <- file.access(dir(".")) table(fa) # count successes & failures ``` r None `ns-hooks` Hooks for Namespace Events -------------------------------------- ### Description Packages can supply functions to be called when loaded, attached, detached or unloaded. ### Usage ``` .onLoad(libname, pkgname) .onAttach(libname, pkgname) .onUnload(libpath) .onDetach(libpath) .Last.lib(libpath) ``` ### Arguments | | | | --- | --- | | `libname` | a character string giving the library directory where the package defining the namespace was found. | | `pkgname` | a character string giving the name of the package. | | `libpath` | a character string giving the complete path to the package. | ### Details After loading, `[loadNamespace](ns-load)` looks for a hook function named `.onLoad` and calls it (with two unnamed arguments) before sealing the namespace and processing exports. When the package is attached (via `<library>` or `[attachNamespace](ns-load)`), the hook function `.onAttach` is looked for and if found is called (with two unnamed arguments) before the package environment is sealed. If a function `.onDetach` is in the namespace or `.Last.lib` is exported from the package, it will be called (with a single argument) when the package is `<detach>`ed. Beware that it might be called if `.onAttach` has failed, so it should be written defensively. (It is called within `[tryCatch](conditions)`, so errors will not stop the package being detached.) If a namespace is unloaded (via `[unloadNamespace](ns-load)`), a hook function `.onUnload` is run (with a single argument) before final unloading. Note that the code in `.onLoad` and `.onUnload` should not assume any package except the base package is on the search path. Objects in the current package will be visible (unless this is circumvented), but objects from other packages should be imported or the double colon operator should be used. 
`.onLoad`, `.onUnload`, `.onAttach` and `.onDetach` are looked for as internal objects in the namespace and should not be exported (whereas `.Last.lib` should be). Note that packages are not detached nor namespaces unloaded at the end of an **R** session unless the user arranges to do so (e.g., *via* `[.Last](quit)`). Anything needed for the functioning of the namespace should be handled at load/unload times by the `.onLoad` and `.onUnload` hooks. For example, DLLs can be loaded (unless done by a `useDynLib` directive in the ‘NAMESPACE’ file) and initialized in `.onLoad` and unloaded in `.onUnload`. Use `.onAttach` only for actions that are needed only when the package becomes visible to the user (for example a start-up message) or need to be run after the package environment has been created. ### Good practice Loading a namespace should where possible be silent, with startup messages given by `.onAttach`. These messages (and any essential ones from `.onLoad`) should use `[packageStartupMessage](message)` so they can be silenced where they would be a distraction. There should be no calls to `library` nor `require` in these hooks. The way for a package to load other packages is via the Depends field in the ‘DESCRIPTION’ file: this ensures that the dependence is documented and packages are loaded in the correct order. Loading a namespace should not change the search path, so rather than attach a package, dependence of a namespace on another package should be achieved by (selectively) importing from the other package's namespace. Uses of `library` with argument `help` to display basic information about the package should use `format` on the computed package information object and pass this to `packageStartupMessage`. There should be no calls to `[installed.packages](../../utils/html/installed.packages)` in startup code: it is potentially very slow and may fail in versions of **R** before 2.14.2 if package installation is going on in parallel. See its help page for alternatives. Compiled code should be loaded (e.g., *via* `<library.dynam>`) in `.onLoad` or a `useDynLib` directive in the ‘NAMESPACE’ file, and not in `.onAttach`. Similarly, compiled code should not be unloaded (e.g., *via* `[library.dynam.unload](library.dynam)`) in `.Last.lib` nor `.onDetach`, only in `.onUnload`. ### See Also `[setHook](userhooks)` shows how users can set hooks on the same events, and lists the sequence of events involving all of the hooks. `<reg.finalizer>` for hooks to be run at the end of a session. `[loadNamespace](ns-load)` for more about namespaces.
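As an illustration, here is a minimal sketch of these hooks for a hypothetical package (the package name `mypkg` and the option `mypkg.verbose` are invented for the example):

```r
## R/zzz.R of the hypothetical package 'mypkg'
.onLoad <- function(libname, pkgname) {
  ## runs when the namespace is loaded; keep it silent
  ## library.dynam("mypkg", pkgname, libname)  # if not using useDynLib
  if (is.null(getOption("mypkg.verbose")))     # set a default package option
    options(mypkg.verbose = FALSE)
}

.onAttach <- function(libname, pkgname) {
  ## runs when the package is attached; user-visible but silenceable
  packageStartupMessage("mypkg loaded; see the package help for an overview")
}

.onUnload <- function(libpath) {
  ## unload compiled code here, not in .onDetach or .Last.lib
  ## library.dynam.unload("mypkg", libpath)
}
```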
r None `rm` Remove Objects from a Specified Environment ------------------------------------------------- ### Description `remove` and `rm` can be used to remove objects. These can be specified successively as character strings, or in the character vector `list`, or through a combination of both. All objects thus specified will be removed. If `envir` is `NULL` then the currently active environment is searched first. If `inherits` is `TRUE` then parents of the supplied environment are searched until a variable with the given name is encountered. A warning is printed for each variable that is not found. ### Usage ``` remove(..., list = character(), pos = -1, envir = as.environment(pos), inherits = FALSE) rm (..., list = character(), pos = -1, envir = as.environment(pos), inherits = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | the objects to be removed, as names (unquoted) or character strings (quoted). | | `list` | a character vector naming objects to be removed. | | `pos` | where to do the removal. By default, uses the current environment. See ‘details’ for other possibilities. | | `envir` | the `<environment>` to use. See ‘details’. | | `inherits` | should the enclosing frames of the environment be inspected? | ### Details The `pos` argument can specify the environment from which to remove the objects in any of several ways: as an integer (the position in the `<search>` list); as the character string name of an element in the search list; or as an `<environment>` (including using `[sys.frame](sys.parent)` to access the currently active function calls). The `envir` argument is an alternative way to specify an environment, but is primarily there for back compatibility. It is not allowed to remove variables from the base environment and base namespace, nor from any environment which is locked (see `[lockEnvironment](bindenv)`). Earlier versions of **R** incorrectly claimed that supplying a character vector in `...` removed the objects named in the character vector, but it removed the character vector. Use the `list` argument to specify objects *via* a character vector. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<ls>`, `[objects](ls)` ### Examples ``` tmp <- 1:4 ## work with tmp and cleanup rm(tmp) ## Not run: ## remove (almost) everything in the working environment. ## You will get no warning, so don't do this unless you are really sure. rm(list = ls()) ## End(Not run) ``` r None `eigen` Spectral Decomposition of a Matrix ------------------------------------------- ### Description Computes eigenvalues and eigenvectors of numeric (double, integer, logical) or complex matrices. ### Usage ``` eigen(x, symmetric, only.values = FALSE, EISPACK = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a numeric or complex matrix whose spectral decomposition is to be computed. Logical matrices are coerced to numeric. | | `symmetric` | if `TRUE`, the matrix is assumed to be symmetric (or Hermitian if complex) and only its lower triangle (diagonal included) is used. If `symmetric` is not specified, `[isSymmetric](issymmetric)(x)` is used. | | `only.values` | if `TRUE`, only the eigenvalues are computed and returned, otherwise both eigenvalues and eigenvectors are returned. | | `EISPACK` | logical. Defunct and ignored. | ### Details If `symmetric` is unspecified, `[isSymmetric](issymmetric)(x)` determines if the matrix is symmetric up to plausible numerical inaccuracies. 
It is surer and typically much faster to set the value yourself. Computing the eigenvectors is the slow part for large matrices. Computing the eigendecomposition of a matrix is subject to errors on a real-world computer: the definitive analysis is Wilkinson (1965). All you can hope for is a solution to a problem suitably close to `x`. So even though a real asymmetric `x` may have an algebraic solution with repeated real eigenvalues, the computed solution may be of a similar matrix with complex conjugate pairs of eigenvalues. Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code (most often `1`): these can only be interpreted by detailed study of the FORTRAN code. ### Value The spectral decomposition of `x` is returned as a list with components | | | | --- | --- | | `values` | a vector containing the *p* eigenvalues of `x`, sorted in *decreasing* order, according to `Mod(values)` in the asymmetric case when they might be complex (even for real matrices). For real asymmetric matrices the vector will be complex only if complex conjugate pairs of eigenvalues are detected. | | `vectors` | either a *p \* p* matrix whose columns contain the eigenvectors of `x`, or `NULL` if `only.values` is `TRUE`. The vectors are normalized to unit length. Recall that the eigenvectors are only defined up to a constant: even when the length is specified they are still only defined up to a scalar of modulus one (the sign for real matrices). | When `only.values` is not true, as by default, the result is of S3 class `"eigen"`. If `r <- eigen(A)`, and `V <- r$vectors; lam <- r$values`, then *A = V Λ V^(-1)* (up to numerical fuzz), where *Λ =* `diag(lam)`. ### Source `eigen` uses the LAPACK routines `DSYEVR`, `DGEEV`, `ZHEEV` and `ZGEEV`. LAPACK is from <https://www.netlib.org/lapack/> and its guide is listed in the references. ### References Anderson, E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM. Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>. Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Wilkinson, J. H. (1965) *The Algebraic Eigenvalue Problem.* Clarendon Press, Oxford. ### See Also `<svd>`, a generalization of `eigen`; `<qr>`, and `<chol>` for related decompositions. To compute the determinant of a matrix, the `<qr>` decomposition is much more efficient: `<det>`. ### Examples ``` eigen(cbind(c(1,-1), c(-1,1))) eigen(cbind(c(1,-1), c(-1,1)), symmetric = FALSE) # same (different algorithm). eigen(cbind(1, c(1,-1)), only.values = TRUE) eigen(cbind(-1, 2:1)) # complex values eigen(print(cbind(c(0, 1i), c(-1i, 0)))) # Hermitian ==> real eigenvalues ## 3 x 3: eigen(cbind( 1, 3:1, 1:3)) eigen(cbind(-1, c(1:2,0), 0:2)) # complex values ``` r None `crossprod` Matrix Crossproduct -------------------------------- ### Description Given matrices `x` and `y` as arguments, return a matrix cross-product. This is formally equivalent to (but usually slightly faster than) the call `t(x) %*% y` (`crossprod`) or `x %*% t(y)` (`tcrossprod`). ### Usage ``` crossprod(x, y = NULL) tcrossprod(x, y = NULL) ``` ### Arguments | | | | --- | --- | | `x, y` | numeric or complex matrices (or vectors): `y = NULL` is taken to be the same matrix as `x`. Vectors are promoted to single-column or single-row matrices, depending on the context. | ### Value A double or complex matrix, with appropriate `dimnames` taken from `x` and `y`. 
### Note When `x` or `y` are not matrices, they are treated as column or row matrices, but their `<names>` are usually **not** promoted to `<dimnames>`. Hence, currently, the last example has empty dimnames. In the same situation, these matrix products (also `[%\*%](matmult)`) are more flexible in promotion of vectors to row or column matrices, such that more cases are allowed, since **R** 3.2.0. The propagation of NaN/Inf values, precision, and performance of matrix products can be controlled by `<options>("matprod")`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[%\*%](matmult)` and outer product `[%o%](outer)`. ### Examples ``` (z <- crossprod(1:4)) # = sum(1 + 2^2 + 3^2 + 4^2) drop(z) # scalar x <- 1:4; names(x) <- letters[1:4]; x tcrossprod(as.matrix(x)) # is identical(tcrossprod(as.matrix(x)), crossprod(t(x))) tcrossprod(x) # no dimnames m <- matrix(1:6, 2,3) ; v <- 1:3; v2 <- 2:1 stopifnot(identical(tcrossprod(v, m), v %*% t(m)), identical(tcrossprod(v, m), crossprod(v, t(m))), identical(crossprod(m, v2), t(m) %*% v2)) ``` r None `double` Double-Precision Vectors ---------------------------------- ### Description Create, coerce to or test for a double-precision vector. ### Usage ``` double(length = 0) as.double(x, ...) is.double(x) single(length = 0) as.single(x, ...) ``` ### Arguments | | | | --- | --- | | `length` | A non-negative integer specifying the desired length. Double values will be coerced to integer: supplying an argument of length other than one is an error. | | `x` | object to be coerced or tested. | | `...` | further arguments passed to or from other methods. | ### Details `double` creates a double-precision vector of the specified length. The elements of the vector are all equal to `0`. It is identical to `<numeric>`. `as.double` is a generic function. It is identical to `as.numeric`. Methods should return an object of base type `"double"`. `is.double` is a test of double [type](typeof). ***R** has no single precision data type. All real numbers are stored in double precision format*. The functions `as.single` and `single` are identical to `as.double` and `double` except they set the attribute `Csingle` that is used in the `[.C](foreign)` and `[.Fortran](foreign)` interface, and they are intended only to be used in that context. ### Value `double` creates a double-precision vector of the specified length. The elements of the vector are all equal to `0`. `as.double` attempts to coerce its argument to be of double type: like `[as.vector](vector)` it strips attributes including names. (To ensure that an object is of double type without stripping attributes, use `[storage.mode](mode)`.) Character strings containing optional whitespace followed by either a decimal representation or a hexadecimal representation (starting with `0x` or `0X`) can be converted, as can special values such as `"NA"`, `"NaN"`, `"Inf"` and `"infinity"`, irrespective of case. `as.double` for factors yields the codes underlying the factor levels, not the numeric representation of the labels, see also `<factor>`. `is.double` returns `TRUE` or `FALSE` depending on whether its argument is of double [type](typeof) or not. ### Double-precision values All **R** platforms are required to work with values conforming to the IEC 60559 (also known as IEEE 754) standard. This basically works with a precision of 53 bits, and represents to that precision a range of absolute values from about *2e-308* to *2e+308*. 
It also has special values `[NaN](is.finite)` (many of them), plus and minus infinity and plus and minus zero (although **R** acts as if these are the same). There are also *denormal(ized)* (or *subnormal*) numbers with absolute values above or below the range given above but represented to less precision. See `[.Machine](zmachine)` for precise information on these limits. Note that ultimately how double precision numbers are handled is down to the CPU/FPU and compiler. In IEEE 754-2008/IEC60559:2011 this is called ‘binary64’ format. ### Note on names It is a historical anomaly that **R** has two names for its floating-point vectors, `<double>` and `<numeric>` (and formerly had `real`). `double` is the name of the [type](typeof). `numeric` is the name of the <mode> and also of the implicit <class>. As an S4 formal class, use `"numeric"`. The potential confusion is that **R** has used *<mode>* `"numeric"` to mean ‘double or integer’, which conflicts with the S4 usage. Thus `is.numeric` tests the mode, not the class, but `as.numeric` (which is identical to `as.double`) coerces to the class. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. <https://en.wikipedia.org/wiki/IEEE_754-1985>, <https://en.wikipedia.org/wiki/IEEE_754-2008>, <https://en.wikipedia.org/wiki/IEEE_754-2019>, <https://en.wikipedia.org/wiki/Double_precision>, <https://en.wikipedia.org/wiki/Denormal_number>. ### See Also `<integer>`, `<numeric>`, `[storage.mode](mode)`. ### Examples ``` is.double(1) all(double(3) == 0) ``` r None `strtrim` Trim Character Strings to Specified Display Widths ------------------------------------------------------------- ### Description Trim character strings to specified display widths. ### Usage ``` strtrim(x, width) ``` ### Arguments | | | | --- | --- | | `x` | a character vector, or an object which can be coerced to a character vector by `[as.character](character)`. | | `width` | Positive integer values: recycled to the length of `x`. | ### Details ‘Width’ is interpreted as the display width in a monospaced font. What happens with non-printable characters (such as backspace, tab) is implementation-dependent and may depend on the locale (e.g., they may be included in the count or they may be omitted). Using this function rather than `<substr>` is important when there might be double-width (e.g., Chinese/Japanese/Korean) characters in the character vector. ### Value A character vector of the same length and with the same attributes as `x` (after possible coercion). Elements of the result will have the encoding declared as that of the current locale (see `[Encoding](encoding)`) if the corresponding input had a declared encoding and the current locale is either Latin-1 or UTF-8. ### Examples ``` strtrim(c("abcdef", "abcdef", "abcdef"), c(1,5,10)) ``` r None `strsplit` Split the Elements of a Character Vector ---------------------------------------------------- ### Description Split the elements of a character vector `x` into substrings according to the matches to substring `split` within them. ### Usage ``` strsplit(x, split, fixed = FALSE, perl = FALSE, useBytes = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | character vector, each element of which is to be split. Other inputs, including a factor, will give an error. | | `split` | character vector (or object which can be coerced to such) containing [regular expression](regex)(s) (unless `fixed = TRUE`) to use for splitting. 
If empty matches occur, in particular if `split` has length 0, `x` is split into single characters. If `split` has length greater than 1, it is re-cycled along `x`. | | `fixed` | logical. If `TRUE` match `split` exactly, otherwise use regular expressions. Has priority over `perl`. | | `perl` | logical. Should Perl-compatible regexps be used? | | `useBytes` | logical. If `TRUE` the matching is done byte-by-byte rather than character-by-character, and inputs with marked encodings are not converted. This is forced (with a warning) if any input is found which is marked as `"bytes"` (see `[Encoding](encoding)`). | ### Details Argument `split` will be coerced to character, so you will see uses with `split = NULL` to mean `split = character(0)`, including in the examples below. Note that splitting into single characters can be done *via* `split = character(0)` or `split = ""`; the two are equivalent. The definition of ‘character’ here depends on the locale: in a single-byte locale it is a byte, and in a multi-byte locale it is the unit represented by a ‘wide character’ (almost always a Unicode code point). A missing value of `split` does not split the corresponding element(s) of `x` at all. The algorithm applied to each input string is ``` repeat { if the string is empty break. if there is a match add the string to the left of the match to the output. remove the match and all to the left of it. else add the string to the output. break. } ``` Note that this means that if there is a match at the beginning of a (non-empty) string, the first element of the output is `""`, but if there is a match at the end of the string, the output is the same as with the match removed. Invalid inputs in the current locale are warned about up to 5 times. ### Value A list of the same length as `x`, the `i`-th element of which contains the vector of splits of `x[i]`. If any element of `x` or `split` is declared to be in UTF-8 (see `[Encoding](encoding)`), all non-ASCII character strings in the result will be in UTF-8 and have their encoding declared as UTF-8. (This also holds if any element is declared to be Latin-1 except in a Latin-1 locale.) For `perl = TRUE, useBytes = FALSE` all non-ASCII strings in a multibyte locale are translated to UTF-8. ### See Also `<paste>` for the reverse, `<grep>` and `[sub](grep)` for string search and manipulation; also `<nchar>`, `<substr>`. ‘[regular expression](regex)’ for the details of the pattern specification. Option `PCRE_use_JIT` controls the details when `perl = TRUE`. ### Examples ``` noquote(strsplit("A text I want to display with spaces", NULL)[[1]]) x <- c(as = "asfef", qu = "qwerty", "yuiop[", "b", "stuff.blah.yech") # split x on the letter e strsplit(x, "e") unlist(strsplit("a.b.c", ".")) ## [1] "" "" "" "" "" ## Note that 'split' is a regexp! 
## If you really want to split on '.', use unlist(strsplit("a.b.c", "[.]")) ## [1] "a" "b" "c" ## or unlist(strsplit("a.b.c", ".", fixed = TRUE)) ## a useful function: rev() for strings strReverse <- function(x) sapply(lapply(strsplit(x, NULL), rev), paste, collapse = "") strReverse(c("abc", "Statistics")) ## get the first names of the members of R-core a <- readLines(file.path(R.home("doc"),"AUTHORS"))[-(1:8)] a <- a[(0:2)-length(a)] (a <- sub(" .*","", a)) # and reverse them strReverse(a) ## Note that final empty strings are not produced: strsplit(paste(c("", "a", ""), collapse="#"), split="#")[[1]] # [1] "" "a" ## and also an empty string is only produced before a definite match: strsplit("", " ")[[1]] # character(0) strsplit(" ", " ")[[1]] # [1] "" ``` r None `aperm` Array Transposition ---------------------------- ### Description Transpose an array by permuting its dimensions and optionally resizing it. ### Usage ``` aperm(a, perm, ...) ## Default S3 method: aperm(a, perm = NULL, resize = TRUE, ...) ## S3 method for class 'table' aperm(a, perm = NULL, resize = TRUE, keep.class = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `a` | the array to be transposed. | | `perm` | the subscript permutation vector, usually a permutation of the integers `1:n`, where `n` is the number of dimensions of `a`. When `a` has named dimnames, it can be a character vector of length `n` giving a permutation of those names. The default (used whenever `perm` has zero length) is to reverse the order of the dimensions. | | `resize` | a flag indicating whether the vector should be resized as well as having its elements reordered (default `TRUE`). | | `keep.class` | logical indicating if the result should be of the same class as `a`. | | `...` | potential further arguments of methods. | ### Value A transposed version of array `a`, with subscripts permuted as indicated by the array `perm`. If `resize` is `TRUE`, the array is reshaped as well as having its elements permuted, the `dimnames` are also permuted; if `resize = FALSE` then the returned object has the same dimensions as `a`, and the dimnames are dropped. In each case other attributes are copied from `a`. The function `t` provides a faster and more convenient way of transposing matrices. ### Author(s) Jonathan Rougier did the faster C implementation. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<t>`, to transpose matrices. ### Examples ``` # interchange the first two subscripts on a 3-way array x x <- array(1:24, 2:4) xt <- aperm(x, c(2,1,3)) stopifnot(t(xt[,,2]) == x[,,2], t(xt[,,3]) == x[,,3], t(xt[,,4]) == x[,,4]) UCB <- aperm(UCBAdmissions, c(2,1,3)) UCB[1,,] summary(UCB) # UCB is still a contingency table ```
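Since `perm` may be given as a character vector when `a` has named dimnames (see ‘Arguments’ above), permutations can also be specified by dimension name. A small sketch with an invented array:

```r
A <- array(1:24, dim = c(2, 3, 4),
           dimnames = list(row = c("r1", "r2"),
                           col = c("c1", "c2", "c3"),
                           lay = paste0("l", 1:4)))
B <- aperm(A, c("col", "lay", "row"))      # by dimension name
dim(B)                                     # 3 4 2
stopifnot(identical(B, aperm(A, c(2, 3, 1))))
```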
r None `match.arg` Argument Verification Using Partial Matching --------------------------------------------------------- ### Description `match.arg` matches `arg` against a table of candidate values as specified by `choices`, where `NULL` means to take the first one. ### Usage ``` match.arg(arg, choices, several.ok = FALSE) ``` ### Arguments | | | | --- | --- | | `arg` | a character vector (of length one unless `several.ok` is `TRUE`) or `NULL`. | | `choices` | a character vector of candidate values | | `several.ok` | logical specifying if `arg` should be allowed to have more than one element. | ### Details In the one-argument form `match.arg(arg)`, the choices are obtained from a default setting for the formal argument `arg` of the function from which `match.arg` was called. (Since default argument matching will set `arg` to `choices`, this is allowed as an exception to the ‘length one unless `several.ok` is `TRUE`’ rule, and returns the first element.) Matching is done using `<pmatch>`, so `arg` may be abbreviated. ### Value The unabbreviated version of the exact or unique partial match if there is one; otherwise, an error is signalled if `several.ok` is false, as per default. When `several.ok` is true and more than one element of `arg` has a match, all unabbreviated versions of matches are returned. ### See Also `<pmatch>`, `<match.fun>`, `<match.call>`. ### Examples ``` require(stats) ## Extends the example for 'switch' center <- function(x, type = c("mean", "median", "trimmed")) { type <- match.arg(type) switch(type, mean = mean(x), median = median(x), trimmed = mean(x, trim = .1)) } x <- rcauchy(10) center(x, "t") # Works center(x, "med") # Works try(center(x, "m")) # Error stopifnot(identical(center(x), center(x, "mean")), identical(center(x, NULL), center(x, "mean")) ) ## Allowing more than one match: match.arg(c("gauss", "rect", "ep"), c("gaussian", "epanechnikov", "rectangular", "triangular"), several.ok = TRUE) ``` r None `diag` Matrix Diagonals ------------------------ ### Description Extract or replace the diagonal of a matrix, or construct a diagonal matrix. ### Usage ``` diag(x = 1, nrow, ncol, names = TRUE) diag(x) <- value ``` ### Arguments | | | | --- | --- | | `x` | a matrix, vector or 1D `<array>`, or missing. | | `nrow, ncol` | optional dimensions for the result when `x` is not a matrix. | | `names` | (when `x` is a matrix) logical indicating if the resulting vector, the diagonal of `x`, should inherit `<names>` from `dimnames(x)` if available. | | `value` | either a single value or a vector of length equal to that of the current diagonal. Should be of a mode which can be coerced to that of `x`. | ### Details `diag` has four distinct usages: 1. `x` is a matrix, when it extracts the diagonal. 2. `x` is missing and `nrow` is specified, it returns an identity matrix. 3. `x` is a scalar (length-one vector) and the only argument, it returns a square identity matrix of size given by the scalar. 4. `x` is a ‘numeric’ (`<complex>`, `numeric`, `integer`, `<logical>`, or `<raw>`) vector, either of length at least 2 or there were further arguments. This returns a matrix with the given diagonal and zero off-diagonal entries. It is an error to specify `nrow` or `ncol` in the first case. ### Value If `x` is a matrix then `diag(x)` returns the diagonal of `x`. The resulting vector will have `<names>` if the matrix `x` has matching column and rownames. The replacement form sets the diagonal of the matrix `x` to the given value(s). 
In all other cases the value is a diagonal matrix with `nrow` rows and `ncol` columns (if `ncol` is not given the matrix is square). Here `nrow` is taken from the argument if specified, otherwise inferred from `x`: if that is a vector (or 1D array) of length two or more, then its length is the number of rows, but if it is of length one and neither `nrow` nor `ncol` is specified, `nrow = as.integer(x)`. When a diagonal matrix is returned, the diagonal elements are one except in the fourth case, when `x` gives the diagonal elements: it will be recycled or truncated as needed, but fractional recycling and truncation will give a warning. ### Note Using `diag(x)` can have unexpected effects if `x` is a vector that could be of length one. Use `diag(x, nrow = length(x))` for consistent behaviour. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[upper.tri](lower.tri)`, `<lower.tri>`, `<matrix>`. ### Examples ``` dim(diag(3)) diag(10, 3, 4) # guess what? all(diag(1:3) == {m <- matrix(0,3,3); diag(m) <- 1:3; m}) ## other "numeric"-like diagonal matrices : diag(c(1i,2i)) # complex diag(TRUE, 3) # logical diag(as.raw(1:3)) # raw (D2 <- diag(2:1, 4)); typeof(D2) # "integer" require(stats) ## diag(<var-cov-matrix>) = variances diag(var(M <- cbind(X = 1:5, Y = rnorm(5)))) #-> vector with names "X" and "Y" rownames(M) <- c(colnames(M), rep("", 3)) M; diag(M) # named as well diag(M, names = FALSE) # w/o names ``` r None `gc` Garbage Collection ------------------------ ### Description A call of `gc` causes a garbage collection to take place. `gcinfo` sets a flag so that automatic collection is either silent (`verbose = FALSE`) or prints memory usage statistics (`verbose = TRUE`). ### Usage ``` gc(verbose = getOption("verbose"), reset = FALSE, full = TRUE) gcinfo(verbose) ``` ### Arguments | | | | --- | --- | | `verbose` | logical; if `TRUE`, the garbage collection prints statistics about cons cells and the space allocated for vectors. | | `reset` | logical; if `TRUE` the values for maximum space used are reset to the current values. | | `full` | logical; if `TRUE` a full collection is performed; otherwise only more recently allocated objects may be collected. | ### Details A call of `gc` causes a garbage collection to take place. This will also take place automatically without user intervention, and the primary purpose of calling `gc` is for the report on memory usage. For an accurate report `full = TRUE` should be used. It can be useful to call `gc` after a large object has been removed, as this may prompt **R** to return memory to the operating system. **R** allocates space for vectors in multiples of 8 bytes: hence the report of `"Vcells"`, a relic of an earlier allocator (that used a vector heap). When `gcinfo(TRUE)` is in force, messages are sent to the message connection at each garbage collection of the form ``` Garbage collection 12 = 10+0+2 (level 0) ... 6.4 Mbytes of cons cells used (58%) 2.0 Mbytes of vectors used (32%) ``` Here the last two lines give the current memory usage rounded up to the next 0.1Mb and as a percentage of the current trigger value. The first line gives a breakdown of the number of garbage collections at various levels (for an explanation see the ‘R Internals’ manual). 
### Value `gc` returns a matrix with rows `"Ncells"` (*cons cells*), usually 28 bytes each on 32-bit systems and 56 bytes on 64-bit systems, and `"Vcells"` (*vector cells*, 8 bytes each), and columns `"used"` and `"gc trigger"`, each also interpreted in megabytes (rounded up to the next 0.1Mb). If maxima have been set for either `"Ncells"` or `"Vcells"`, a fifth column is printed giving the current limits in Mb (with `NA` denoting no limit). The final two columns show the maximum space used since the last call to `gc(reset = TRUE)` (or since **R** started). `gcinfo` returns the previous value of the flag. ### See Also The ‘R Internals’ manual. `[Memory](memory)` on **R**'s memory management, and `<gctorture>` if you are an **R** developer. `<gc.time>()` reports *time* used for garbage collection. `<reg.finalizer>` for actions to happen at garbage collection. ### Examples ``` gc() #- do it now gcinfo(TRUE) #-- in the future, show when R does it ## vvvvv use larger to *show* something x <- integer(100000); for(i in 1:18) x <- c(x, i) gcinfo(verbose = FALSE) #-- don't show it anymore gc(TRUE) gc(reset = TRUE) ``` r None `writeLines` Write Lines to a Connection ----------------------------------------- ### Description Write text lines to a connection. ### Usage ``` writeLines(text, con = stdout(), sep = "\n", useBytes = FALSE) ``` ### Arguments | | | | --- | --- | | `text` | A character vector | | `con` | A [connection](connections) object or a character string. | | `sep` | character string. A string to be written to the connection after each line of text. | | `useBytes` | logical. See ‘Details’. | ### Details If the `con` is a character string, the function calls `[file](connections)` to obtain a file connection which is opened for the duration of the function call. If the connection is open it is written from its current position. If it is not open, it is opened for the duration of the call in `"wt"` mode and then closed again. Normally `writeLines` is used with a text-mode connection, and the default separator is converted to the normal separator for that platform (LF on Unix/Linux, CRLF on Windows). For more control, open a binary connection and specify the precise value you want written to the file in `sep`. For even more control, use `[writeChar](readchar)` on a binary connection. `useBytes` is for expert use. Normally (when false) character strings with marked encodings are converted to the current encoding before being passed to the connection (which might do further re-encoding). `useBytes = TRUE` suppresses the re-encoding of marked strings so they are passed byte-by-byte to the connection: this can be useful when strings have already been re-encoded by e.g. `<iconv>`. (It is invoked automatically for strings with marked encoding `"bytes"`.) ### See Also `<connections>`, `[writeChar](readchar)`, `[writeBin](readbin)`, `[readLines](readlines)`, `<cat>` r None `match` Value Matching ----------------------- ### Description `match` returns a vector of the positions of (first) matches of its first argument in its second. `%in%` is a more intuitive interface as a binary operator, which returns a logical vector indicating if there is a match or not for its left operand. ### Usage ``` match(x, table, nomatch = NA_integer_, incomparables = NULL) x %in% table ``` ### Arguments | | | | --- | --- | | `x` | vector or `NULL`: the values to be matched. [Long vectors](longvectors) are supported. | | `table` | vector or `NULL`: the values to be matched against. 
[Long vectors](longvectors) are not supported. | | `nomatch` | the value to be returned in the case when no match is found. Note that it is coerced to `integer`. | | `incomparables` | a vector of values that cannot be matched. Any value in `x` matching a value in this vector is assigned the `nomatch` value. For historical reasons, `FALSE` is equivalent to `NULL`. | ### Details `%in%` is currently defined as `"%in%" <- function(x, table) match(x, table, nomatch = 0) > 0` Factors, raw vectors and lists are converted to character vectors, and then `x` and `table` are coerced to a common type (the later of the two types in **R**'s ordering, logical < integer < numeric < complex < character) before matching. If `incomparables` has positive length it is coerced to the common type. Matching for lists is potentially very slow and best avoided except in simple cases. Exactly what matches what is to some extent a matter of definition. For all types, `NA` matches `NA` and no other value. For real and complex values, `NaN` values are regarded as matching any other `NaN` value, but not matching `NA`, where for complex `x`, real and imaginary parts must match both (unless containing at least one `NA`). Character strings will be compared as byte sequences if any input is marked as `"bytes"`, and otherwise are regarded as equal if they are in different encodings but would agree when translated to UTF-8 (see `[Encoding](encoding)`). That `%in%` never returns `NA` makes it particularly useful in `if` conditions. ### Value A vector of the same length as `x`. `match`: An integer vector giving the position in `table` of the first match if there is a match, otherwise `nomatch`. If `x[i]` is found to equal `table[j]` then the value returned in the `i`-th position of the return value is `j`, for the smallest possible `j`. If no match is found, the value is `nomatch`. `%in%`: A logical vector, indicating if a match was located for each element of `x`: thus the values are `TRUE` or `FALSE` and never `NA`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<pmatch>` and `<charmatch>` for (*partial*) string matching, `<match.arg>`, etc for function argument matching. `[findInterval](findinterval)` similarly returns a vector of positions, but finds numbers within intervals, rather than exact matches. `[is.element](sets)` for an S-compatible equivalent of `%in%`. `<unique>` (and `<duplicated>`) are using the same definitions of “match” or “equality” as `match()`, and these are less strict than `[==](comparison)`, e.g., for `[NA](na)` and `[NaN](is.finite)` in numeric or complex vectors, or for strings with different encodings, see also above. 
### Examples ``` ## The intersection of two sets can be defined via match(): ## Simple version: ## intersect <- function(x, y) y[match(x, y, nomatch = 0)] intersect # the R function in base is slightly more careful intersect(1:10, 7:20) 1:10 %in% c(1,3,5,9) sstr <- c("c","ab","B","bba","c",NA,"@","bla","a","Ba","%") sstr[sstr %in% c(letters, LETTERS)] "%w/o%" <- function(x, y) x[!x %in% y] #-- x without y (1:10) %w/o% c(3,7,12) ## Note that setdiff() is very similar and typically makes more sense: c(1:6,7:2) %w/o% c(3,7,12) # -> keeps duplicates setdiff(c(1:6,7:2), c(3,7,12)) # -> unique values ## Illuminating example about NA matching r <- c(1, NA, NaN) zN <- c(complex(real = NA , imaginary = r ), complex(real = r , imaginary = NA ), complex(real = r , imaginary = NaN), complex(real = NaN, imaginary = r )) zM <- cbind(Re=Re(zN), Im=Im(zN), match = match(zN, zN)) rownames(zM) <- format(zN) zM ##--> many "NA's" (= 1) and the four non-NA's (3 different ones, at 7,9,10) length(zN) # 12 unique(zN) # the "NA" and the 3 different non-NA NaN's stopifnot(identical(unique(zN), zN[c(1, 7,9,10)])) ## very strict equality would have 4 duplicates (of 12): symnum(outer(zN, zN, Vectorize(identical,c("x","y")), FALSE,FALSE,FALSE,FALSE)) ## removing "(very strictly) duplicates", i <- c(5,8,11,12) # we get 8 pairwise non-identicals : Ixy <- outer(zN[-i], zN[-i], Vectorize(identical,c("x","y")), FALSE,FALSE,FALSE,FALSE) stopifnot(identical(Ixy, diag(8) == 1)) ``` r None `char.expand` Expand a String with Respect to a Target Table ------------------------------------------------------------- ### Description Seeks a unique match of its first argument among the elements of its second. If successful, it returns this element; otherwise, it performs an action specified by the third argument. ### Usage ``` char.expand(input, target, nomatch = stop("no match")) ``` ### Arguments | | | | --- | --- | | `input` | a character string to be expanded. | | `target` | a character vector with the values to be matched against. | | `nomatch` | an **R** expression to be evaluated in case expansion was not possible. | ### Details This function is particularly useful when abbreviations are allowed in function arguments, and need to be uniquely expanded with respect to a target table of possible values. ### Value A length-one character vector, one of the elements of `target` (unless `nomatch` is changed to be a non-error, when it can be a zero-length character string). ### See Also `<charmatch>` and `<pmatch>` for performing partial string matching. ### Examples ``` locPars <- c("mean", "median", "mode") char.expand("me", locPars, warning("Could not expand!")) char.expand("mo", locPars) ``` r None `data.frame` Data Frames ------------------------- ### Description The function `data.frame()` creates data frames, tightly coupled collections of variables which share many of the properties of matrices and of lists, used as the fundamental data structure by most of **R**'s modeling software. ### Usage ``` data.frame(..., row.names = NULL, check.rows = FALSE, check.names = TRUE, fix.empty.names = TRUE, stringsAsFactors = FALSE) default.stringsAsFactors() # << this is deprecated ! ``` ### Arguments | | | | --- | --- | | `...` | these arguments are of either the form `value` or `tag = value`. Component names are created based on the tag (if present) or the deparsed argument itself. 
| | `row.names` | `NULL` or a single integer or character string specifying a column to be used as row names, or a character or integer vector giving the row names for the data frame. | | `check.rows` | if `TRUE` then the rows are checked for consistency of length and names. | | `check.names` | logical. If `TRUE` then the names of the variables in the data frame are checked to ensure that they are syntactically valid variable names and are not duplicated. If necessary they are adjusted (by `<make.names>`) so that they are. | | `fix.empty.names` | logical indicating if arguments which are “unnamed” (in the sense of not being formally called as `someName = arg`) get an automatically constructed name or rather name `""`. Needs to be set to `FALSE` even when `check.names` is false if `""` names should be kept. | | `stringsAsFactors` | logical: should character vectors be converted to factors? The ‘factory-fresh’ default has been `TRUE` previously but has been changed to `FALSE` for **R** 4.0.0. | ### Details A data frame is a list of variables of the same number of rows with unique row names, given class `"data.frame"`. If no variables are included, the row names determine the number of rows. The column names should be non-empty, and attempts to use empty names will have unsupported results. Duplicate column names are allowed, but you need to use `check.names = FALSE` for `data.frame` to generate such a data frame. However, not all operations on data frames will preserve duplicated column names: for example matrix-like subsetting will force column names in the result to be unique. `data.frame` converts each of its arguments to a data frame by calling `<as.data.frame>(optional = TRUE)`. As that is a generic function, methods can be written to change the behaviour of arguments according to their classes: **R** comes with many such methods. Character variables passed to `data.frame` are converted to factor columns unless protected by `[I](asis)` or argument `stringsAsFactors` is false. If a list or data frame or matrix is passed to `data.frame` it is as if each component or column had been passed as a separate argument (except for matrices protected by `[I](asis)`). Objects passed to `data.frame` should have the same number of rows, but atomic vectors (see `[is.vector](vector)`), factors and character vectors protected by `[I](asis)` will be recycled a whole number of times if necessary (including as elements of list arguments). If row names are not supplied in the call to `data.frame`, the row names are taken from the first component that has suitable names, for example a named vector or a matrix with rownames or a data frame. (If that component is subsequently recycled, the names are discarded with a warning.) If `row.names` was supplied as `NULL` or no suitable component was found the row names are the integer sequence starting at one (and such row names are considered to be ‘automatic’, and not preserved by `[as.matrix](matrix)`). If row names are supplied of length one and the data frame has a single row, the `row.names` is taken to specify the row names and not a column (by name or number). Names are removed from vector inputs not protected by `[I](asis)`. `default.stringsAsFactors` is a utility that takes `[getOption](options)("stringsAsFactors")` and ensures the result is `TRUE` or `FALSE` (or throws an error if the value is not `NULL`). This function is **deprecated** now and will no longer be available in the future. 
### Value A data frame, a matrix-like structure whose columns may be of differing types (numeric, logical, factor and character and so on). How the names of the data frame are created is complex, and the rest of this paragraph is only the basic story. If the arguments are all named and simple objects (not lists, matrices or data frames) then the argument names give the column names. For an unnamed simple argument, a deparsed version of the argument is used as the name (with an enclosing `I(...)` removed). For a named matrix/list/data frame argument with more than one named column, the names of the columns are the name of the argument followed by a dot and the column name inside the argument: if the argument is unnamed, the argument's column names are used. For a named or unnamed matrix/list/data frame argument that contains a single column, the column name in the result is the column name in the argument. Finally, the names are adjusted to be unique and syntactically valid unless `check.names = FALSE`. ### Note In versions of **R** prior to 2.4.0 `row.names` had to be character: to ensure compatibility with such versions of **R**, supply a character vector as the `row.names` argument. ### References Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `[I](asis)`, `[plot.data.frame](../../graphics/html/plot.dataframe)`, `[print.data.frame](print.dataframe)`, `<row.names>`, `<names>` (for the column names), `[[.data.frame](extract.data.frame)` for subsetting methods and `I(matrix(..))` examples; `[Math.data.frame](groupgeneric)` etc, about *Group* methods for `data.frame`s; `[read.table](../../utils/html/read.table)`, `<make.names>`, `[list2DF](list2df)` for creating data frames from lists of variables. ### Examples ``` L3 <- LETTERS[1:3] fac <- sample(L3, 10, replace = TRUE) (d <- data.frame(x = 1, y = 1:10, fac = fac)) ## The "same" with automatic column names: data.frame(1, 1:10, sample(L3, 10, replace = TRUE)) is.data.frame(d) ## do not convert to factor, using I() : (dd <- cbind(d, char = I(letters[1:10]))) rbind(class = sapply(dd, class), mode = sapply(dd, mode)) stopifnot(1:10 == row.names(d)) # {coercion} (d0 <- d[, FALSE]) # data frame with 0 columns and 10 rows (d.0 <- d[FALSE, ]) # <0 rows> data frame (3 named cols) (d00 <- d0[FALSE, ]) # data frame with 0 columns and 0 rows ```
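A further sketch (illustrative only, not from the original examples) of the duplicate-column-name behaviour noted under ‘Details’:

```
dd <- data.frame(x = 1:2, x = 3:4, check.names = FALSE)
names(dd)                            # both columns keep the name "x"
names(data.frame(x = 1:2, x = 3:4))  # adjusted to "x" and "x.1"
```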
r None `match.fun` Extract a Function Specified by Name ------------------------------------------------- ### Description When called inside functions that take a function as argument, extract the desired function object while avoiding undesired matching to objects of other types. ### Usage ``` match.fun(FUN, descend = TRUE) ``` ### Arguments | | | | --- | --- | | `FUN` | item to match as function: a function, symbol or character string. See ‘Details’. | | `descend` | logical; controls whether to search past non-function objects. | ### Details `match.fun` is not intended to be used at the top level since it will perform matching in the *parent* of the caller. If `FUN` is a function, it is returned. If it is a symbol (for example, enclosed in backquotes) or a character vector of length one, it will be looked up using `get` in the environment of the parent of the caller. If it is of any other mode, an attempt is first made to get the argument to the caller as a symbol (using `substitute` twice), and if that fails, an error is declared. If `descend = TRUE`, `match.fun` will look past non-function objects with the given name; otherwise if `FUN` points to a non-function object then an error is generated. This is used in base functions such as `<apply>`, `<lapply>`, `<outer>`, and `<sweep>`. ### Value A function matching `FUN` or an error is generated. ### Bugs The `descend` argument is a bit of a misnomer and probably not actually needed by anything. It may go away in the future. It is impossible to fully foolproof this. If one `attach`es a list or data frame containing a length-one character vector with the same name as a function, it may be used (although namespaces will help). ### Author(s) Peter Dalgaard and Robert Gentleman, based on an earlier version by Jonathan Rougier. ### See Also `<match.arg>`, `<get>` ### Examples ``` # Same as get("*"): match.fun("*") # Overwrite outer with a vector outer <- 1:5 try(match.fun(outer, descend = FALSE)) #-> Error: not a function match.fun(outer) # finds it anyway is.function(match.fun("outer")) # as well ``` r None `tapply` Apply a Function Over a Ragged Array ---------------------------------------------- ### Description Apply a function to each cell of a ragged array, that is to each (non-empty) group of values given by a unique combination of the levels of certain factors. ### Usage ``` tapply(X, INDEX, FUN = NULL, ..., default = NA, simplify = TRUE) ``` ### Arguments | | | | --- | --- | | `X` | an **R** object for which a `<split>` method exists. Typically vector-like, allowing subsetting with `[[](extract)`. | | `INDEX` | a `<list>` of one or more `<factor>`s, each of same length as `X`. The elements are coerced to factors by `[as.factor](factor)`. | | `FUN` | a function (or name of a function) to be applied, or `NULL`. In the case of functions like `+`, `%*%`, etc., the function name must be backquoted or quoted. If `FUN` is `NULL`, tapply returns a vector which can be used to subscript the multi-way array `tapply` normally produces. | | `...` | optional arguments to `FUN`; see the ‘Note’ section. | | `default` | (only in the case of simplification to an array) the value with which the array is initialized as `<array>(default, dim = ..)`. Before **R** 3.4.0, this was hard coded to `<array>()`'s default `NA`. If it is `NA` (the default), the missing value of the answer type, e.g. `[NA\_real\_](na)`, is chosen (`[as.raw](raw)(0)` for `"raw"`). In a numerical case, it may be set, e.g., to `FUN(integer(0))`, e.g., in the case of `FUN = sum` to `0` or `0L`. 
| | `simplify` | logical; if `FALSE`, `tapply` always returns an array of mode `"list"`; in other words, a `<list>` with a `<dim>` attribute. If `TRUE` (the default), then if `FUN` always returns a scalar, `tapply` returns an array with the mode of the scalar. | ### Details If `FUN` is not `NULL`, it is passed to `<match.fun>`, and hence it can be a function or a symbol or character string naming a function. ### Value When `FUN` is present, `tapply` calls `FUN` for each cell that has any data in it. If `FUN` returns a single atomic value for each such cell (e.g., functions `mean` or `var`) and when `simplify` is `TRUE`, `tapply` returns a multi-way <array> containing the values, and `NA` for the empty cells. The array has the same number of dimensions as `INDEX` has components; the number of levels in a dimension is the number of levels (`nlevels()`) in the corresponding component of `INDEX`. Note that if the return value has a class (e.g., an object of class `"[Date](dates)"`) the class is discarded. `simplify = TRUE` always returns an array, possibly 1-dimensional. If `FUN` does not return a single atomic value, `tapply` returns an array of mode `<list>` whose components are the values of the individual calls to `FUN`, i.e., the result is a list with a `<dim>` attribute. When there is an array answer, its `<dimnames>` are named by the names of `INDEX` and are based on the levels of the grouping factors (possibly after coercion). For a list result, the elements corresponding to empty cells are `NULL`. ### Note Optional arguments to `FUN` supplied by the `...` argument are not divided into cells. It is therefore inappropriate for `FUN` to expect additional arguments with the same length as `X`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also the convenience functions `<by>` and `[aggregate](../../stats/html/aggregate)` (using `tapply`); `<apply>`, `<lapply>` with its versions `[sapply](lapply)` and `<mapply>`. ### Examples ``` require(stats) groups <- as.factor(rbinom(32, n = 5, prob = 0.4)) tapply(groups, groups, length) #- is almost the same as table(groups) ## contingency table from data.frame : array with named dimnames tapply(warpbreaks$breaks, warpbreaks[,-1], sum) tapply(warpbreaks$breaks, warpbreaks[, 3, drop = FALSE], sum) n <- 17; fac <- factor(rep_len(1:3, n), levels = 1:5) table(fac) tapply(1:n, fac, sum) tapply(1:n, fac, sum, default = 0) # maybe more desirable tapply(1:n, fac, sum, simplify = FALSE) tapply(1:n, fac, range) tapply(1:n, fac, quantile) tapply(1:n, fac, length) ## NA's tapply(1:n, fac, length, default = 0) # == table(fac) ## example of ... 
argument: find quarterly means tapply(presidents, cycle(presidents), mean, na.rm = TRUE) ind <- list(c(1, 2, 2), c("A", "A", "B")) table(ind) tapply(1:3, ind) #-> the split vector tapply(1:3, ind, sum) ## Some assertions (not held by all patch proposals): nq <- names(quantile(1:5)) stopifnot( identical(tapply(1:3, ind), c(1L, 2L, 4L)), identical(tapply(1:3, ind, sum), matrix(c(1L, 2L, NA, 3L), 2, dimnames = list(c("1", "2"), c("A", "B")))), identical(tapply(1:n, fac, quantile)[-1], array(list(`2` = structure(c(2, 5.75, 9.5, 13.25, 17), .Names = nq), `3` = structure(c(3, 6, 9, 12, 15), .Names = nq), `4` = NULL, `5` = NULL), dim=4, dimnames=list(as.character(2:5))))) ``` r None `xtfrm` Auxiliary Function for Sorting and Ranking --------------------------------------------------- ### Description A generic auxiliary function that produces a numeric vector which will sort in the same order as `x`. ### Usage ``` xtfrm(x) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object. | ### Details This is a special case of ranking, but as a less general function than `<rank>` is more suitable to be made generic. The default method is similar to `rank(x, ties.method = "min", na.last = "keep")`, so `NA` values are given rank `NA` and all tied values are given equal integer rank. The `<factor>` method extracts the codes. The default method will unclass the object if `[is.numeric](numeric)(x)` is true but otherwise make use of `==` and `>` methods for the class of `x[i]` (for integers `i`), and the `is.na` method for the class of `x`, but might be rather slow when doing so. This is an [internal generic](internalmethods) <primitive>, so S3 or S4 methods can be written for it. ### Value A numeric (usually integer) vector of the same length as `x`. ### See Also `<rank>`, `<sort>`, `<order>`. r None `file.path` Construct Path to File ----------------------------------- ### Description Construct the path to a file from components in a platform-independent way. ### Usage ``` file.path(..., fsep = .Platform$file.sep) ``` ### Arguments | | | | --- | --- | | `...` | character vectors. [Long vectors](longvectors) are not supported. | | `fsep` | the path separator to use (assumed to be ASCII). | ### Details The implementation is designed to be fast (faster than `<paste>`) as this function is used extensively in **R** itself. It can also be used for environment paths such as PATH and R\_LIBS with `fsep = .Platform$path.sep`. Trailing path separators are invalid for Windows file paths apart from ‘/’ and ‘d:/’ (although some functions/utilities do accept them), so a trailing `/` or `\` is removed there. ### Value A character vector of the arguments concatenated term-by-term and separated by `fsep` if all arguments have positive length; otherwise, an empty character vector (unlike `<paste>`). An element of the result will be marked (see `[Encoding](encoding)`) as UTF-8 if run in a UTF-8 locale (when marked inputs are converted to UTF-8) or if a component of the result is marked as UTF-8, or as Latin-1 in a non-Latin-1 locale. ### Note The components are by default separated by `/` (not `\`) on Windows. ### See Also `<basename>`, `[normalizePath](normalizepath)`, `<path.expand>`. r None `tempfile` Create Names for Temporary Files -------------------------------------------- ### Description `tempfile` returns a vector of character strings which can be used as names for temporary files. 
### Usage ``` tempfile(pattern = "file", tmpdir = tempdir(), fileext = "") tempdir(check = FALSE) ``` ### Arguments | | | | --- | --- | | `pattern` | a non-empty character vector giving the initial part of the name. | | `tmpdir` | a non-empty character vector giving the directory name | | `fileext` | a non-empty character vector giving the file extension | | `check` | `<logical>` indicating if `tmpdir()` should be checked and recreated if no longer valid. | ### Details The length of the result is the maximum of the lengths of the three arguments; values of shorter arguments are recycled. The names are very likely to be unique among calls to `tempfile` in an **R** session and across simultaneous **R** sessions (unless `tmpdir` is specified). The filenames are guaranteed not to be currently in use. The file name is made by concatenating the path given by `tmpdir`, the `pattern` string, a random string in hex and a suffix of `fileext`. By default, `tmpdir` will be the directory given by `tempdir()`. This will be a subdirectory of the per-session temporary directory found by the following rule when the **R** session is started. The environment variables TMPDIR, TMP and TEMP are checked in turn and the first found which points to a writable directory is used: if none succeeds ‘/tmp’ is used. The path should not contain spaces. Note that setting any of these environment variables in the **R** session has no effect on `tempdir()`: the per-session temporary directory is created before the interpreter is started. ### Value For `tempfile` a character vector giving the names of possible (temporary) files. Note that no files are generated by `tempfile`. For `tempdir`, the path of the per-session temporary directory. On Windows, both will use a backslash as the path separator. On a Unix-alike, the value will be an absolute path (unless `tmpdir` is set to a relative path), but it need not be canonical (see `[normalizePath](normalizepath)`) and on macOS it often is not. ### Note on parallel use **R** processes forked by functions such as `[mclapply](../../parallel/html/mclapply)` and `[makeForkCluster](../../parallel/html/makecluster)` in package parallel share a per-session temporary directory. Further, the ‘guaranteed not to be currently in use’ applies only at the time of asking, and two children could ask simultaneously. This is circumvented by ensuring that `tempfile` calls in different children try different names. ### Source The final component of `tempdir()` is created by the POSIX system call `mkdtemp`, or if this is not available (e.g. on Windows) a version derived from the source code of GNU `glibc`. It will be of the form ‘RtmpXXXXXX’ where the last 6 characters are replaced in a platform-specific way. POSIX only requires that the replacements be ASCII, which allows `.` (so the value may appear to have a file extension) and [regexp](regex) metacharacters such as `+`. Most commonly the replacements are from the [regexp](regex) pattern `[A-Za-z0-9]`, but `.` *has* been seen. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<unlink>` for deleting files. ### Examples ``` tempfile(c("ab", "a b c")) # give file name with spaces in! 
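## A supplementary sketch (not in the original examples): the three
## arguments are recycled to the length of the longest, see 'Details':
tempfile(c("a", "b"), fileext = ".txt") # two names, one recycled extension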
tempfile("plot", fileext = c(".ps", ".pdf")) tempdir() # works on all platforms with a platform-dependent result ## Show how 'check' is working on some platforms: if(exists("I'm brave") && `I'm brave` && identical(.Platform$OS.type, "unix") && grepl("^/tmp/", tempdir())) { cat("Current tempdir(): ", tempdir(), "\n") cat("Removing it :", file.remove(tempdir()), "; dir.exists(tempdir()):", dir.exists(tempdir()), "\n") cat("and now tempdir(check = TRUE) :", tempdir(check = TRUE),"\n") } ``` r None `Reserved` Reserved Words in R ------------------------------- ### Description The reserved words in **R**'s parser are `[if](control)` `[else](control)` `[repeat](control)` `[while](control)` `<function>` `[for](control)` `in` `[next](control)` `[break](control)` `[TRUE](logical)` `[FALSE](logical)` `[NULL](null)` `[Inf](is.finite)` `[NaN](is.finite)` `[NA](na)` `[NA\_integer\_](na)` `[NA\_real\_](na)` `[NA\_complex\_](na)` `[NA\_character\_](na)` `...` and `..1`, `..2` etc, which are used to refer to arguments passed down from a calling function, see `[...](dots)`. ### Details Reserved words outside <quotes> are always parsed to be references to the objects linked to in the ‘Description’, and hence they are not allowed as syntactic names (see `<make.names>`). They **are** allowed as non-syntactic names, e.g. inside [backtick](quotes) quotes. r None `matmult` Matrix Multiplication -------------------------------- ### Description Multiplies two matrices, if they are conformable. If one argument is a vector, it will be promoted to either a row or column matrix to make the two arguments conformable. If both are vectors of the same length, it will return the inner product (as a matrix). ### Usage ``` x %*% y ``` ### Arguments | | | | --- | --- | | `x, y` | numeric or complex matrices or vectors. | ### Details When a vector is promoted to a matrix, its names are not promoted to row or column names, unlike `[as.matrix](matrix)`. Promotion of a vector to a 1-row or 1-column matrix happens when one of the two choices allows `x` and `y` to get conformable dimensions. This operator is S4 generic but not S3 generic. S4 methods need to be written for a function of two arguments named `x` and `y`. ### Value A double or complex matrix product. Use `<drop>` to remove dimensions which have only one level. ### Note The propagation of NaN/Inf values, precision, and performance of matrix products can be controlled by `<options>("matprod")`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also For matrix *cross*products, `<crossprod>()` and `tcrossprod()` are typically preferable. `<matrix>`, `[Arithmetic](arithmetic)`, `<diag>`. ### Examples ``` x <- 1:4 (z <- x %*% x) # scalar ("inner") product (1 x 1 matrix) drop(z) # as scalar y <- diag(x) z <- matrix(1:12, ncol = 3, nrow = 4) y %*% z y %*% x x %*% z ``` r None `nargs` The Number of Arguments to a Function ---------------------------------------------- ### Description When used inside a function body, `nargs` returns the number of arguments supplied to that function, *including* positional arguments left blank. ### Usage ``` nargs() ``` ### Details The count includes empty (missing) arguments, so that `foo(x,,z)` will be considered to have three arguments (see ‘Examples’). This can occur in rather indirect ways, so for example `x[]` might dispatch a call to ``[.some_method`(x, )` which is considered to have two arguments. This is a <primitive> function. ### References Becker, R. 
A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<args>`, `<formals>` and `[sys.call](sys.parent)`. ### Examples ``` tst <- function(a, b = 3, ...) {nargs()} tst() # 0 tst(clicketyclack) # 1 (even non-existing) tst(c1, a2, rr3) # 3 foo <- function(x, y, z, w) { cat("call was ", deparse(match.call()), "\n", sep = "") nargs() } foo() # 0 foo(, , 3) # 3 foo(z = 3) # 1, even though this is the same call nargs() # not really meaningful ``` r None `allnames` Find All Names in an Expression ------------------------------------------- ### Description Return a character vector containing all the names which occur in an expression or call. ### Usage ``` all.names(expr, functions = TRUE, max.names = -1L, unique = FALSE) all.vars(expr, functions = FALSE, max.names = -1L, unique = TRUE) ``` ### Arguments | | | | --- | --- | | `expr` | an <expression> or <call> from which the names are to be extracted. | | `functions` | a logical value indicating whether function names should be included in the result. | | `max.names` | the maximum number of names to be returned. `-1` indicates no limit (other than vector size limits). | | `unique` | a logical value which indicates whether duplicate names should be removed from the value. | ### Details These functions differ only in the default values for their arguments. ### Value A character vector with the extracted names. ### See Also `<substitute>` to replace symbols with values in an expression. ### Examples ``` all.names(expression(sin(x+y))) all.names(quote(sin(x+y))) # or a call all.vars(expression(sin(x+y))) ``` r None `contributors` R Project Contributors -------------------------------------- ### Description The **R** Who-is-who, describing who made significant contributions to the development of **R**. ### Usage ``` contributors() ``` r None `comment` Query or Set a "comment" Attribute --------------------------------------------- ### Description These functions set and query a *comment* attribute for any **R** objects. This is typically useful for `<data.frame>`s or model fits. Contrary to other `<attributes>`, the `comment` is not printed (by `<print>` or `<print.default>`). Assigning `NULL` or a zero-length character vector removes the comment. ### Usage ``` comment(x) comment(x) <- value ``` ### Arguments | | | | --- | --- | | `x` | any **R** object | | `value` | a `character` vector, or `NULL`. | ### See Also `<attributes>` and `<attr>` for other attributes. ### Examples ``` x <- matrix(1:12, 3, 4) comment(x) <- c("This is my very important data from experiment #0234", "Jun 5, 1998") x comment(x) ``` r None `which` Which indices are TRUE? -------------------------------- ### Description Give the `TRUE` indices of a logical object, allowing for array indices. ### Usage ``` which(x, arr.ind = FALSE, useNames = TRUE) arrayInd(ind, .dim, .dimnames = NULL, useNames = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a `<logical>` vector or array. `[NA](na)`s are allowed and omitted (treated as if `FALSE`). | | `arr.ind` | logical; should **arr**ay **ind**ices be returned when `x` is an array? | | `ind` | integer-valued index vector, as resulting from `which(x)`. | | `.dim` | `<dim>(.)` integer vector | | `.dimnames` | optional list of character `<dimnames>(.)`. If `useNames` is true, to be used for constructing dimnames for `arrayInd()` (and hence, `which(*, arr.ind=TRUE)`). If `<names>(.dimnames)` is not empty, these are used as column names. `.dimnames[[1]]` is used as row names. 
| | `useNames` | logical indicating if the value of `arrayInd()` should have (non-null) dimnames at all. | ### Value If `arr.ind == FALSE` (the default), an integer vector, or a double vector if `x` is a *[long vector](longvectors)*, with `length` equal to `sum(x)`, i.e., to the number of `TRUE`s in `x`. Basically, the result is `(1:length(x))[x]` in typical cases; more generally, including when `x` has `[NA](na)`'s, `which(x)` is `seq_along(x)[!is.na(x) & x]` plus `<names>` when `x` has. If `arr.ind == TRUE` and `x` is an `<array>` (has a `<dim>` attribute), the result is `arrayInd(which(x), dim(x), dimnames(x))`, namely a matrix whose rows each are the indices of one element of `x`; see Examples below. ### Note Unlike most other base **R** functions this does not coerce `x` to logical: only arguments with `<typeof>` logical are accepted and others give an error. ### Author(s) Werner Stahel and Peter Holzer (ETH Zurich) proposed the `arr.ind` option. ### See Also `[Logic](logic)`, `<which.min>` for the index of the minimum or maximum, and `<match>` for the first index of an element in a vector, i.e., for a scalar `a`, `match(a, x)` is equivalent to `min(which(x == a))` but much more efficient. ### Examples ``` which(LETTERS == "R") which(ll <- c(TRUE, FALSE, TRUE, NA, FALSE, FALSE, TRUE)) #> 1 3 7 names(ll) <- letters[seq(ll)] which(ll) which((1:12)%%2 == 0) # which are even? which(1:10 > 3, arr.ind = TRUE) ( m <- matrix(1:12, 3, 4) ) div.3 <- m %% 3 == 0 which(div.3) which(div.3, arr.ind = TRUE) rownames(m) <- paste("Case", 1:3, sep = "_") which(m %% 5 == 0, arr.ind = TRUE) dim(m) <- c(2, 2, 3); m which(div.3, arr.ind = FALSE) which(div.3, arr.ind = TRUE) vm <- c(m) dim(vm) <- length(vm) #-- funny thing with length(dim(...)) == 1 which(div.3, arr.ind = TRUE) ```
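A small sketch (not part of the original examples) checking the identity stated under ‘Value’:

```
x <- c(TRUE, NA, FALSE, TRUE)
stopifnot(identical(which(x), seq_along(x)[!is.na(x) & x]))
```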
r None `dim` Dimensions of an Object ------------------------------ ### Description Retrieve or set the dimension of an object. ### Usage ``` dim(x) dim(x) <- value ``` ### Arguments | | | | --- | --- | | `x` | an **R** object, for example a matrix, array or data frame. | | `value` | For the default method, either `NULL` or a numeric vector, which is coerced to integer (by truncation). | ### Details The functions `dim` and `dim<-` are [internal generic](internalmethods) <primitive> functions. `dim` has a method for `<data.frame>`s, which returns the lengths of the `row.names` attribute of `x` and of `x` (as the numbers of rows and columns respectively). ### Value For an array (and hence in particular, for a matrix) `dim` retrieves the `dim` attribute of the object. It is `NULL` or a vector of mode `<integer>`. The replacement method changes the `"dim"` attribute (provided the new value is compatible) and removes any `"dimnames"` *and* `"names"` attributes. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[ncol](nrow)`, `<nrow>` and `<dimnames>`. ### Examples ``` x <- 1:12 ; dim(x) <- c(3,4) x # simple versions of nrow and ncol could be defined as follows nrow0 <- function(x) dim(x)[1] ncol0 <- function(x) dim(x)[2] ``` r None `system.file` Find Names of R System Files ------------------------------------------- ### Description Finds the full file names of files in packages etc. ### Usage ``` system.file(..., package = "base", lib.loc = NULL, mustWork = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | character vectors, specifying subdirectory and file(s) within some package. The default, none, returns the root of the package. Wildcards are not supported. | | `package` | a character string with the name of a single package. An error occurs if more than one package name is given. | | `lib.loc` | a character vector with path names of **R** libraries. See ‘Details’ for the meaning of the default value of `NULL`. | | `mustWork` | logical. If `TRUE`, an error is given if there are no matching files. | ### Details This checks the existence of the specified files with `[file.exists](files)`. So file paths are only returned if there are sufficient permissions to establish their existence. The unnamed arguments in `...` are usually character strings, but if character vectors they are recycled to the same length. This uses `<find.package>` to find the package, and hence with the default `lib.loc = NULL` looks first for attached packages then in each library listed in `[.libPaths](libpaths)()`. Note that if a namespace is loaded but the package is not attached, this will look only on `[.libPaths](libpaths)()`. ### Value A character vector of positive length, containing the file paths that matched `...`, or the empty string, `""`, if none matched (unless `mustWork = TRUE`). If matching the root of a package, there is no trailing separator. `system.file()` with no arguments gives the root of the base package. ### See Also `[R.home](rhome)` for the root directory of the **R** installation, `<list.files>`. `[Sys.glob](sys.glob)` to find paths via wildcards. ### Examples ``` system.file() # The root of the 'base' package system.file(package = "stats") # The root of package 'stats' system.file("INDEX") system.file("help", "AnIndex", package = "splines") ``` r None `array` Multi-way Arrays ------------------------- ### Description Creates or tests for arrays. 
### Usage ``` array(data = NA, dim = length(data), dimnames = NULL) as.array(x, ...) is.array(x) ``` ### Arguments | | | | --- | --- | | `data` | a vector (including a list or `<expression>` vector) giving data to fill the array. Non-atomic classed objects are coerced by `[as.vector](vector)`. | | `dim` | the dim attribute for the array to be created, that is an integer vector of length one or more giving the maximal indices in each dimension. | | `dimnames` | either `NULL` or the names for the dimensions. This must be a list (or it will be ignored) with one component for each dimension, either `NULL` or a character vector of the length given by `dim` for that dimension. The list can be named, and the list names will be used as names for the dimensions. If the list is shorter than the number of dimensions, it is extended by `NULL`s to the length required. | | `x` | an **R** object. | | `...` | additional arguments to be passed to or from methods. | ### Details An array in **R** can have one, two or more dimensions. It is simply a vector which is stored with additional <attributes> giving the dimensions (attribute `"dim"`) and optionally names for those dimensions (attribute `"dimnames"`). A two-dimensional array is the same thing as a `<matrix>`. One-dimensional arrays often look like vectors, but may be handled differently by some functions: `[str](../../utils/html/str)` does distinguish them in recent versions of **R**. The `"dim"` attribute is an integer vector of length one or more containing non-negative values: the product of the values must match the length of the array. The `"dimnames"` attribute is optional: if present it is a list with one component for each dimension, either `NULL` or a character vector of the length given by the element of the `"dim"` attribute for that dimension. `is.array` is a <primitive> function. For a list array, the `print` method prints entries of length not one in the form integer,7 indicating the type and length. ### Value `array` returns an array with the extents specified in `dim` and naming information in `dimnames`. The values in `data` are taken to be those in the array with the leftmost subscript moving fastest. If there are too few elements in `data` to fill the array, then the elements in `data` are recycled. If `data` has length zero, `NA` of an appropriate type is used for atomic vectors (`0` for raw vectors) and `NULL` for lists. Unlike `<matrix>`, `array` does not currently remove any attributes left by `as.vector` from a classed list `data`, so can return a list array with a class attribute. `as.array` is a generic function for coercing to arrays. The default method does so by attaching a `<dim>` attribute to it. It also attaches `<dimnames>` if `x` has `<names>`. The sole purpose of this is to make it possible to access the `dim[names]` attribute at a later time. `is.array` returns `TRUE` or `FALSE` depending on whether its argument is an array (i.e., has a `dim` attribute of positive length) or not. It is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). ### Note `is.array` is a <primitive> function. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<aperm>`, `<matrix>`, `<dim>`, `<dimnames>`. 
### Examples ``` dim(as.array(letters)) array(1:3, c(2,4)) # recycle 1:3 "2 2/3 times" # [,1] [,2] [,3] [,4] #[1,] 1 3 2 1 #[2,] 2 1 3 2 ``` r None `Log` Logarithms and Exponentials ---------------------------------- ### Description `log` computes logarithms, by default natural logarithms, `log10` computes common (i.e., base 10) logarithms, and `log2` computes binary (i.e., base 2) logarithms. The general form `log(x, base)` computes logarithms with base `base`. `log1p(x)` computes *log(1+x)* accurately also for *|x| << 1*. `exp` computes the exponential function. `expm1(x)` computes *exp(x) - 1* accurately also for *|x| << 1*. ### Usage ``` log(x, base = exp(1)) logb(x, base = exp(1)) log10(x) log2(x) log1p(x) exp(x) expm1(x) ``` ### Arguments | | | | --- | --- | | `x` | a numeric or complex vector. | | `base` | a positive or complex number: the base with respect to which logarithms are computed. Defaults to *e*=`exp(1)`. | ### Details All except `logb` are generic functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic. `log10` and `log2` are only convenience wrappers, but logs to bases 10 and 2 (whether computed *via* `log` or the wrappers) will be computed more efficiently and accurately where supported by the OS. Methods can be set for them individually (and otherwise methods for `log` will be used). `logb` is a wrapper for `log` for compatibility with S. If (S3 or S4) methods are set for `log` they will be dispatched. Do not set S4 methods on `logb` itself. All except `log` are <primitive> functions. ### Value A vector of the same length as `x` containing the transformed values. `log(0)` gives `-Inf`, and `log(x)` for negative values of `x` is `NaN`. `exp(-Inf)` is `0`. For complex inputs to the log functions, the value is a complex number with imaginary part in the range *[-pi, pi]*: which end of the range is used might be platform-specific. ### S4 methods `exp`, `expm1`, `log`, `log10`, `log2` and `log1p` are S4 generic and are members of the `[Math](../../methods/html/s4groupgeneric)` group generic. Note that this means that the S4 generic for `log` has a signature with only one argument, `x`, but that `base` can be passed to methods (but will not be used for method selection). On the other hand, if you only set a method for the `Math` group generic then `base` argument of `log` will be ignored for your class. ### Source `log1p` and `expm1` may be taken from the operating system, but if not available there then they are based on the Fortran subroutine `dlnrel` by W. Fullerton of Los Alamos Scientific Laboratory (see <https://www.netlib.org/slatec/fnlib/dlnrel.f>) and (for small x) a single Newton step for the solution of `log1p(y) = x` respectively. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (for `log`, `log10` and `exp`.) Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. (for `logb`.) ### See Also `[Trig](trig)`, `[sqrt](mathfun)`, `[Arithmetic](arithmetic)`. ### Examples ``` log(exp(3)) log10(1e7) # = 7 x <- 10^-(1+2*1:9) cbind(x, log(1+x), log1p(x), exp(x)-1, expm1(x)) ``` r None `rawConversion` Convert to or from (Bit/Packed) Raw Vectors ------------------------------------------------------------ ### Description Conversion to and from and manipulation of objects of type `"raw"`, both used as bits or “packed” 8 bits. 
### Usage ``` charToRaw(x) rawToChar(x, multiple = FALSE) rawShift(x, n) rawToBits(x) intToBits(x) packBits(x, type = c("raw", "integer", "double")) numToInts(x) numToBits(x) ``` ### Arguments | | | | --- | --- | | `x` | object to be converted or shifted. | | `multiple` | logical: should the conversion be to a single character string or multiple individual characters? | | `n` | the number of bits to shift. Positive numbers shift right and negative numbers shift left: allowed values are `-8 ... 8`. | | `type` | the result type, partially matched. | ### Details `packBits` accepts raw, integer or logical inputs, the last two without any NAs. `numToBits(.)` and `packBits(., type="double")` are *inverse* functions of each other, see also the examples. Note that ‘bytes’ are not necessarily the same as characters, e.g. in UTF-8 locales. ### Value `charToRaw` converts a length-one character string to raw bytes. It does so without taking into account any declared encoding (see `[Encoding](encoding)`). `rawToChar` converts raw bytes either to a single character string or a character vector of single bytes (with `""` for `0`). (Note that a single character string could contain embedded nuls; only trailing nuls are allowed and will be removed.) In either case it is possible to create a result which is invalid in a multibyte locale, e.g. one using UTF-8. [Long vectors](longvectors) are allowed if `multiple` is true. `rawShift(x, n)` shifts the bits in `x` by `n` positions to the right, see the argument `n`, above. `rawToBits` returns a raw vector of 8 times the length of a raw vector with entries 0 or 1. `intToBits` returns a raw vector of 32 times the length of an integer vector with entries 0 or 1. (Non-integral numeric values are truncated to integers.) In both cases the unpacking is least-significant bit first. `packBits` packs its input (using only the lowest bit for raw or integer vectors) least-significant bit first to a raw, integer or double (“numeric”) vector. `numToInts()` and `numToBits()` split `<double>` precision numeric vectors either into two `<integer>`s each or into 64 bits each, stored as `raw`. In both cases the unpacking is least-significant element first. ### Examples ``` x <- "A test string" (y <- charToRaw(x)) is.vector(y) # TRUE rawToChar(y) rawToChar(y, multiple = TRUE) (xx <- c(y, charToRaw("&"), charToRaw(" more"))) rawToChar(xx) rawShift(y, 1) rawShift(y,-2) rawToBits(y) showBits <- function(r) stats::symnum(as.logical(rawToBits(r))) z <- as.raw(5) z ; showBits(z) showBits(rawShift(z, 1)) # shift to right showBits(rawShift(z, 2)) showBits(z) showBits(rawShift(z, -1)) # shift to left showBits(rawShift(z, -2)) # .. showBits(rawShift(z, -3)) # shifted off entirely packBits(as.raw(0:31)) i <- -2:3 stopifnot(exprs = { identical(i, packBits(intToBits(i), "integer")) identical(packBits( 0:31) , packBits(as.raw(0:31))) }) str(pBi <- packBits(intToBits(i))) data.frame(B = matrix(pBi, nrow=6, byrow=TRUE), hex = format(as.hexmode(i)), i) ## Look at internal bit representation of ... ## ... of integers : bitI <- function(x) vapply(as.integer(x), function(x) { b <- substr(as.character(rev(intToBits(x))), 2L, 2L) paste0(c(b[1L], " ", b[2:32]), collapse = "") }, "") print(bitI(-8:8), width = 35, quote = FALSE) ## ... 
of double precision numbers in format 'sign exp | mantissa' ## where 1 bit sign 1 <==> "-"; ## 11 bit exp is the base-2 exponent biased by 2^10 - 1 (1023) ## 52 bit mantissa is without the implicit leading '1' # ## Bit representation [ sign | exponent | mantissa ] of double prec numbers : bitC <- function(x) noquote(vapply(as.double(x), function(x) { # split one double b <- substr(as.character(rev(numToBits(x))), 2L, 2L) paste0(c(b[1L], " ", b[2:12], " | ", b[13:64]), collapse = "") }, "")) bitC(17) bitC(c(-1,0,1)) bitC(2^(-2:5)) bitC(1+2^-(1:53))# from 0.5 converge to 1 ### numToBits(.) <==> intToBits(numToInts(.)) : d2bI <- function(x) vapply(as.double(x), function(x) intToBits(numToInts(x)), raw(64L)) d2b <- function(x) vapply(as.double(x), function(x) numToBits(x) , raw(64L)) set.seed(1) x <- c(sort(rt(2048, df=1.5)), 2^(-10:10), 1+2^-(1:53)) str(bx <- d2b(x)) # a 64 x 2122 raw matrix stopifnot( identical(bx, d2bI(x)) ) ## Show that packBits(*, "double") is the inverse of numToBits() : packBits(numToBits(pi), type="double") bitC(2050) b <- numToBits(2050) identical(b, numToBits(packBits(b, type="double"))) pbx <- apply(bx, 2, packBits, type="double") stopifnot( identical(pbx, x)) ``` r None `cumsum` Cumulative Sums, Products, and Extremes ------------------------------------------------- ### Description Returns a vector whose elements are the cumulative sums, products, minima or maxima of the elements of the argument. ### Usage ``` cumsum(x) cumprod(x) cummax(x) cummin(x) ``` ### Arguments | | | | --- | --- | | `x` | a numeric or complex (not `cummin` or `cummax`) object, or an object that can be coerced to one of these. | ### Details These are generic functions: methods can be defined for them individually or via the `[Math](groupgeneric)` group generic. ### Value A vector of the same length and type as `x` (after coercion), except that `cumprod` returns a numeric vector for integer input (for consistency with `*`). Names are preserved. An `NA` value in `x` causes the corresponding and following elements of the return value to be `NA`, as does integer overflow in `cumsum` (with a warning). ### S4 methods `cumsum` and `cumprod` are S4 generic functions: methods can be defined for them individually or via the `[Math](../../methods/html/s4groupgeneric)` group generic. `cummax` and `cummin` are individually S4 generic functions. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (`cumsum` only.) ### Examples ``` cumsum(1:10) cumprod(1:10) cummin(c(3:1, 2:0, 4:2)) cummax(c(3:1, 2:0, 4:2)) ``` r None `grepRaw` Pattern Matching for Raw Vectors ------------------------------------------- ### Description `grepRaw` searches for substring `pattern` matches within a raw vector `x`. ### Usage ``` grepRaw(pattern, x, offset = 1L, ignore.case = FALSE, value = FALSE, fixed = FALSE, all = FALSE, invert = FALSE) ``` ### Arguments | | | | --- | --- | | `pattern` | raw vector containing a [regular expression](regex) (or fixed pattern for `fixed = TRUE`) to be matched in the given raw vector. Coerced by `[charToRaw](rawconversion)` to a character string if possible. | | `x` | a raw vector where matches are sought, or an object which can be coerced by `charToRaw` to a raw vector. [Long vectors](longvectors) are not supported. | | `ignore.case` | if `FALSE`, the pattern matching is *case sensitive* and if `TRUE`, case is ignored during matching. | | `offset` | An integer specifying the offset from which the search should start. 
Must be positive. The beginning of line is defined to be at that offset so `"^"` will match there. | | `value` | logical. Determines the return value: see ‘Value’. | | `fixed` | logical. If `TRUE`, `pattern` is a pattern to be matched as is. | | `all` | logical. If `TRUE` all matches are returned, otherwise just the first one. | | `invert` | logical. If `TRUE` return indices or values for elements that do *not* match. Ignored (with a warning) unless `value = TRUE`. | ### Details Unlike `<grep>`, this function seeks matching patterns within the raw vector `x`. This has implications especially in the `all = TRUE` case, e.g., patterns matching empty strings are inherently infinite and thus may lead to unexpected results. The argument `invert` is interpreted as asking to return the complement of the match, which is only meaningful for `value = TRUE`. Argument `offset` determines the start of the search, not of the complement. Note that `invert = TRUE` with `all = TRUE` will split `x` into pieces delimited by the pattern including leading and trailing empty strings (consequently the use of regular expressions with `"^"` or `"$"` in that case may lead to less intuitive results). Some combinations of arguments such as `fixed = TRUE` with `value = TRUE` are supported but are less meaningful. ### Value `grepRaw(value = FALSE)` returns an integer vector of the offsets at which matches have occurred. If `all = FALSE` then it will be either of length zero (no match) or length one (first matching position). `grepRaw(value = TRUE, all = FALSE)` returns a raw vector which is either empty (no match) or the matched part of `x`. `grepRaw(value = TRUE, all = TRUE)` returns a (potentially empty) list of raw vectors corresponding to the matched parts. ### Source The TRE library of Ville Laurikari (<https://github.com/laurikari/tre/>) is used except for `fixed = TRUE`. ### See Also [regular expression](regex) (aka `[regexp](regex)`) for the details of the pattern specification. `<grep>` for matching character vectors. ### Examples ``` grepRaw("no match", "textText") # integer(0): no match grepRaw("adf", "adadfadfdfadadf") # 3 - the first match grepRaw("adf", "adadfadfdfadadf", all=TRUE, fixed=TRUE) ## [1] 3 6 13 -- three matches ``` r None `Memory` Memory Available for Data Storage ------------------------------------------- ### Description How **R** manages its workspace. ### Details **R** has a variable-sized workspace. There are (rarely-used) command-line options to control its minimum size, but no longer any to control the maximum size. **R** maintains separate areas for fixed and variable sized objects. The first of these is allocated as an array of *cons cells* (Lisp programmers will know what they are, others may think of them as the building blocks of the language itself, parse trees, etc.), and the second are thrown on a *heap* of ‘Vcells’ of 8 bytes each. Each cons cell occupies 28 bytes on a 32-bit build of **R**, (usually) 56 bytes on a 64-bit build. The default values are (currently) an initial setting of 350k cons cells and 6Mb of vector heap. Note that the areas are not actually allocated initially: rather these values are the sizes for triggering garbage collection. These values can be set by the command line options --min-nsize and --min-vsize (or if they are not used, the environment variables R\_NSIZE and R\_VSIZE) when **R** is started. Thereafter **R** will grow or shrink the areas depending on usage, never decreasing below the initial values. 
The maximal vector heap size can be set with the environment variable R\_MAX\_VSIZE. How much time **R** spends in the garbage collector will depend on these initial settings and on the trade-off the memory manager makes, when memory fills up, between collecting garbage to free up unused memory and growing these areas. The strategy used for growth can be specified by setting the environment variable R\_GC\_MEM\_GROW to an integer value between 0 and 3. This variable is read at start-up. Higher values grow the heap more aggressively, thus reducing garbage collection time but using more memory. You can find out the current memory consumption (the heap and cons cells used as numbers and megabytes) by typing `<gc>()` at the **R** prompt. Note that following `[gcinfo](gc)(TRUE)`, automatic garbage collection always prints memory use statistics. The command-line option --max-ppsize controls the maximum size of the pointer protection stack. This defaults to 50000, but can be increased to allow deep recursion or large and complicated calculations to be done. *Note* that part of the garbage collection process goes through the full reserved pointer protection stack and hence becomes slower when the size is increased. Currently the maximum value accepted is 500000. ### See Also *An Introduction to R* for more command-line options. `[Memory-limits](memory-limits)` for the design limitations. `<gc>` for information on the garbage collector and total memory usage, `[object.size](../../utils/html/object.size)(a)` for the (approximate) size of **R** object `a`. `<memory.profile>` for profiling the usage of cons cells.
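A brief sketch of inspecting memory use as described above (the numbers reported are platform- and session-dependent):

```
gc()                # current 'Ncells' (cons cells) and 'Vcells' (vector heap) usage
old <- gcinfo(TRUE) # print memory statistics at each automatic collection
gcinfo(old)         # ... and restore the previous setting
```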
r None `grep` Pattern Matching and Replacement ---------------------------------------- ### Description `grep`, `grepl`, `regexpr`, `gregexpr`, `regexec` and `gregexec` search for matches to argument `pattern` within each element of a character vector: they differ in the format of and amount of detail in the results. `sub` and `gsub` perform replacement of the first and all matches respectively. ### Usage ``` grep(pattern, x, ignore.case = FALSE, perl = FALSE, value = FALSE, fixed = FALSE, useBytes = FALSE, invert = FALSE) grepl(pattern, x, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) sub(pattern, replacement, x, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) gsub(pattern, replacement, x, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) regexpr(pattern, text, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) gregexpr(pattern, text, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) regexec(pattern, text, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) gregexec(pattern, text, ignore.case = FALSE, perl = FALSE, fixed = FALSE, useBytes = FALSE) ``` ### Arguments | | | | --- | --- | | `pattern` | character string containing a [regular expression](regex) (or character string for `fixed = TRUE`) to be matched in the given character vector. Coerced by `[as.character](character)` to a character string if possible. If a character vector of length 2 or more is supplied, the first element is used with a warning. Missing values are allowed except for `regexpr`, `gregexpr` and `regexec`. | | `x, text` | a character vector where matches are sought, or an object which can be coerced by `as.character` to a character vector. [Long vectors](longvectors) are supported. | | `ignore.case` | if `FALSE`, the pattern matching is *case sensitive* and if `TRUE`, case is ignored during matching. | | `perl` | logical. Should Perl-compatible regexps be used? | | `value` | if `FALSE`, a vector containing the (`integer`) indices of the matches determined by `grep` is returned, and if `TRUE`, a vector containing the matching elements themselves is returned. | | `fixed` | logical. If `TRUE`, `pattern` is a string to be matched as is. Overrides all conflicting arguments. | | `useBytes` | logical. If `TRUE` the matching is done byte-by-byte rather than character-by-character. See ‘Details’. | | `invert` | logical. If `TRUE` return indices or values for elements that do *not* match. | | `replacement` | a replacement for matched pattern in `sub` and `gsub`. Coerced to character if possible. For `fixed = FALSE` this can include backreferences `"\1"` to `"\9"` to parenthesized subexpressions of `pattern`. For `perl = TRUE` only, it can also contain `"\U"` or `"\L"` to convert the rest of the replacement to upper or lower case and `"\E"` to end case conversion. If a character vector of length 2 or more is supplied, the first element is used with a warning. If `NA`, all elements in the result corresponding to matches will be set to `NA`. | ### Details Arguments which should be character strings or character vectors are coerced to character if possible. Each of these functions operates in one of three modes: 1. `fixed = TRUE`: use exact matching. 2. `perl = TRUE`: use Perl-style regular expressions. 3. `fixed = FALSE, perl = FALSE`: use POSIX 1003.2 extended regular expressions (the default). See the help pages on [regular expression](regex) for details of the different types of regular expressions. 
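A small sketch contrasting the three modes (the example strings are illustrative, not from the original page):

```
x <- c("a.b", "axb")
grep(".", x, fixed = TRUE)       # 1 : "." matched literally
grep("a.b", x)                   # 1 2 : POSIX ERE, "." matches any character
grepl("a(?=x)", x, perl = TRUE)  # FALSE TRUE : Perl look-ahead
```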
The two `*sub` functions differ only in that `sub` replaces only the first occurrence of a `pattern` whereas `gsub` replaces all occurrences. If `replacement` contains backreferences which are not defined in `pattern` the result is undefined (but most often the backreference is taken to be `""`). For `regexpr`, `gregexpr`, `regexec` and `gregexec` it is an error for `pattern` to be `NA`, otherwise `NA` is permitted and gives an `NA` match. Both `grep` and `grepl` take missing values in `x` as not matching a non-missing `pattern`. The main effect of `useBytes = TRUE` is to avoid errors/warnings about invalid inputs and spurious matches in multibyte locales, but for `regexpr` it changes the interpretation of the output. It inhibits the conversion of inputs with marked encodings, and is forced if any input is found which is marked as `"bytes"` (see `[Encoding](encoding)`). Caseless matching does not make much sense for bytes in a multibyte locale, and you should expect it only to work for ASCII characters if `useBytes = TRUE`. `regexpr` and `gregexpr` with `perl = TRUE` allow Python-style named captures, but not for *long vector* inputs. Invalid inputs in the current locale are warned about up to 5 times. Caseless matching with `perl = TRUE` for non-ASCII characters depends on the PCRE library being compiled with ‘Unicode property support’, which PCRE2 is by default. ### Value `grep(value = FALSE)` returns a vector of the indices of the elements of `x` that yielded a match (or not, for `invert = TRUE`). This will be an integer vector unless the input is a *[long vector](longvectors)*, when it will be a double vector. `grep(value = TRUE)` returns a character vector containing the selected elements of `x` (after coercion, preserving names but no other attributes). `grepl` returns a logical vector (match or not for each element of `x`). `sub` and `gsub` return a character vector of the same length and with the same attributes as `x` (after possible coercion to character). Elements of character vectors `x` which are not substituted will be returned unchanged (including any declared encoding). If `useBytes = FALSE` a non-ASCII substituted result will often be in UTF-8 with a marked encoding (e.g., if there is a UTF-8 input, and in a multibyte locale unless `fixed = TRUE`). Such strings can be re-encoded by `[enc2native](encoding)`. `regexpr` returns an integer vector of the same length as `text` giving the starting position of the first match or *-1* if there is none, with attribute `"match.length"`, an integer vector giving the length of the matched text (or *-1* for no match). The match positions and lengths are in characters unless `useBytes = TRUE` is used, when they are in bytes (as they are for ASCII-only matching: in either case an attribute `useBytes` with value `TRUE` is set on the result). If named capture is used there are further attributes `"capture.start"`, `"capture.length"` and `"capture.names"`. `gregexpr` returns a list of the same length as `text` each element of which is of the same form as the return value for `regexpr`, except that the starting positions of every (disjoint) match are given. `regexec` returns a list of the same length as `text` each element of which is either *-1* if there is no match, or a sequence of integers with the starting positions of the match and all substrings corresponding to parenthesized subexpressions of `pattern`, with attribute `"match.length"` a vector giving the lengths of the matches (or *-1* for no match). 
The interpretation of positions and length and the attributes follows `regexpr`. `gregexec` returns the same as `regexec`, except that to accommodate multiple matches per element of `text`, the integer sequences for each match are made into columns of a matrix, with one matrix per element of `text` with matches. Where matching failed because of resource limits (especially for `perl = TRUE`) this is regarded as a non-match, usually with a warning. ### Warning The POSIX 1003.2 mode of `gsub` and `gregexpr` does not work correctly with repeated word-boundaries (e.g., `pattern = "\b"`). Use `perl = TRUE` for such matches (but that may not work as expected with non-ASCII inputs, as the meaning of ‘word’ is system-dependent). ### Performance considerations If you are doing a lot of regular expression matching, including on very long strings, you will want to consider the options used. Generally `perl = TRUE` will be faster than the default regular expression engine, and `fixed = TRUE` faster still (especially when each pattern is matched only a few times). If you are working in a single-byte locale and have marked UTF-8 strings that are representable in that locale, convert them first as just one UTF-8 string will force all the matching to be done in Unicode, which attracts a penalty of around *3x* for the default POSIX 1003.2 mode. If you can make use of `useBytes = TRUE`, the strings will not be checked before matching, and the actual matching will be faster. Often byte-based matching suffices in a UTF-8 locale since byte patterns of one character never match part of another. Character ranges may produce unexpected results. PCRE-based matching by default used to put additional effort into ‘studying’ the compiled pattern when `x`/`text` has length 10 or more. That study may use the PCRE JIT compiler on platforms where it is available (see `<pcre_config>`). As from PCRE2 (PCRE version >= 10.00 as reported by `[extSoftVersion](extsoftversion)`), there is no study phase, but the patterns are optimized automatically when possible, and PCRE JIT is used when enabled. The details are controlled by `<options>` `PCRE_study` and `PCRE_use_JIT`. (Some timing comparisons can be seen by running file ‘tests/PCRE.R’ in the **R** sources (and perhaps installed).) People working with PCRE and very long strings can adjust the maximum size of the JIT stack by setting environment variable R\_PCRE\_JIT\_STACK\_MAXSIZE before JIT is used to a value between `1` and `1000` in MB: the default is `64`. When JIT is not used with PCRE version < 10.30 (that is with PCRE1 and old versions of PCRE2), it might also be wise to set the option `PCRE_limit_recursion`. ### Note Aspects will be platform-dependent as well as locale-dependent: for example the implementation of character classes (except `[:digit:]` and `[:xdigit:]`). One can expect results to be consistent for ASCII inputs and when working in UTF-8 mode (when most platforms will use Unicode character tables, although those are updated frequently and subject to some degree of interpretation – is a circled capital letter alphabetic or a symbol?). However, results in 8-bit encodings can differ considerably between platforms, modes and from the UTF-8 versions. ### Source The C code for POSIX-style regular expression matching has changed over the years. As from **R** 2.10.0 (Oct 2009) the TRE library of Ville Laurikari (<https://github.com/laurikari/tre>) is used. 
The POSIX standard does give some room for interpretation, especially in the handling of invalid regular expressions and the collation of character ranges, so the results will have changed slightly over the years. For Perl-style matching PCRE2 or PCRE (<https://www.pcre.org>) is used: again the results may depend (slightly) on the version of PCRE in use. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole (`grep`) ### See Also [regular expression](regex) (aka `[regexp](regex)`) for the details of the pattern specification. `<regmatches>` for extracting matched substrings based on the results of `regexpr`, `gregexpr` and `regexec`. `[glob2rx](../../utils/html/glob2rx)` to turn wildcard matches into regular expressions. `<agrep>` for approximate matching. `<charmatch>`, `<pmatch>` for partial matching, `<match>` for matching to whole strings, `[startsWith](startswith)` for matching of initial parts of strings. `[tolower](chartr)`, `[toupper](chartr)` and `<chartr>` for character translations. `[apropos](../../utils/html/apropos)` uses regexps and has more examples. `[grepRaw](grepraw)` for matching raw vectors. Options `PCRE_limit_recursion`, `PCRE_study` and `PCRE_use_JIT`. `[extSoftVersion](extsoftversion)` for the versions of regex and PCRE libraries in use, `<pcre_config>` for more details for PCRE. ### Examples ``` grep("[a-z]", letters) txt <- c("arm","foot","lefroo", "bafoobar") if(length(i <- grep("foo", txt))) cat("'foo' appears at least once in\n\t", txt, "\n") i # 2 and 4 txt[i] ## Double all 'a' or 'b's; "\" must be escaped, i.e., 'doubled' gsub("([ab])", "\\1_\\1_", "abc and ABC") txt <- c("The", "licenses", "for", "most", "software", "are", "designed", "to", "take", "away", "your", "freedom", "to", "share", "and", "change", "it.", "", "By", "contrast,", "the", "GNU", "General", "Public", "License", "is", "intended", "to", "guarantee", "your", "freedom", "to", "share", "and", "change", "free", "software", "--", "to", "make", "sure", "the", "software", "is", "free", "for", "all", "its", "users") ( i <- grep("[gu]", txt) ) # indices stopifnot( txt[i] == grep("[gu]", txt, value = TRUE) ) ## Note that for some implementations character ranges are ## locale-dependent (but not currently). Then [b-e] in locales such as ## en_US may include B as the collation order is aAbBcCdDe ... (ot <- sub("[b-e]",".", txt)) txt[ot != gsub("[b-e]",".", txt)]#- gsub does "global" substitution ## In caseless matching, ranges include both cases: a <- grep("[b-e]", txt, value = TRUE) b <- grep("[b-e]", txt, ignore.case = TRUE, value = TRUE) setdiff(b, a) txt[gsub("g","#", txt) != gsub("g","#", txt, ignore.case = TRUE)] # the "G" words regexpr("en", txt) gregexpr("e", txt) ## Using grepl() for filtering ## Find functions with argument names matching "warn": findArgs <- function(env, pattern) { nms <- ls(envir = as.environment(env)) nms <- nms[is.na(match(nms, c("F","T")))] # <-- work around "checking hack" aa <- sapply(nms, function(.) { o <- get(.) if(is.function(o)) names(formals(o)) }) iw <- sapply(aa, function(a) any(grepl(pattern, a, ignore.case=TRUE))) aa[iw] } findArgs("package:base", "warn") ## trim trailing white space str <- "Now is the time " sub(" +$", "", str) ## spaces only ## what is considered 'white space' depends on the locale. 
sub("[[:space:]]+$", "", str) ## white space, POSIX-style ## what PCRE considered white space changed in version 8.34: see ?regex sub("\\s+$", "", str, perl = TRUE) ## PCRE-style white space ## capitalizing txt <- "a test of capitalizing" gsub("(\\w)(\\w*)", "\\U\\1\\L\\2", txt, perl=TRUE) gsub("\\b(\\w)", "\\U\\1", txt, perl=TRUE) txt2 <- "useRs may fly into JFK or laGuardia" gsub("(\\w)(\\w*)(\\w)", "\\U\\1\\E\\2\\U\\3", txt2, perl=TRUE) sub("(\\w)(\\w*)(\\w)", "\\U\\1\\E\\2\\U\\3", txt2, perl=TRUE) ## named capture notables <- c(" Ben Franklin and Jefferson Davis", "\tMillard Fillmore") # name groups 'first' and 'last' name.rex <- "(?<first>[[:upper:]][[:lower:]]+) (?<last>[[:upper:]][[:lower:]]+)" (parsed <- regexpr(name.rex, notables, perl = TRUE)) gregexpr(name.rex, notables, perl = TRUE)[[2]] parse.one <- function(res, result) { m <- do.call(rbind, lapply(seq_along(res), function(i) { if(result[i] == -1) return("") st <- attr(result, "capture.start")[i, ] substring(res[i], st, st + attr(result, "capture.length")[i, ] - 1) })) colnames(m) <- attr(result, "capture.names") m } parse.one(notables, parsed) ## Decompose a URL into its components. ## Example by LT (http://www.cs.uiowa.edu/~luke/R/regexp.html). x <- "http://stat.umn.edu:80/xyz" m <- regexec("^(([^:]+)://)?([^:/]+)(:([0-9]+))?(/.*)", x) m regmatches(x, m) ## Element 3 is the protocol, 4 is the host, 6 is the port, and 7 ## is the path. We can use this to make a function for extracting the ## parts of a URL: URL_parts <- function(x) { m <- regexec("^(([^:]+)://)?([^:/]+)(:([0-9]+))?(/.*)", x) parts <- do.call(rbind, lapply(regmatches(x, m), `[`, c(3L, 4L, 6L, 7L))) colnames(parts) <- c("protocol","host","port","path") parts } URL_parts(x) ## gregexec() may match multiple times within a single string. pattern <- "([[:alpha:]]+)([[:digit:]]+)" s <- "Test: A1 BC23 DEF456" m <- gregexec(pattern, s) m regmatches(s, m) ## Before gregexec() was implemented, one could emulate it by running ## regexec() on the regmatches obtained via gregexpr(). E.g.: lapply(regmatches(s, gregexpr(pattern, s)), function(e) regmatches(e, regexec(pattern, e))) ``` r None `standardGeneric` Formal Method System – Dispatching S4 Methods ---------------------------------------------------------------- ### Description The function `standardGeneric` initiates dispatch of S4 methods: see the references and the documentation of the methods package. Usually, calls to this function are generated automatically and not explicitly by the programmer. ### Usage ``` standardGeneric(f, fdef) ``` ### Arguments | | | | --- | --- | | `f` | The name of the generic. | | `fdef` | The generic function definition. Never passed when defining a new generic. | ### Details `standardGeneric` dispatches the method defined for a generic function named `f`, using the actual arguments in the frame from which it is called. The argument `fdef` is inserted (automatically) when dispatching methods for a primitive function. If present, it must always be the function definition for the corresponding generic. Don't insert this argument by hand, as there is no validity checking and miss-specifying the function definition will cause certain failure. For more, use the methods package, and see the documentation in `[GenericFunctions](../../methods/html/genericfunctions)`. ### Author(s) John Chambers ### References Chambers, John M. (2008) *Software for Data Analysis: Programming with R* Springer. (For the R version.) Chambers, John M. 
(1998) *Programming with Data* Springer (For the original S4 version.) r None `showConnections` Display Connections -------------------------------------- ### Description Display aspects of <connections>. ### Usage ``` showConnections(all = FALSE) getConnection(what) closeAllConnections() stdin() stdout() stderr() nullfile() isatty(con) ``` ### Arguments | | | | --- | --- | | `all` | logical: if true, all connections, including closed ones and the standard ones, are displayed. If false, only open user-created connections are included. | | `what` | integer: a row number of the table given by `showConnections`. | | `con` | a connection. | ### Details `stdin()`, `stdout()` and `stderr()` are standard connections corresponding to input, output and error on the console respectively (and not necessarily to file streams). They are text-mode connections of class `"terminal"` which cannot be opened or closed, and are read-only, write-only and write-only respectively. The `stdout()` and `stderr()` connections can be re-directed by `<sink>` (and in some circumstances the output from `stdout()` can be split: see the help page). The encoding for `[stdin](showconnections)()` when redirected can be set by the command-line flag --encoding. `nullfile()` returns the filename of the null device (`"/dev/null"` on Unix, `"nul:"` on Windows). `showConnections` returns a matrix of information. If a connection object has been lost or forgotten, `getConnection` will take a row number from the table and return a connection object for that connection, which can be used to close the connection, for example. However, if there is no **R** level object referring to the connection it will be closed automatically at the next garbage collection (except for `<gzcon>` connections). `closeAllConnections` closes (and destroys) all user connections, restoring all `<sink>` diversions as it does so. `isatty` returns true if the connection is one of the class `"terminal"` connections and it is apparently connected to a terminal, otherwise false. This may not be reliable in embedded applications, including GUI consoles. ### Value `stdin()`, `stdout()` and `stderr()` return connection objects. `showConnections` returns a character matrix of information with a row for each connection, by default only for open non-standard connections. `getConnection` returns a connection object, or `NULL`. ### Note `stdin()` refers to the ‘console’ and not to the C-level ‘stdin’ of the process. The distinction matters in GUI consoles (which may not have an active ‘stdin’, and if they do it may not be connected to console input), and also in embedded applications. If you want access to the C-level file stream ‘stdin’, use `[file](connections)("stdin")`. When **R** is reading a script from a file, the *file* is the ‘console’: this is traditional usage to allow in-line data (see ‘An Introduction to R’ for an example). ### See Also `<connections>` ### Examples ``` showConnections(all = TRUE) ## Not run: textConnection(letters) # oops, I forgot to record that one showConnections() # class description mode text isopen can read can write #3 "letters" "textConnection" "r" "text" "opened" "yes" "no" mycon <- getConnection(3) ## End(Not run) c(isatty(stdin()), isatty(stdout()), isatty(stderr())) ```
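The `nullfile()` device described in ‘Details’ is not exercised above; a small additional sketch (standard `<sink>` usage, nothing here beyond functions documented on this page and `sink`):

```
## discard printed output by diverting stdout() to the null device
con <- file(nullfile(), open = "wt")
sink(con)                   # divert stdout()
print("this goes nowhere")  # discarded
sink()                      # restore normal output
close(con)
```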
r None `mat.or.vec` Create a Matrix or a Vector ----------------------------------------- ### Description `mat.or.vec` creates an `nr` by `nc` zero matrix if `nc` is greater than 1, and a zero vector of length `nr` if `nc` equals 1. ### Usage ``` mat.or.vec(nr, nc) ``` ### Arguments | | | | --- | --- | | `nr, nc` | numbers of rows and columns. | ### Examples ``` mat.or.vec(3, 1) mat.or.vec(3, 2) ``` r None `raw` Raw Vectors ------------------ ### Description Creates or tests for objects of type `"raw"`. ### Usage ``` raw(length = 0) as.raw(x) is.raw(x) ``` ### Arguments | | | | --- | --- | | `length` | desired length. | | `x` | object to be coerced. | ### Details The raw type is intended to hold raw bytes. It is possible to extract subsequences of bytes, and to replace elements (but only by elements of a raw vector). The relational operators (see [Comparison](comparison), using the numerical order of the byte representation) work, as do the logical operators (see [Logic](logic)) with a bitwise interpretation. A raw vector is printed with each byte separately represented as a pair of hex digits. If you want to see a character representation (with escape sequences for non-printing characters) use `[rawToChar](rawconversion)`. Coercion to raw treats the input values as representing small (decimal) integers, so the input is first coerced to integer, and then values which are outside the range `[0 ... 255]` or are `NA` are set to `0` (the `nul` byte). `as.raw` and `is.raw` are <primitive> functions. ### Value `raw` creates a raw vector of the specified length. Each element of the vector is equal to `0`. Raw vectors are used to store fixed-length sequences of bytes. `as.raw` attempts to coerce its argument to be of raw type. The (elementwise) answer will be `0` unless the coercion succeeds (or if the original value successfully coerces to 0). `is.raw` returns true if and only if `typeof(x) == "raw"`. ### See Also `[charToRaw](rawconversion)`, `[rawShift](rawconversion)`, etc. `[&](logic)` for bitwise operations on raw vectors. ### Examples ``` xx <- raw(2) xx[1] <- as.raw(40) # NB, not just 40. xx[2] <- charToRaw("A") xx ## 28 41 -- raw prints hexadecimals dput(xx) ## as.raw(c(0x28, 0x41)) as.integer(xx) ## 40 65 x <- "A test string" (y <- charToRaw(x)) is.vector(y) # TRUE rawToChar(y) is.raw(x) is.raw(y) stopifnot( charToRaw("\xa3") == as.raw(0xa3) ) isASCII <- function(txt) all(charToRaw(txt) <= as.raw(127)) isASCII(x) # true isASCII("\xa325.63") # false (in Latin-1, this is an amount in UK pounds) ``` r None `droplevels` Drop Unused Levels from Factors --------------------------------------------- ### Description The function `droplevels` is used to drop unused levels from a `<factor>` or, more commonly, from factors in a data frame. ### Usage ``` ## S3 method for class 'factor' droplevels(x, exclude = if(anyNA(levels(x))) NULL else NA, ...) ## S3 method for class 'data.frame' droplevels(x, except, exclude, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object from which to drop unused factor levels. | | `exclude` | passed to `<factor>()`; factor levels which should be excluded from the result even if present. Note that this was *implicitly* `NA` in **R** <= 3.3.1 which did drop `NA` levels even when present in `x`, contrary to the documentation. The current default is compatible with `x[ , drop=TRUE]`. 
| | `...` | further arguments passed to methods | | `except` | indices of columns from which *not* to drop levels | ### Details The method for class `"factor"` is currently equivalent to `factor(x, exclude=exclude)`. For the data frame method, you should rarely specify `exclude` “globally” for all factor columns; rather the default uses the same factor-specific `exclude` as the factor method itself. The `except` argument follows the usual indexing rules. ### Value `droplevels` returns an object of the same class as `x`. ### Note This function was introduced in R 2.12.0. It is primarily intended for cases where one or more factors in a data frame contains only elements from a reduced level set after subsetting. (Notice that subsetting does *not* in general drop unused levels). By default, levels are dropped from all factors in a data frame, but the `except` argument allows you to specify columns for which this is not wanted. ### See Also `<subset>` for subsetting data frames. `<factor>` for definition of factors. `<drop>` for dropping array dimensions. `[drop1](../../stats/html/add1)` for dropping terms from a model. `[[.factor](extract.factor)` for subsetting of factors. ### Examples ``` aq <- transform(airquality, Month = factor(Month, labels = month.abb[5:9])) aq <- subset(aq, Month != "Jul") table( aq $Month) table(droplevels(aq)$Month) ``` r None `UseMethod` Class Methods -------------------------- ### Description **R** possesses a simple generic function mechanism which can be used for an object-oriented style of programming. Method dispatch takes place based on the class(es) of the first argument to the generic function or of the object supplied as an argument to `UseMethod` or `NextMethod`. ### Usage ``` UseMethod(generic, object) NextMethod(generic = NULL, object = NULL, ...) ``` ### Arguments | | | | --- | --- | | `generic` | a character string naming a function (and not a built-in operator). Required for `UseMethod`. | | `object` | for `UseMethod`: an object whose class will determine the method to be dispatched. Defaults to the first argument of the enclosing function. | | `...` | further arguments to be passed to the next method. | ### Details An **R** object is a data object which has a `class` attribute (and this can be tested by `<is.object>`). A class attribute is a character vector giving the names of the classes from which the object *inherits*. If the object does not have a class attribute, it has an implicit class. Matrices and arrays have class `"matrix"` or `"array"` followed by the class of the underlying vector. Most vectors have class the result of `<mode>(x)`, except that integer vectors have class `c("integer", "numeric")` and real vectors have class `c("double", "numeric")`. When a function calling `UseMethod("fun")` is applied to an object with class attribute `c("first", "second")`, the system searches for a function called `fun.first` and, if it finds it, applies it to the object. If no such function is found a function called `fun.second` is tried. If no class name produces a suitable function, the function `fun.default` is used, if it exists, or an error results. Function `[methods](../../utils/html/methods)` can be used to find out about the methods for a particular generic function or class. `UseMethod` is a primitive function but uses standard argument matching. It is not the only means of dispatch of methods, for there are [internal generic](internalmethods) and [group generic](groupgeneric) functions.
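To make the search just described concrete, here is a minimal sketch (the generic `fun` and its methods are invented for illustration):

```
fun <- function(x) UseMethod("fun")
fun.second  <- function(x) "dispatched to fun.second"
fun.default <- function(x) "dispatched to fun.default"

x <- structure(1, class = c("first", "second"))
fun(x)  # no fun.first exists, so fun.second is used
fun(1)  # no method for the implicit class, so fun.default is used
```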
`UseMethod` currently dispatches on the implicit class even for arguments that are not objects, but the other means of dispatch do not. `NextMethod` invokes the next method (determined by the class vector, either of the object supplied to the generic, or of the first argument to the function containing `NextMethod` if a method was invoked directly). Normally `NextMethod` is used with only one argument, `generic`, but if further arguments are supplied these modify the call to the next method. `NextMethod` should not be called except in methods called by `UseMethod` or from internal generics (see [InternalGenerics](internalmethods)). In particular it will not work inside anonymous calling functions (e.g., `get("print.ts")(AirPassengers)`). Namespaces can register methods for generic functions. To support this, `UseMethod` and `NextMethod` search for methods in two places: in the environment in which the generic function is called, and in the registration data base for the environment in which the generic is defined (typically a namespace). So methods for a generic function need to be available in the environment of the call to the generic, or they must be registered. (It does not matter whether they are visible in the environment in which the generic is defined.) As from **R** 3.5.0, the registration data base is searched after the top level environment (see `[topenv](ns-topenv)`) of the calling environment (but before the parents of the top level environment). ### Technical Details Now for some obscure details that need to appear somewhere. These comments will be slightly different than those in Chambers(1992). (See also the draft ‘R Language Definition’.) `UseMethod` creates a new function call with arguments matched as they came in to the generic. Any local variables defined before the call to `UseMethod` are retained (unlike S). Any statements after the call to `UseMethod` will not be evaluated as `UseMethod` does not return. `UseMethod` can be called with more than two arguments: a warning will be given and additional arguments ignored. (They are not completely ignored in S.) If it is called with just one argument, the class of the first argument of the enclosing function is used as `object`: unlike S this is the first actual argument passed and not the current value of the object of that name. `NextMethod` works by creating a special call frame for the next method. If no new arguments are supplied, the arguments will be the same in number, order and name as those to the current method but their values will be promises to evaluate their name in the current method and environment. Any named arguments matched to `...` are handled specially: they either replace existing arguments of the same name or are appended to the argument list. They are passed on as the promise that was supplied as an argument to the current environment. (S does this differently!) If they have been evaluated in the current (or a previous environment) they remain evaluated. (This is a complex area, and subject to change: see the draft ‘R Language Definition’.) The search for methods for `NextMethod` is slightly different from that for `UseMethod`. Finding no `fun.default` is not necessarily an error, as the search continues to the generic itself. This is to pick up an [internal generic](internalmethods) like `[` which has no separate default method, and succeeds only if the generic is a <primitive> function or a wrapper for a `[.Internal](internal)` function of the same name. 
(When a primitive is called as the default method, argument matching may not work as described above due to the different semantics of primitives.) You will see objects such as `.Generic`, `.Method`, and `.Class` used in methods. These are set in the environment within which the method is evaluated by the dispatch mechanism, which is as follows: 1. Find the context for the calling function (the generic): this gives us the unevaluated arguments for the original call. 2. Evaluate the object (usually an argument) to be used for dispatch, and find a method (possibly the default method) or throw an error. 3. Create an environment for evaluating the method and insert special variables (see below) into that environment. Also copy any variables in the environment of the generic that are not formal (or actual) arguments. 4. Fix up the argument list to be the arguments of the call matched to the formals of the method. `.Generic` is a length-one character vector naming the generic function. `.Method` is a character vector (normally of length one) naming the method function. (For functions in the group generic `[Ops](groupgeneric)` it is of length two.) `.Class` is a character vector of classes used to find the next method. `NextMethod` adds an attribute `"previous"` to `.Class` giving the `.Class` last used for dispatch, and shifts `.Class` along to that used for dispatch. `.GenericCallEnv` and `.GenericDefEnv` are the environments of the call to the generic and defining the generic respectively. (The latter is used to find methods registered for the generic.) Note that `.Class` is set when the generic is called, and is unchanged if the class of the dispatching argument is changed in a method. It is possible to change the method that `NextMethod` would dispatch by manipulating `.Class`, but ‘this is not recommended unless you understand the inheritance mechanism thoroughly’ (Chambers & Hastie, 1992, p. 469). ### Note This scheme is called *S3* (S version 3). For new projects, it is recommended to use the more flexible and robust *S4* scheme provided in the methods package. ### References Chambers, J. M. (1992) *Classes and methods: object-oriented programming in S.* Appendix A of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also The draft ‘R Language Definition’. `[methods](../../utils/html/methods)`, `<class>`, `[getS3method](../../utils/html/gets3method)`, `<is.object>`. r None `seq.Date` Generate Regular Sequences of Dates ----------------------------------------------- ### Description The method for `<seq>` for objects of class `"[Date](dates)"` representing calendar dates. ### Usage ``` ## S3 method for class 'Date' seq(from, to, by, length.out = NULL, along.with = NULL, ...) ``` ### Arguments | | | | --- | --- | | `from` | starting date. Required. | | `to` | end date. Optional. | | `by` | increment of the sequence. Optional. See ‘Details’. | | `length.out` | integer, optional. Desired length of the sequence. | | `along.with` | take the length from the length of this argument. | | `...` | arguments passed to or from other methods. | ### Details `by` can be specified in several ways. * A number, taken to be in days. * An object of class `<difftime>`. * A character string, containing one of `"day"`, `"week"`, `"month"`, `"quarter"` or `"year"`. This can optionally be preceded by a (positive or negative) integer and a space, or followed by `"s"`. See `[seq.POSIXt](seq.posixt)` for the details of `"month"`. ### Value A vector of class `"Date"`.
### See Also `[Date](dates)` ### Examples ``` ## first days of years seq(as.Date("1910/1/1"), as.Date("1999/1/1"), "years") ## by month seq(as.Date("2000/1/1"), by = "month", length.out = 12) ## quarters seq(as.Date("2000/1/1"), as.Date("2003/1/1"), by = "quarter") ## find all 7th of the month between two dates, the last being a 7th. st <- as.Date("1998-12-17") en <- as.Date("2000-1-7") ll <- seq(en, st, by = "-1 month") rev(ll[ll > st & ll < en]) ``` r None `unlist` Flatten Lists ----------------------- ### Description Given a list structure `x`, `unlist` simplifies it to produce a vector which contains all the atomic components which occur in `x`. ### Usage ``` unlist(x, recursive = TRUE, use.names = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object, typically a list or vector. | | `recursive` | logical. Should unlisting be applied to list components of `x`? | | `use.names` | logical. Should names be preserved? | ### Details `unlist` is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods), and note, e.g., `[relist](../../utils/html/relist)` with the `unlist` method for `relistable` objects. If `recursive = FALSE`, the function will not recurse beyond the first level items in `x`. Factors are treated specially. If all non-list elements of `x` are `<factor>` (or ordered factor) objects then the result will be a factor with levels the union of the level sets of the elements, in the order the levels occur in the level sets of the elements (which means that if all the elements have the same level set, that is the level set of the result). `x` can be an atomic vector, but then `unlist` does nothing useful, not even drop names. By default, `unlist` tries to retain the naming information present in `x`. If `use.names = FALSE` all naming information is dropped. Where possible the list elements are coerced to a common mode during the unlisting, and so the result often ends up as a character vector. Vectors will be coerced to the highest type of the components in the hierarchy NULL < raw < logical < integer < double < complex < character < list < expression: pairlists are treated as lists. A list is a (generic) vector, and the simplified vector might still be a list (and might be unchanged). Non-vector elements of the list (for example language elements such as names, formulas and calls) are not coerced, and so a list containing one or more of these remains a list. (The effect of unlisting an `[lm](../../stats/html/lm)` fit is a list which has individual residuals as components.) Note that `unlist(x)` now returns `x` unchanged also for non-vector `x`, instead of signalling an error in that case. ### Value `NULL` or an expression or a vector of an appropriate mode to hold the list components. The output type is determined from the highest type of the components in the hierarchy NULL < raw < logical < integer < double < complex < character < list < expression, after coercion of pairlists to lists. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<c>`, `[as.list](list)`, `[relist](../../utils/html/relist)`. 
### Examples ``` unlist(options()) unlist(options(), use.names = FALSE) l.ex <- list(a = list(1:5, LETTERS[1:5]), b = "Z", c = NA) unlist(l.ex, recursive = FALSE) unlist(l.ex, recursive = TRUE) l1 <- list(a = "a", b = 2, c = pi+2i) unlist(l1) # a character vector l2 <- list(a = "a", b = as.name("b"), c = pi+2i) unlist(l2) # remains a list ll <- list(as.name("sinc"), quote( a + b ), 1:10, letters, expression(1+x)) utils::str(ll) for(x in ll) stopifnot(identical(x, unlist(x))) ``` r None `formatDL` Format Description Lists ------------------------------------ ### Description Format vectors of items and their descriptions as 2-column tables or LaTeX-style description lists. ### Usage ``` formatDL(x, y, style = c("table", "list"), width = 0.9 * getOption("width"), indent = NULL) ``` ### Arguments | | | | --- | --- | | `x` | a vector giving the items to be described, or a list of length 2 or a matrix with 2 columns giving both items and descriptions. | | `y` | a vector of the same length as `x` with the corresponding descriptions. Only used if `x` does not already give the descriptions. | | `style` | a character string specifying the rendering style of the description information. Can be abbreviated. If `"table"`, a two-column table with items and descriptions as columns is produced (similar to Texinfo's `@table` environment). If `"list"`, a LaTeX-style tagged description list is obtained. | | `width` | a positive integer giving the target column for wrapping lines in the output. | | `indent` | a positive integer specifying the indentation of the second column in table style, and the indentation of continuation lines in list style. Must not be greater than `width/2`, and defaults to `width/3` for table style and `width/9` for list style. | ### Details After extracting the vectors of items and corresponding descriptions from the arguments, both are coerced to character vectors. In table style, items with more than `indent - 3` characters are displayed on a line of their own. ### Value a character vector with the formatted entries. ### Examples ``` ## Provide a nice summary of the numerical characteristics of the ## machine R is running on: writeLines(formatDL(unlist(.Machine))) ## Inspect Sys.getenv() results in "list" style (by default, these are ## printed in "table" style): writeLines(formatDL(Sys.getenv(), style = "list")) ``` r None `duplicated` Determine Duplicate Elements ------------------------------------------ ### Description `duplicated()` determines which elements of a vector or data frame are duplicates of elements with smaller subscripts, and returns a logical vector indicating which elements (rows) are duplicates. `anyDuplicated(.)` is a “generalized”, more efficient version of `any(duplicated(.))`, returning positive integer indices instead of just `TRUE`. ### Usage ``` duplicated(x, incomparables = FALSE, ...) ## Default S3 method: duplicated(x, incomparables = FALSE, fromLast = FALSE, nmax = NA, ...) ## S3 method for class 'array' duplicated(x, incomparables = FALSE, MARGIN = 1, fromLast = FALSE, ...) anyDuplicated(x, incomparables = FALSE, ...) ## Default S3 method: anyDuplicated(x, incomparables = FALSE, fromLast = FALSE, ...) ## S3 method for class 'array' anyDuplicated(x, incomparables = FALSE, MARGIN = 1, fromLast = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a vector or a data frame or an array or `NULL`. | | `incomparables` | a vector of values that cannot be compared.
`FALSE` is a special value, meaning that all values can be compared, and may be the only value accepted for methods other than the default. It will be coerced internally to the same type as `x`. | | `fromLast` | logical indicating if duplication should be considered from the reverse side, i.e., the last (or rightmost) of identical elements would correspond to `duplicated = FALSE`. | | `nmax` | the maximum number of unique items expected (greater than one). | | `...` | arguments for particular methods. | | `MARGIN` | the array margin to be held fixed: see `<apply>`, and note that `MARGIN = 0` may be useful. | ### Details These are generic functions with methods for vectors (including lists), data frames and arrays (including matrices). For the default methods, and whenever there are equivalent method definitions for `duplicated` and `anyDuplicated`, `anyDuplicated(x, ...)` is a “generalized” shortcut for `any(duplicated(x, ...))`, in the sense that it returns the *index* `i` of the first duplicated entry `x[i]` if there is one, and `0` otherwise. Their behaviours may be different when at least one of `duplicated` and `anyDuplicated` has a relevant method. `duplicated(x, fromLast = TRUE)` is equivalent to but faster than `rev(duplicated(rev(x)))`. The array method calculates for each element of the sub-array specified by `MARGIN` if the remaining dimensions are identical to those for an earlier (or later, when `fromLast = TRUE`) element (in row-major order). This would most commonly be used to find duplicated rows (the default) or columns (with `MARGIN = 2`). Note that `MARGIN = 0` returns an array of the same dimensionality attributes as `x`. Missing values (`"[NA](na)"`) are regarded as equal, numeric and complex ones differing from `NaN`; character strings will be compared in a “common encoding”; for details, see `<match>` (and `<unique>`) which use the same concept. Values in `incomparables` will never be marked as duplicated. This is intended to be used for a fairly small set of values and will not be efficient for a very large set. Except for factors, logical and raw vectors the default `nmax = NA` is equivalent to `nmax = length(x)`. Since a hash table of size `8*nmax` bytes is allocated, setting `nmax` suitably can save large amounts of memory. For factors it is automatically set to the smaller of `length(x)` and the number of levels plus one (for `NA`). If `nmax` is set too small there is liable to be an error: `nmax = 1` is silently ignored. [Long vectors](longvectors) are supported for the default method of `duplicated`, but may only be usable if `nmax` is supplied. ### Value `duplicated()`: For a vector input, a logical vector of the same length as `x`. For a data frame, a logical vector with one element for each row. For a matrix or array, and when `MARGIN = 0`, a logical array with the same dimensions and dimnames. `anyDuplicated()`: an integer or real vector of length one with value the 1-based index of the first duplicate if any, otherwise `0`. ### Warning Using this for lists is potentially slow, especially if the elements are not atomic vectors (see `<vector>`) or differ only in their attributes. In the worst case it is *O(n^2)*. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<unique>`. 
### Examples ``` x <- c(9:20, 1:5, 3:7, 0:8) ## extract unique elements (xu <- x[!duplicated(x)]) ## similar, same elements but different order: (xu2 <- x[!duplicated(x, fromLast = TRUE)]) ## xu == unique(x) but unique(x) is more efficient stopifnot(identical(xu, unique(x)), identical(xu2, unique(x, fromLast = TRUE))) duplicated(iris)[140:143] duplicated(iris3, MARGIN = c(1, 3)) anyDuplicated(iris) ## 143 anyDuplicated(x) anyDuplicated(x, fromLast = TRUE) ```
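The `incomparables` argument is not exercised above; a small additional sketch (behaviour as documented in ‘Details’):

```
## values listed in 'incomparables' are never marked as duplicated
x <- c(1, NA, NA, 2, 1)
duplicated(x)                     # FALSE FALSE TRUE FALSE TRUE (NAs regarded as equal)
duplicated(x, incomparables = NA) # the repeated NA is no longer flagged
```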
r None `readBin` Transfer Binary Data To and From Connections ------------------------------------------------------- ### Description Read binary data from or write binary data to a connection or raw vector. ### Usage ``` readBin(con, what, n = 1L, size = NA_integer_, signed = TRUE, endian = .Platform$endian) writeBin(object, con, size = NA_integer_, endian = .Platform$endian, useBytes = FALSE) ``` ### Arguments | | | | --- | --- | | `con` | A [connection](connections) object or a character string naming a file or a raw vector. | | `what` | Either an object whose mode will give the mode of the vector to be read, or a character vector of length one describing the mode: one of `"numeric"`, `"double"`, `"integer"`, `"int"`, `"logical"`, `"complex"`, `"character"`, `"raw"`. | | `n` | numeric. The (maximal) number of records to be read. You can use an over-estimate here, but not too large as storage is reserved for `n` items. | | `size` | integer. The number of bytes per element in the byte stream. The default, `NA_integer_`, uses the natural size. Size changing is not supported for raw and complex vectors. | | `signed` | logical. Only used for integers of sizes 1 and 2, when it determines if the quantity on file should be regarded as a signed or unsigned integer. | | `endian` | The endian-ness (`"big"` or `"little"`) of the target system for the file. Using `"swap"` will force swapping endian-ness. | | `object` | An **R** object to be written to the connection. | | `useBytes` | See `[writeLines](writelines)`. | ### Details These functions can only be used with binary-mode connections. If `con` is a character string, the functions call `[file](connections)` to obtain a binary-mode file connection which is opened for the duration of the function call. If the connection is open it is read/written from its current position. If it is not open, it is opened for the duration of the call in an appropriate mode (binary read or write) and then closed again. An open connection must be in binary mode. If `readBin` is called with `con` a raw vector, the data in the vector is used as input. If `writeBin` is called with `con` a raw vector, it is just an indication that a raw vector should be returned. If `size` is specified and not the natural size of the object, each element of the vector is coerced to an appropriate type before being written or as it is read. Possible sizes are 1, 2, 4 and possibly 8 for integer or logical vectors, and 4, 8 and possibly 12/16 for numeric vectors. (Note that coercion occurs as signed types except if `signed = FALSE` when reading integers of sizes 1 and 2.) Changing sizes is unlikely to preserve `NA`s, and the extended precision sizes are unlikely to be portable across platforms. `readBin` and `writeBin` read and write C-style zero-terminated character strings. Input strings are limited to 10000 characters. `[readChar](readchar)` and `[writeChar](readchar)` can be used to read and write fixed-length strings. No check is made that the string is valid in the current locale's encoding. Handling **R**'s missing and special (`Inf`, `-Inf` and `NaN`) values is discussed in the ‘R Data Import/Export’ manual. Only *2^31 - 1* bytes can be written in a single call (and that is the maximum capacity of a raw vector on 32-bit platforms). ‘Endian-ness’ is relevant for `size > 1`, and should always be set for portable code (the default is only appropriate when writing and then reading files on the same platform). 
### Value For `readBin`, a vector of appropriate mode and length the number of items read (which might be less than `n`). For `writeBin`, a raw vector (if `con` is a raw vector) or invisibly `NULL`. ### Note Integer read/writes of size 8 will be available if either C type `long` is of size 8 bytes or C type `long long` exists and is of size 8 bytes. Real read/writes of size `sizeof(long double)` (usually 12 or 16 bytes) will be available only if that type is available and different from `double`. If `readBin(what = character())` is used incorrectly on a file which does not contain C-style character strings, warnings (usually many) are given. From a file or connection, the input will be broken into pieces of length 10000 with any final part being discarded. ### See Also The ‘R Data Import/Export’ manual. `readChar` to read/write fixed-length strings. `<connections>`, `[readLines](readlines)`, `[writeLines](writelines)`. `[.Machine](zmachine)` for the sizes of `long`, `long long` and `long double`. ### Examples ``` zzfil <- tempfile("testbin") zz <- file(zzfil, "wb") writeBin(1:10, zz) writeBin(pi, zz, endian = "swap") writeBin(pi, zz, size = 4) writeBin(pi^2, zz, size = 4, endian = "swap") writeBin(pi+3i, zz) writeBin("A test of a connection", zz) z <- paste("A very long string", 1:100, collapse = " + ") writeBin(z, zz) if(.Machine$sizeof.long == 8 || .Machine$sizeof.longlong == 8) writeBin(as.integer(5^(1:10)), zz, size = 8) if((s <- .Machine$sizeof.longdouble) > 8) writeBin((pi/3)^(1:10), zz, size = s) close(zz) zz <- file(zzfil, "rb") readBin(zz, integer(), 4) readBin(zz, integer(), 6) readBin(zz, numeric(), 1, endian = "swap") readBin(zz, numeric(), size = 4) readBin(zz, numeric(), size = 4, endian = "swap") readBin(zz, complex(), 1) readBin(zz, character(), 1) z2 <- readBin(zz, character(), 1) if(.Machine$sizeof.long == 8 || .Machine$sizeof.longlong == 8) readBin(zz, integer(), 10, size = 8) if((s <- .Machine$sizeof.longdouble) > 8) readBin(zz, numeric(), 10, size = s) close(zz) unlink(zzfil) stopifnot(z2 == z) ## signed vs unsigned ints zzfil <- tempfile("testbin") zz <- file(zzfil, "wb") x <- as.integer(seq(0, 255, 32)) writeBin(x, zz, size = 1) writeBin(x, zz, size = 1) x <- as.integer(seq(0, 60000, 10000)) writeBin(x, zz, size = 2) writeBin(x, zz, size = 2) close(zz) zz <- file(zzfil, "rb") readBin(zz, integer(), 8, size = 1) readBin(zz, integer(), 8, size = 1, signed = FALSE) readBin(zz, integer(), 7, size = 2) readBin(zz, integer(), 7, size = 2, signed = FALSE) close(zz) unlink(zzfil) ## use of raw z <- writeBin(pi^{1:5}, raw(), size = 4) readBin(z, numeric(), 5, size = 4) z <- writeBin(c("a", "test", "of", "character"), raw()) readBin(z, character(), 4) ``` r None `as.POSIXlt` Date-time Conversion Functions -------------------------------------------- ### Description Functions to manipulate objects of classes `"POSIXlt"` and `"POSIXct"` representing calendar dates and times. ### Usage ``` as.POSIXct(x, tz = "", ...) as.POSIXlt(x, tz = "", ...) ## S3 method for class 'character' as.POSIXlt(x, tz = "", format, tryFormats = c("%Y-%m-%d %H:%M:%OS", "%Y/%m/%d %H:%M:%OS", "%Y-%m-%d %H:%M", "%Y/%m/%d %H:%M", "%Y-%m-%d", "%Y/%m/%d"), optional = FALSE, ...) ## Default S3 method: as.POSIXlt(x, tz = "", optional = FALSE, ...) ## S3 method for class 'numeric' as.POSIXlt(x, tz = "", origin, ...) ## S3 method for class 'POSIXlt' as.double(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | **R** object to be converted. 
| | `tz` | time zone specification to be used for the conversion, *if one is required*. System-specific (see [time zones](timezones)), but `""` is the current time zone, and `"GMT"` is UTC (Universal Time, Coordinated). Invalid values are most commonly treated as UTC, on some platforms with a warning. | | `...` | further arguments to be passed to or from other methods. | | `format` | character string giving a date-time format as used by `<strptime>`. | | `tryFormats` | `<character>` vector of `format` strings to try if `format` is not specified. | | `optional` | `<logical>` indicating to return `NA` (instead of signalling an error) if the format guessing does not succeed. | | `origin` | a date-time object, or something which can be coerced by `as.POSIXct(tz = "GMT")` to such an object. | ### Details The `as.POSIX*` functions convert an object to one of the two classes used to represent date/times (calendar dates plus time to the nearest second). They can convert objects of the other class and of class `"Date"` to these classes. Dates without times are treated as being at midnight UTC. They can also convert character strings of the formats `"2001-02-03"` and `"2001/02/03"` optionally followed by white space and a time in the format `"14:52"` or `"14:52:03"`. (Formats such as `"01/02/03"` are ambiguous but can be converted via a format specification by `<strptime>`.) Fractional seconds are allowed. Alternatively, `format` can be specified for character vectors or factors: if it is not specified and no standard format works for all non-`NA` inputs an error is thrown. If `format` is specified, remember that some of the format specifications are locale-specific, and you may need to set the `LC_TIME` category appropriately *via* `[Sys.setlocale](locales)`. This most often affects the use of `%b`, `%B` (month names) and `%p` (AM/PM). Logical `NA`s can be converted to either of the classes, but no other logical vectors can be. If you are given a numeric time as the number of seconds since an epoch, see the examples. Character input is first converted to class `"POSIXlt"` by `<strptime>`: numeric input is first converted to `"POSIXct"`. Any conversion that needs to go between the two date-time classes requires a time zone: conversion from `"POSIXlt"` to `"POSIXct"` will validate times in the selected time zone. One issue is what happens at transitions to and from DST, for example in the UK ``` as.POSIXct(strptime("2011-03-27 01:30:00", "%Y-%m-%d %H:%M:%S")) as.POSIXct(strptime("2010-10-31 01:30:00", "%Y-%m-%d %H:%M:%S")) ``` are respectively invalid (the clocks went forward at 1:00 GMT to 2:00 BST) and ambiguous (the clocks went back at 2:00 BST to 1:00 GMT). What happens in such cases is OS-specific: one should expect the first to be `NA`, but the second could be interpreted as either BST or GMT (and common OSes give both possible values). Note too (see `[strftime](strptime)`) that OS facilities may not format invalid times correctly. ### Value `as.POSIXct` and `as.POSIXlt` return an object of the appropriate class. If `tz` was specified, `as.POSIXlt` will give an appropriate `"tzone"` attribute. Date-times known to be invalid will be returned as `NA`. ### Note Some of the concepts used have to be extended backwards in time (the usage is said to be ‘proleptic’). For example, the origin of time for the `"POSIXct"` class, ‘1970-01-01 00:00.00 UTC’, is before UTC was defined. 
More importantly, conversion is done assuming the Gregorian calendar which was introduced in 1582 and not used universally until the 20th century. One of the re-interpretations assumed by ISO 8601:2004 is that there was a year zero, even though current year numbering (and zero) is a much later concept (525 AD for year numbers from 1 AD). Conversions between `"POSIXlt"` and `"POSIXct"` of future times are speculative except in UTC. The main uncertainty is in the use of and transitions to/from DST (most systems will assume the continuation of current rules but these can be changed at short notice). If you want to extract specific aspects of a time (such as the day of the week) just convert it to class `"POSIXlt"` and extract the relevant component(s) of the list, or if you want a character representation (such as a named day of the week) use the `[format](strptime)` method. If a time zone is needed and that specified is invalid on your system, what happens is system-specific but attempts to set it will probably be ignored. Conversion from character needs to find a suitable format unless one is supplied (by trying common formats in turn): this can be slow for long inputs. ### See Also [DateTimeClasses](datetimeclasses) for details of the classes; `<strptime>` for conversion to and from character representations. `[Sys.timezone](timezones)` for details of the (system-specific) naming of time zones. <locales> for locale-specific aspects. ### Examples ``` (z <- Sys.time()) # the current datetime, as class "POSIXct" unclass(z) # a large integer floor(unclass(z)/86400) # the number of days since 1970-01-01 (UTC) (now <- as.POSIXlt(Sys.time())) # the current datetime, as class "POSIXlt" unlist(unclass(now)) # a list shown as a named vector now$year + 1900 # see ?DateTimeClasses months(now); weekdays(now) # see ?months ## suppose we have a time in seconds since 1960-01-01 00:00:00 GMT ## (the origin used by SAS) z <- 1472562988 # ways to convert this as.POSIXct(z, origin = "1960-01-01") # local as.POSIXct(z, origin = "1960-01-01", tz = "GMT") # in UTC ## SPSS dates (R-help 2006-02-16) z <- c(10485849600, 10477641600, 10561104000, 10562745600) as.Date(as.POSIXct(z, origin = "1582-10-14", tz = "GMT")) ## Stata date-times: milliseconds since 1960-01-01 00:00:00 GMT ## format %tc excludes leap-seconds, assumed here ## For format %tC including leap seconds, see foreign::read.dta() z <- 1579598122120 op <- options(digits.secs = 3) # avoid rounding down: milliseconds are not exactly representable as.POSIXct((z+0.1)/1000, origin = "1960-01-01") options(op) ## Matlab 'serial day number' (days and fractional days) z <- 7.343736909722223e5 # 2010-08-23 16:35:00 as.POSIXct((z - 719529)*86400, origin = "1970-01-01", tz = "UTC") as.POSIXlt(Sys.time(), "GMT") # the current time in UTC ## These may not be correct names on your system as.POSIXlt(Sys.time(), "America/New_York") # in New York as.POSIXlt(Sys.time(), "EST5EDT") # alternative. as.POSIXlt(Sys.time(), "EST" ) # somewhere in Eastern Canada as.POSIXlt(Sys.time(), "HST") # in Hawaii as.POSIXlt(Sys.time(), "Australia/Darwin") tab <- file.path(R.home("share"), "zoneinfo", "zone1970.tab") if(file.exists(tab)) { cols <- c("code", "coordinates", "TZ", "comments") tmp <- read.delim(file.path(R.home("share"), "zoneinfo", "zone1970.tab"), header = FALSE, comment.char = "#", col.names = cols) if(interactive()) View(tmp) head(tmp, 10) } ``` r None `rev` Reverse Elements ----------------------- ### Description `rev` provides a reversed version of its argument. 
It is a generic function with a default method for vectors and one for `[dendrogram](../../stats/html/dendrogram)`s. Note that this is no longer needed (nor efficient) for obtaining vectors sorted into descending order, since that is now rather more directly achievable by `<sort>(x, decreasing = TRUE)`. ### Usage ``` rev(x) ``` ### Arguments | | | | --- | --- | | `x` | a vector or another object for which reversal is defined. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<seq>`, `<sort>`. ### Examples ``` x <- c(1:5, 5:3) ## sort into descending order; first more efficiently: stopifnot(sort(x, decreasing = TRUE) == rev(sort(x))) stopifnot(rev(1:7) == 7:1) #- don't need 'rev' here ``` r None `getNativeSymbolInfo` Obtain a Description of one or more Native (C/Fortran) Symbols ------------------------------------------------------------------------------------- ### Description This finds and returns a description of one or more dynamically loaded or ‘exported’ built-in native symbols. For each name, it returns information about the name of the symbol, the library in which it is located and, if available, the number of arguments it expects and by which interface it should be called (i.e., `[.Call](callexternal)`, `[.C](foreign)`, `[.Fortran](foreign)`, or `[.External](callexternal)`). Additionally, it returns the address of the symbol and this can be passed to other C routines. Specifically, this provides a way to explicitly share symbols between different dynamically loaded package libraries. Also, it provides a way to query where symbols were resolved, and aids diagnosing strange behavior associated with dynamic resolution. ### Usage ``` getNativeSymbolInfo(name, PACKAGE, unlist = TRUE, withRegistrationInfo = FALSE) ``` ### Arguments | | | | --- | --- | | `name` | the name(s) of the native symbol(s). | | `PACKAGE` | an optional argument that specifies to which DLL to restrict the search for this symbol. If this is `"base"`, we search in the **R** executable itself. | | `unlist` | a logical value which controls how the result is returned if the function is called with the name of a single symbol. If `unlist` is `TRUE` and the number of symbol names in `name` is one, then the `NativeSymbolInfo` object is returned. If it is `FALSE`, then a list of `NativeSymbolInfo` objects is returned. This is ignored if the number of symbols passed in `name` is more than one. To be compatible with earlier versions of this function, this defaults to `TRUE`. | | `withRegistrationInfo` | a logical value indicating whether, if `TRUE`, to return information that was registered with **R** about the symbol and its parameter types if such information is available, or if `FALSE` to return just the address of the symbol. | ### Details This uses the same mechanism for resolving symbols as is used in all the native interfaces (`[.Call](callexternal)`, etc.). If the symbol has been explicitly registered by the DLL in which it is contained, information about the number of arguments and the interface by which it should be called will be returned. Otherwise, a generic native symbol object is returned. ### Value Generally, a list of `NativeSymbolInfo` elements whose elements can be indexed by the elements of `name` in the call. Each `NativeSymbolInfo` object is a list containing the following elements: | | | | --- | --- | | `name` | the name of the symbol, as given by the `name` argument.
| | `address` | if `withRegistrationInfo` is `FALSE`, this is the native memory address of the symbol which can be used to invoke the routine, and also to compare with other symbol addresses. This is an external pointer object and of class `NativeSymbol`. If `withRegistrationInfo` is `TRUE` and registration information is available for the symbol, then this is an object of class `RegisteredNativeSymbol` and is a reference to an internal data type that has access to the routine pointer and registration information. This too can be used in calls to `[.Call](callexternal)`, `[.C](foreign)`, `[.Fortran](foreign)` and `[.External](callexternal)`. | | `dll` | a list containing 3 elements: `name`, the short form of the library name which can be used as the value of the `PACKAGE` argument in the different native interface functions; `path`, the fully qualified name of the DLL; and `dynamicLookup`, a logical value indicating whether dynamic resolution is used when looking for symbols in this library, or only registered routines can be located. | If the routine was explicitly registered by the dynamically loaded library, the list contains a fourth field: | | | | --- | --- | | `numParameters` | the number of arguments that should be passed in a call to this routine. | Additionally, the list will have an additional class, being `CRoutine`, `CallRoutine`, `FortranRoutine` or `ExternalRoutine` corresponding to the R interface by which it should be invoked. If any of the symbols is not found, an error is raised. If `name` contains only one symbol name and `unlist` is `TRUE`, then the single `NativeSymbolInfo` is returned rather than the list containing that one element. ### Note The third element of the `NativeSymbolInfo` objects was renamed from `package` to `dll` in **R** version 3.6.0, for consistency with the names of the `NativeSymbolInfo` objects returned by `[getDLLRegisteredRoutines](getdllregisteredroutines)()`. One motivation for accessing this reflectance information is to be able to pass native routines to C routines as function pointers in C. This allows us to treat native routines and **R** functions in a similar manner, such as when passing an **R** function to C code that makes callbacks to that function at different points in its computation (e.g., `[nls](../../stats/html/nls)`). Additionally, we can resolve the symbol just once and avoid resolving it repeatedly or using the internal cache. ### Author(s) Duncan Temple Lang ### References For information about registering native routines, see “In Search of C/C++ & FORTRAN Routines”, R-News, volume 1, number 3, 2001, p. 20–23 (<https://www.r-project.org/doc/Rnews/Rnews_2001-3.pdf>). ### See Also `[getDLLRegisteredRoutines](getdllregisteredroutines)`, `[is.loaded](dynload)`, `[.C](foreign)`, `[.Fortran](foreign)`, `[.External](callexternal)`, `[.Call](callexternal)`, `[dyn.load](dynload)`.
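### Examples

A minimal sketch of typical use (the symbol name `"fft"` is an assumption: it is a `.Call` routine registered by the stats package in current versions of **R**, so the lookup is wrapped defensively):

```
## look up a registered .Call routine from the stats package
info <- tryCatch(getNativeSymbolInfo("fft", PACKAGE = "stats"),
                 error = function(e) NULL)  # NULL if "fft" is not registered
if (!is.null(info)) str(info)  # name, address, dll and numParameters
```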
r None `pcre_config` Report Configuration Options for PCRE ---------------------------------------------------- ### Description Report some of the configuration options of the version of PCRE in use in this **R** session. ### Usage ``` pcre_config() ``` ### Value A named logical vector, currently with elements | | | | --- | --- | | `UTF-8` | Support for UTF-8 inputs. Required. | | `Unicode properties` | Support for \p{xx} and \P{xx} in regular expressions. Desirable and used by some CRAN packages. As of PCRE2, always present with support for UTF-8. | | `JIT` | Support for just-in-time compilation. Desirable for speed (but only available as a compile-time option on certain architectures). | | `stack` | Does match recursion use a stack (`TRUE`, the default for PCRE1 and PCRE2 older than 10.30) or a heap? See the discussion at <https://www.pcre.org/original/doc/html/pcrestack.html> (Added in **R** 3.4.0.). No longer relevant and always `FALSE` in PCRE2 since version 10.30 which no longer uses function recursion to remember backtracking positions. | ### See Also `[extSoftVersion](extsoftversion)` for the PCRE version. ### Examples ``` pcre_config() ``` r None `formals` Access to and Manipulation of the Formal Arguments ------------------------------------------------------------- ### Description Get or set the formal arguments of a `<function>`. ### Usage ``` formals(fun = sys.function(sys.parent()), envir = parent.frame()) formals(fun, envir = environment(fun)) <- value ``` ### Arguments | | | | --- | --- | | `fun` | a `<function>`, or see ‘Details’. | | `envir` | `<environment>` in which the function should be defined (or found via `<get>()` in the first case, when `fun` is a character string). | | `value` | a `<list>` (or `[pairlist](list)`) of **R** expressions. | ### Details For the first form, `fun` can also be a character string naming the function to be manipulated, which is searched for in `envir`, by default from the parent frame. If it is not specified, the function calling `formals` is used. Only *closures* have formals, not primitive functions. ### Value `formals` returns the formal argument list of the function specified, as a `[pairlist](list)`, or `NULL` for a non-function or primitive. The replacement form sets the formals of a function to the list/pairlist on the right hand side, and (potentially) resets the environment of the function, dropping `<attributes>`. ### See Also `[formalArgs](../../methods/html/methodutilities)` (from methods), a shortcut for `names(formals(.))`. `<args>` for a human-readable version, `[alist](list)` to *construct* a typical formals `value`, see the examples. The three parts of a (non-primitive) `<function>` are its `formals`, `<body>`, and `<environment>`. ### Examples ``` require(stats) formals(lm) ## If you just want the names of the arguments, use formalArgs instead. names(formals(lm)) methods:: formalArgs(lm) # same ## formals returns a pairlist. Arguments with no default have type symbol (aka name). str(formals(lm)) ## formals returns NULL for primitive functions. Use it in combination with ## args for this case. is.primitive(`+`) formals(`+`) formals(args(`+`)) ## You can overwrite the formal arguments of a function (though this is ## advanced, dangerous coding).
f <- function(x) a + b formals(f) <- alist(a = , b = 3) f # function(a, b = 3) a + b f(2) # result = 5 ``` r None `Last.value` Value of Last Evaluated Expression ------------------------------------------------ ### Description The value of the internal evaluation of a top-level **R** expression is always assigned to `.Last.value` (in `package:base`) before further processing (e.g., printing). ### Usage ``` .Last.value ``` ### Details The value of a top-level assignment *is* put in `.Last.value`, unlike S. Do not assign to `.Last.value` in the workspace, because this will always mask the object of the same name in `package:base`. ### See Also `<eval>` ### Examples ``` ## These will not work correctly from example(), ## but they will in make check or if pasted in, ## as example() does not run them at the top level gamma(1:15) # think of some intensive calculation... fac14 <- .Last.value # keep them library("splines") # returns invisibly .Last.value # shows what library(.) above returned ``` r None `Sys.getenv` Get Environment Variables --------------------------------------- ### Description `Sys.getenv` obtains the values of the environment variables. ### Usage ``` Sys.getenv(x = NULL, unset = "", names = NA) ``` ### Arguments | | | | --- | --- | | `x` | a character vector, or `NULL`. | | `unset` | a character string. | | `names` | logical: should the result be named? If `NA` (the default) single-element results are not named whereas multi-element results are. | ### Details Both arguments will be coerced to character if necessary. Setting `unset = NA` will enable unset variables and those set to the value `""` to be distinguished, *if the OS does*. POSIX requires the OS to distinguish, and all known current **R** platforms do. ### Value A vector of the same length as `x`, with (if `names == TRUE`) the variable names as its `names` attribute. Each element holds the value of the environment variable named by the corresponding component of `x` (or the value of `unset` if no environment variable with that name was found). On most platforms `Sys.getenv()` will return a named vector giving the values of all the environment variables, sorted in the current locale. It may be confused by names containing `=` which some platforms allow but POSIX does not. (Windows is such a platform: there names including `=` are truncated just before the first `=`.) When `x` is missing and `names` is not false, the result is of class `"Dlist"` in order to get a nice `<print>` method. ### See Also `[Sys.setenv](sys.setenv)`, `[Sys.getlocale](locales)` for the locale in use, `<getwd>` for the working directory. The help for ‘[environment variables](envvar)’ lists many of the environment variables used by **R**. ### Examples ``` ## whether HOST is set will be shell-dependent e.g. Solaris' csh does not. Sys.getenv(c("R_HOME", "R_PAPERSIZE", "R_PRINTCMD", "HOST")) names(s <- Sys.getenv()) # all settings (the values could be very long) head(s, 12)# using the Dlist print() method ## Language and Locale settings -- but rather use Sys.getlocale() s[grep("^L(C|ANG)", names(s))] ``` r None `range` Range of Values ------------------------ ### Description `range` returns a vector containing the minimum and maximum of all the given arguments. ### Usage ``` range(..., na.rm = FALSE) ## Default S3 method: range(..., na.rm = FALSE, finite = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | any `<numeric>` or character objects. | | `na.rm` | logical, indicating if `[NA](na)`'s should be omitted. 
| | `finite` | logical, indicating if all non-finite elements should be omitted. | ### Details `range` is a generic function: methods can be defined for it directly or via the `[Summary](groupgeneric)` group generic. For this to work properly, the arguments `...` should be unnamed, and dispatch is on the first argument. If `na.rm` is `FALSE`, `NA` and `NaN` values in any of the arguments will cause `NA` values to be returned, otherwise `NA` values are ignored. If `finite` is `TRUE`, the minimum and maximum of all finite values are computed, i.e., `finite = TRUE` *includes* `na.rm = TRUE`. A special situation occurs when there is no nonempty argument left (after omission of `NA`s); see `[min](extremes)`. ### S4 methods This is part of the S4 `[Summary](../../methods/html/s4groupgeneric)` group generic. Methods for it must use the signature `x, ..., na.rm`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[min](extremes)`, `[max](extremes)`. The `[extendrange](../../grdevices/html/extendrange)()` utility in package grDevices. ### Examples ``` (r.x <- range(stats::rnorm(100))) diff(r.x) # the SAMPLE range x <- c(NA, 1:3, -1:1/0); x range(x) range(x, na.rm = TRUE) range(x, finite = TRUE) ``` r None `debug` Debug a Function ------------------------- ### Description Set, unset or query the debugging flag on a function. The `text` and `condition` arguments are the same as those that can be supplied via a call to `<browser>`. They can be retrieved by the user once the browser has been entered, and provide a mechanism to allow users to identify which breakpoint has been activated. ### Usage ``` debug(fun, text = "", condition = NULL, signature = NULL) debugonce(fun, text = "", condition = NULL, signature = NULL) undebug(fun, signature = NULL) isdebugged(fun, signature = NULL) debuggingState(on = NULL) ``` ### Arguments | | | | --- | --- | | `fun` | any interpreted **R** function. | | `text` | a text string that can be retrieved when the browser is entered. | | `condition` | a condition that can be retrieved when the browser is entered. | | `signature` | an optional method signature. If specified, the method is debugged, rather than its generic. | | `on` | logical; a call to the support function `debuggingState` returns `TRUE` if debugging is globally turned on, `FALSE` otherwise. An argument of one or the other of those values sets the state. If the debugging state is `FALSE`, none of the debugging actions will occur (but explicit `<browser>` calls in functions will continue to work). | ### Details When a function flagged for debugging is entered, normal execution is suspended and the body of the function is executed one statement at a time. A new `<browser>` context is initiated for each step (and the previous one destroyed). At the debug prompt the user can enter commands or **R** expressions, followed by a newline. The commands are described in the `<browser>` help topic. To debug a function which is defined inside another function, single-step through to the end of its definition, and then call `debug` on its name. If you want to debug a function not starting at the very beginning, use `<trace>(..., at = *)` or `[setBreakpoint](../../utils/html/findlinenum)`. Using `debug` is persistent, and unless debugging is turned off the debugger will be entered on every invocation (note that if the function is removed and replaced the debug state is not preserved). 
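Because the flag is persistent as just described, its lifecycle can be seen in a minimal sketch (the toy function `f` is invented for illustration):

```
f <- function(x) x + 1
debug(f)
isdebugged(f)  # TRUE: every call to f would now enter the browser
undebug(f)
isdebugged(f)  # FALSE: the flag has been cleared
```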
Use `debugonce()` to enter the debugger only the next time the function is invoked. To debug an S4 method by explicit signature, use `signature`. When specified, signature indicates the method of `fun` to be debugged. Note that debugging is implemented slightly differently for this case, as it uses the trace machinery, rather than the debugging bit. As such, `text` and `condition` cannot be specified in combination with a non-null `signature`. For methods which implement the `.local` rematching mechanism, the `.local` closure itself is the one that will be ultimately debugged (see `[isRematched](../../methods/html/rmethodutils)`). `isdebugged` returns `TRUE` if a) `signature` is `NULL` and the closure `fun` has been debugged, or b) `signature` is not `NULL`, `fun` is an S4 generic, and the method of `fun` for that signature has been debugged. In all other cases, it returns `FALSE`. The number of lines printed for the deparsed call when a function is entered for debugging can be limited by setting `<options>(deparse.max.lines)`. When debugging is enabled on a byte-compiled function then the interpreted version of the function will be used until debugging is disabled. ### Value `debug` and `undebug` invisibly return `NULL`. `isdebugged` returns `TRUE` if the function or method is marked for debugging, and `FALSE` otherwise. ### See Also `[debugcall](../../utils/html/debugcall)` for conveniently debugging methods, `<browser>` notably for its ‘*commands*’, `<trace>`; `<traceback>` to see the stack after an `Error: ...` message; `[recover](../../utils/html/recover)` for another debugging approach. ### Examples ``` ## Not run: debug(library) library(methods) ## End(Not run) ## Not run: debugonce(sample) ## only the first call will be debugged sample(10, 1) sample(10, 1) ## End(Not run) ``` r None `conditions` Condition Handling and Recovery --------------------------------------------- ### Description These functions provide a mechanism for handling unusual conditions, including errors and warnings. ### Usage ``` tryCatch(expr, ..., finally) withCallingHandlers(expr, ...) globalCallingHandlers(...) signalCondition(cond) simpleCondition(message, call = NULL) simpleError (message, call = NULL) simpleWarning (message, call = NULL) simpleMessage (message, call = NULL) errorCondition(message, ..., class = NULL, call = NULL) warningCondition(message, ..., class = NULL, call = NULL) ## S3 method for class 'condition' as.character(x, ...) ## S3 method for class 'error' as.character(x, ...) ## S3 method for class 'condition' print(x, ...) ## S3 method for class 'restart' print(x, ...) conditionCall(c) ## S3 method for class 'condition' conditionCall(c) conditionMessage(c) ## S3 method for class 'condition' conditionMessage(c) withRestarts(expr, ...) computeRestarts(cond = NULL) findRestart(name, cond = NULL) invokeRestart(r, ...) tryInvokeRestart(r, ...) invokeRestartInteractively(r) isRestart(x) restartDescription(r) restartFormals(r) suspendInterrupts(expr) allowInterrupts(expr) .signalSimpleWarning(msg, call) .handleSimpleError(h, msg, call) .tryResumeInterrupt() ``` ### Arguments | | | | --- | --- | | `c` | a condition object. | | `call` | call expression. | | `cond` | a condition object. | | `expr` | expression to be evaluated. | | `finally` | expression to be evaluated before returning or exiting. | | `h` | function. | | `message` | character string. | | `msg` | character string. | | `name` | character string naming a restart. | | `r` | restart object. | | `x` | object. 
| | `class` | character string naming a condition class. | | `...` | additional arguments; see details below. | ### Details The condition system provides a mechanism for signaling and handling unusual conditions, including errors and warnings. Conditions are represented as objects that contain information about the condition that occurred, such as a message and the call in which the condition occurred. Currently conditions are S3-style objects, though this may eventually change. Conditions are objects inheriting from the abstract class `condition`. Errors and warnings are objects inheriting from the abstract subclasses `error` and `warning`. The class `simpleError` is the class used by `stop` and all internal error signals. Similarly, `simpleWarning` is used by `warning`, and `simpleMessage` is used by `message`. The constructors by the same names take a string describing the condition as argument and an optional call. The functions `conditionMessage` and `conditionCall` are generic functions that return the message and call of a condition. The function `errorCondition` can be used to construct error conditions of a particular class with additional fields specified as the `...` argument. `warningCondition` is analogous for warnings. Conditions are signaled by `signalCondition`. In addition, the `stop` and `warning` functions have been modified to also accept condition arguments. The function `tryCatch` evaluates its expression argument in a context where the handlers provided in the `...` argument are available. The `finally` expression is then evaluated in the context in which `tryCatch` was called; that is, the handlers supplied to the current `tryCatch` call are not active when the `finally` expression is evaluated. Handlers provided in the `...` argument to `tryCatch` are established for the duration of the evaluation of `expr`. If no condition is signaled when evaluating `expr` then `tryCatch` returns the value of the expression. If a condition is signaled while evaluating `expr` then established handlers are checked, starting with the most recently established ones, for one matching the class of the condition. When several handlers are supplied in a single `tryCatch` then the first one is considered more recent than the second. If a handler is found then control is transferred to the `tryCatch` call that established the handler, the handler found and all more recent handlers are disestablished, the handler is called with the condition as its argument, and the result returned by the handler is returned as the value of the `tryCatch` call. Calling handlers are established by `withCallingHandlers`. If a condition is signaled and the applicable handler is a calling handler, then the handler is called by `signalCondition` in the context where the condition was signaled but with the available handlers restricted to those below the handler called in the handler stack. If the handler returns, then the next handler is tried; once the last handler has been tried, `signalCondition` returns `NULL`. `globalCallingHandlers` establishes calling handlers globally. These handlers are only called as a last resort, after the other handlers dynamically registered with `withCallingHandlers` have been invoked. They are called before the `error` global option (which is the legacy interface for global handling of errors). Registering the same handler multiple times moves that handler on top of the stack, which ensures that it is called first. 
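For instance, a minimal sketch of the registration interface (to be run at top level; the handler body is illustrative only):

```
## establish a global calling handler for conditions of class "warning"
globalCallingHandlers(
    warning = function(w) cat("global handler saw:", conditionMessage(w), "\n")
)
globalCallingHandlers()      # returns the currently established handlers, visibly
globalCallingHandlers(NULL)  # unregisters them all; the deleted list is returned invisibly
```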
Global handlers are a good place to define a general purpose logger (for instance saving the last error object in the global workspace) or a general recovery strategy (e.g. installing missing packages via the `retry_loadNamespace` restart). Like `withCallingHandlers` and `tryCatch`, `globalCallingHandlers` takes named handlers. Unlike these functions, it also has an `<options>`-like interface: you can establish handlers by passing a single list of named handlers. To unregister all global handlers, supply a single `NULL`. The list of deleted handlers is returned invisibly. Finally, calling `globalCallingHandlers` without arguments returns the list of currently established handlers, visibly. User interrupts signal a condition of class `interrupt` that inherits directly from class `condition` before executing the default interrupt action. Restarts are used for establishing recovery protocols. They can be established using `withRestarts`. One pre-established restart is an `abort` restart that represents a jump to top level. `findRestart` and `computeRestarts` find the available restarts. `findRestart` returns the most recently established restart of the specified name. `computeRestarts` returns a list of all restarts. Both can be given a condition argument and will then ignore restarts that do not apply to the condition. `invokeRestart` transfers control to the point where the specified restart was established and calls the restart's handler with the arguments, if any, given as additional arguments to `invokeRestart`. The restart argument to `invokeRestart` can be a character string, in which case `findRestart` is used to find the restart. If no restart is found, an error is thrown. `tryInvokeRestart` is a variant of `invokeRestart` that returns silently when the restart cannot be found with `findRestart`. Because a condition of a given class might be signalled with arbitrary protocols (error, warning, etc.), it is recommended to use this permissive variant whenever you are handling conditions signalled from a foreign context. For instance, invocation of a `"muffleWarning"` restart should be optional because the warning might have been signalled by the user or from a different package with the `stop` or `message` protocols. Only use `invokeRestart` when you have control of the signalling context, or when it is a logical error if the restart is not available. New restarts for `withRestarts` can be specified in several ways. The simplest is in `name = function` form where the function is the handler to call when the restart is invoked. Another simple variant is as `name = string` where the string is stored in the `description` field of the restart object returned by `findRestart`; in this case the handler ignores its arguments and returns `NULL`. The most flexible form of a restart specification is as a list that can include several fields, including `handler`, `description`, and `test`. The `test` field should contain a function of one argument, a condition, that returns `TRUE` if the restart applies to the condition and `FALSE` if it does not; the default function returns `TRUE` for all conditions. One additional field that can be specified for a restart is `interactive`. This should be a function of no arguments that returns a list of arguments to pass to the restart handler. The list could be obtained by interacting with the user if necessary. The function `invokeRestartInteractively` calls this function to obtain the arguments to use when invoking the restart. 
The default `interactive` method queries the user for values for the formal arguments of the handler function. Interrupts can be suspended while evaluating an expression using `suspendInterrupts`. Subexpressions can be evaluated with interrupts enabled using `allowInterrupts`. These functions can be used to make sure cleanup handlers cannot be interrupted. `.signalSimpleWarning`, `.handleSimpleError`, and `.tryResumeInterrupt` are used internally and should not be called directly. ### References The `tryCatch` mechanism is similar to Java error handling. Calling handlers are based on Common Lisp and Dylan. Restarts are based on the Common Lisp restart mechanism. ### See Also `<stop>` and `<warning>` signal conditions, and `<try>` is essentially a simplified version of `tryCatch`. `[assertCondition](../../tools/html/assertcondition)` in package tools *tests* that conditions are signalled and works with several of the above handlers. ### Examples ``` tryCatch(1, finally = print("Hello")) e <- simpleError("test error") ## Not run: stop(e) tryCatch(stop(e), finally = print("Hello")) tryCatch(stop("fred"), finally = print("Hello")) ## End(Not run) tryCatch(stop(e), error = function(e) e, finally = print("Hello")) tryCatch(stop("fred"), error = function(e) e, finally = print("Hello")) withCallingHandlers({ warning("A"); 1+2 }, warning = function(w) {}) ## Not run: { withRestarts(stop("A"), abort = function() {}); 1 } ## End(Not run) withRestarts(invokeRestart("foo", 1, 2), foo = function(x, y) {x + y}) ##--> More examples are part of ##--> demo(error.catching) ```
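As a supplementary sketch of `errorCondition`, a custom condition class can be caught by name in `tryCatch`; the class `storageError` and the extra field `path` are invented for illustration:

```
cond <- errorCondition("disk full", path = "/tmp", class = "storageError")
## the extra field travels with the condition object
tryCatch(stop(cond),
         storageError = function(e) paste("cannot write to", e$path))
```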
r None `lapply` Apply a Function over a List or Vector ------------------------------------------------ ### Description `lapply` returns a list of the same length as `X`, each element of which is the result of applying `FUN` to the corresponding element of `X`. `sapply` is a user-friendly version and wrapper of `lapply` by default returning a vector, matrix or, if `simplify = "array"`, an array if appropriate, by applying `simplify2array()`. `sapply(x, f, simplify = FALSE, USE.NAMES = FALSE)` is the same as `lapply(x, f)`. `vapply` is similar to `sapply`, but has a pre-specified type of return value, so it can be safer (and sometimes faster) to use. `replicate` is a wrapper for the common use of `sapply` for repeated evaluation of an expression (which will usually involve random number generation). `simplify2array()` is the utility called from `sapply()` when `simplify` is not false and is similarly called from `<mapply>()`. ### Usage ``` lapply(X, FUN, ...) sapply(X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE) vapply(X, FUN, FUN.VALUE, ..., USE.NAMES = TRUE) replicate(n, expr, simplify = "array") simplify2array(x, higher = TRUE) ``` ### Arguments | | | | --- | --- | | `X` | a vector (atomic or list) or an `<expression>` object. Other objects (including classed objects) will be coerced by `base::[as.list](list)`. | | `FUN` | the function to be applied to each element of `X`: see ‘Details’. In the case of functions like `+`, `%*%`, the function name must be backquoted or quoted. | | `...` | optional arguments to `FUN`. | | `simplify` | logical or character string; should the result be simplified to a vector, matrix or higher dimensional array if possible? For `sapply` it must be named and not abbreviated. The default value, `TRUE`, returns a vector or matrix if appropriate, whereas if `simplify = "array"` the result may be an `<array>` of “rank” (*=*`length(dim(.))`) one higher than the result of `FUN(X[[i]])`. | | `USE.NAMES` | logical; if `TRUE` and if `X` is character, use `X` as `<names>` for the result unless it had names already. Since this argument follows `...` its name cannot be abbreviated. | | `FUN.VALUE` | a (generalized) vector; a template for the return value from FUN. See ‘Details’. | | `n` | integer: the number of replications. | | `expr` | the expression (a [language object](is.language), usually a call) to evaluate repeatedly. | | `x` | a list, typically returned from `lapply()`. | | `higher` | logical; if true, `simplify2array()` will produce a (“higher rank”) array when appropriate, whereas `higher = FALSE` would return a matrix (or vector) only. These two cases correspond to `sapply(*, simplify = "array")` or `simplify = TRUE`, respectively. | ### Details `FUN` is found by a call to `<match.fun>` and typically is specified as a function or a symbol (e.g., a backquoted name) or a character string specifying a function to be searched for from the environment of the call to `lapply`. Function `FUN` must be able to accept as input any of the elements of `X`. If the latter is an atomic vector, `FUN` will always be passed a length-one vector of the same type as `X`. Arguments in `...` cannot have the same name as any of the other arguments, and care may be needed to avoid partial matching to `FUN`. In general-purpose code it is good practice to name the first two arguments `X` and `FUN` if `...` is passed through: this both avoids partial matching to `FUN` and ensures that a sensible error message is given if arguments named `X` or `FUN` are passed through `...`. 
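A short sketch of that naming advice (the wrapper `apply2` is invented for illustration):

```
## naming the first two arguments X and FUN prevents arguments passed
## through '...' from partially matching them
apply2 <- function(X, FUN, ...) lapply(X, FUN, ...)
unlist(apply2(1:3, round, digits = 2))  # 'digits' is passed on to round()
```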
Simplification in `sapply` is only attempted if `X` has length greater than zero and if the return values from all elements of `X` are all of the same (positive) length. If the common length is one the result is a vector, and if greater than one is a matrix with a column corresponding to each element of `X`. Simplification is always done in `vapply`. This function checks that all values of `FUN` are compatible with the `FUN.VALUE`, in that they must have the same length and type. (Types may be promoted to a higher type within the ordering logical < integer < double < complex, but not demoted.) Users of S4 classes should pass a list to `lapply` and `vapply`: the internal coercion is done by the `as.list` in the base namespace and not one defined by a user (e.g., by setting S4 methods on the base function). ### Value For `lapply`, `sapply(simplify = FALSE)` and `replicate(simplify = FALSE)`, a list. For `sapply(simplify = TRUE)` and `replicate(simplify = TRUE)`: if `X` has length zero or `n = 0`, an empty list. Otherwise an atomic vector or matrix or list of the same length as `X` (of length `n` for `replicate`). If simplification occurs, the output type is determined from the highest type of the return values in the hierarchy NULL < raw < logical < integer < double < complex < character < list < expression, after coercion of pairlists to lists. `vapply` returns a vector or array of type matching the `FUN.VALUE`. If `length(FUN.VALUE) == 1` a vector of the same length as `X` is returned, otherwise an array. If `FUN.VALUE` is not an `<array>`, the result is a matrix with `length(FUN.VALUE)` rows and `length(X)` columns, otherwise an array `a` with `<dim>(a) == c(dim(FUN.VALUE), length(X))`. The (Dim)names of the array value are taken from the `FUN.VALUE` if it is named, otherwise from the result of the first function call. Column names of the matrix or more generally the names of the last dimension of the array value or names of the vector value are set from `X` as in `sapply`. ### Note `sapply(*, simplify = FALSE, USE.NAMES = FALSE)` is equivalent to `lapply(*)`. For historical reasons, the calls created by `lapply` are unevaluated, and code has been written (e.g., `bquote`) that relies on this. This means that the recorded call is always of the form `FUN(X[[i]], ...)`, with `i` replaced by the current (integer or double) index. This is not normally a problem, but it can be if `FUN` uses `[sys.call](sys.parent)` or `<match.call>` or if it is a primitive function that makes use of the call. This means that it is often safer to call primitive functions with a wrapper, so that e.g. `lapply(ll, function(x) is.numeric(x))` is required to ensure that method dispatch for `is.numeric` occurs correctly. If `expr` is a function call, be aware of assumptions about where it is evaluated, and in particular what `...` might refer to. You can pass additional named arguments to a function call as additional named arguments to `replicate`: see ‘Examples’. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<apply>`, `<tapply>`, `<mapply>` for applying a function to **m**ultiple arguments, and `<rapply>` for a **r**ecursive version of `lapply()`, `<eapply>` for applying a function to each entry in an `<environment>`. 
### Examples ``` require(stats); require(graphics) x <- list(a = 1:10, beta = exp(-3:3), logic = c(TRUE,FALSE,FALSE,TRUE)) # compute the list mean for each list element lapply(x, mean) # median and quartiles for each list element lapply(x, quantile, probs = 1:3/4) sapply(x, quantile) i39 <- sapply(3:9, seq) # list of vectors sapply(i39, fivenum) vapply(i39, fivenum, c(Min. = 0, "1st Qu." = 0, Median = 0, "3rd Qu." = 0, Max. = 0)) ## sapply(*, "array") -- artificial example (v <- structure(10*(5:8), names = LETTERS[1:4])) f2 <- function(x, y) outer(rep(x, length.out = 3), y) (a2 <- sapply(v, f2, y = 2*(1:5), simplify = "array")) a.2 <- vapply(v, f2, outer(1:3, 1:5), y = 2*(1:5)) stopifnot(dim(a2) == c(3,5,4), all.equal(a2, a.2), identical(dimnames(a2), list(NULL,NULL,LETTERS[1:4]))) hist(replicate(100, mean(rexp(10)))) ## use of replicate() with parameters: foo <- function(x = 1, y = 2) c(x, y) # does not work: bar <- function(n, ...) replicate(n, foo(...)) bar <- function(n, x) replicate(n, foo(x = x)) bar(5, x = 3) ``` r None `strptime` Date-time Conversion Functions to and from Character ---------------------------------------------------------------- ### Description Functions to convert between character representations and objects of classes `"POSIXlt"` and `"POSIXct"` representing calendar dates and times. ### Usage ``` ## S3 method for class 'POSIXct' format(x, format = "", tz = "", usetz = FALSE, ...) ## S3 method for class 'POSIXlt' format(x, format = "", usetz = FALSE, ...) ## S3 method for class 'POSIXt' as.character(x, ...) strftime(x, format = "", tz = "", usetz = FALSE, ...) strptime(x, format, tz = "") ``` ### Arguments | | | | --- | --- | | `x` | An object to be converted: a character vector for `strptime`, an object which can be converted to `"[POSIXlt](datetimeclasses)"` for `strftime`. | | `tz` | A character string specifying the time zone to be used for the conversion. System-specific (see `[as.POSIXlt](as.posixlt)`), but `""` is the current time zone, and `"GMT"` is UTC. Invalid values are most commonly treated as UTC, on some platforms with a warning. | | `format` | A character string. The default for the `format` methods is `"%Y-%m-%d %H:%M:%S"` if any element has a time component which is not midnight, and `"%Y-%m-%d"` otherwise. If `<options>("digits.secs")` is set, up to the specified number of digits will be printed for seconds. | | `...` | Further arguments to be passed from or to other methods. | | `usetz` | logical. Should the time zone abbreviation be appended to the output? This is used in printing times, and more reliable than using `"%Z"`. | ### Details The `format` and `as.character` methods and `strftime` convert objects from the classes `"[POSIXlt](datetimeclasses)"` and `"[POSIXct](datetimeclasses)"` to character vectors. `strptime` converts character vectors to class `"POSIXlt"`: its input `x` is first converted by `[as.character](character)`. Each input string is processed as far as necessary for the format specified: any trailing characters are ignored. `strftime` is a wrapper for `format.POSIXlt`, and it and `format.POSIXct` first convert to class `"POSIXlt"` by calling `[as.POSIXlt](as.posixlt)` (so they also work for class `"[Date](dates)"`). Note that only that conversion depends on the time zone. The usual vector re-cycling rules are applied to `x` and `format` so the answer will be of length of the longer of these vectors. Locale-specific conversions to and from character strings are used where appropriate and available. 
This affects the names of the days and months, the AM/PM indicator (if used) and the separators in output formats such as `%x` and `%X`, *via* the setting of the `[LC\_TIME](locales)` locale category. The ‘current locale’ of the descriptions might mean the locale in use at the start of the **R** session or when these functions are first used. (For input, the locale-specific conversions can be changed by calling `[Sys.setlocale](locales)` with category `LC_TIME` (or `LC_ALL`). For output, what happens depends on the OS but usually works.) The details of the formats are platform-specific, but the following are likely to be widely available: most are defined by the POSIX standard. A *conversion specification* is introduced by `%`, usually followed by a single letter or `O` or `E` and then a single letter. Any character in the format string not part of a conversion specification is interpreted literally (and `%%` gives `%`). Widely implemented conversion specifications include `%a` Abbreviated weekday name in the current locale on this platform. (Also matches full name on input: in some locales there are no abbreviations of names.) `%A` Full weekday name in the current locale. (Also matches abbreviated name on input.) `%b` Abbreviated month name in the current locale on this platform. (Also matches full name on input: in some locales there are no abbreviations of names.) `%B` Full month name in the current locale. (Also matches abbreviated name on input.) `%c` Date and time. Locale-specific on output, `"%a %b %e %H:%M:%S %Y"` on input. `%C` Century (00–99): the integer part of the year divided by 100. `%d` Day of the month as decimal number (01–31). `%D` Date format such as `%m/%d/%y`: the C99 standard says it should be that exact format (but not all OSes comply). `%e` Day of the month as decimal number (1–31), with a leading space for a single-digit number. `%F` Equivalent to %Y-%m-%d (the ISO 8601 date format). `%g` The last two digits of the week-based year (see `%V`). (Accepted but ignored on input.) `%G` The week-based year (see `%V`) as a decimal number. (Accepted but ignored on input.) `%h` Equivalent to `%b`. `%H` Hours as decimal number (00–23). As a special exception strings such as 24:00:00 are accepted for input, since ISO 8601 allows these. `%I` Hours as decimal number (01–12). `%j` Day of year as decimal number (001–366): For input, 366 is only valid in a leap year. `%m` Month as decimal number (01–12). `%M` Minute as decimal number (00–59). `%n` Newline on output, arbitrary whitespace on input. `%p` AM/PM indicator in the locale. Used in conjunction with `%I` and **not** with `%H`. An empty string in some locales (for example on some OSes, non-English European locales including Russia). The behaviour is undefined if used for input in such a locale. Some platforms accept `%P` for output, which uses a lower-case version (`%p` may also use lower case): others will output `P`. `%r` For output, the 12-hour clock time (using the locale's AM or PM): only defined in some locales, and on some OSes misleading in locales which do not define an AM/PM indicator. For input, equivalent to `%I:%M:%S %p`. `%R` Equivalent to `%H:%M`. `%S` Second as integer (00–61), allowing for up to two leap-seconds (but POSIX-compliant implementations will ignore leap seconds). `%t` Tab on output, arbitrary whitespace on input. `%T` Equivalent to `%H:%M:%S`. `%u` Weekday as a decimal number (1–7, Monday is 1). 
`%U` Week of the year as decimal number (00–53) using Sunday as the first day of the week (and typically with the first Sunday of the year as day 1 of week 1). The US convention. `%V` Week of the year as decimal number (01–53) as defined in ISO 8601. If the week (starting on Monday) containing 1 January has four or more days in the new year, then it is considered week 1. Otherwise, it is the last week of the previous year, and the next week is week 1. (Accepted but ignored on input.) `%w` Weekday as decimal number (0–6, Sunday is 0). `%W` Week of the year as decimal number (00–53) using Monday as the first day of the week (and typically with the first Monday of the year as day 1 of week 1). The UK convention. `%x` Date. Locale-specific on output, `"%y/%m/%d"` on input. `%X` Time. Locale-specific on output, `"%H:%M:%S"` on input. `%y` Year without century (00–99). On input, values 00 to 68 are prefixed by 20 and 69 to 99 by 19 – that is the behaviour specified by the 2018 POSIX standard, but it does also say ‘it is expected that in a future version the default century inferred from a 2-digit year will change’. `%Y` Year with century. Note that whereas there was no zero in the original Gregorian calendar, ISO 8601:2004 defines it to be valid (interpreted as 1BC): see <https://en.wikipedia.org/wiki/0_(year)>. However, the standards also say that years before 1582 in its calendar should only be used with agreement of the parties involved. For input, only years `0:9999` are accepted. `%z` Signed offset in hours and minutes from UTC, so `-0800` is 8 hours behind UTC. Values up to `+1400` are accepted. (Standard only for output. For input **R** currently supports it on all platforms.) `%Z` (Output only.) Time zone abbreviation as a character string (empty if not available). This may not be reliable when a time zone has changed abbreviations over the years. Where leading zeros are shown they will be used on output but are optional on input. Names are matched case-insensitively on input: whether they are capitalized on output depends on the platform and the locale. Note that abbreviated names are platform-specific (although the standards specify that in the C locale they must be the first three letters of the capitalized English name: this convention is widely used in English-language locales but for example the French month abbreviations are not the same on any two of Linux, macOS, Solaris and Windows). Knowing what the abbreviations are is essential if you wish to use `%a`, `%b` or `%h` as part of an input format: see the examples for how to check. When `%z` or `%Z` is used for output with an object with an assigned time zone an attempt is made to use the values for that time zone — but it is not guaranteed to succeed. Not in the standards and less widely implemented are `%k` The 24-hour clock time with single digits preceded by a blank. `%l` The 12-hour clock time with single digits preceded by a blank. `%s` (Output only.) The number of seconds since the epoch. `%+` (Output only.) Similar to `%c`, often `"%a %b %e %H:%M:%S %Z %Y"`. May depend on the locale. For output there are also `%O[dHImMUVwWy]` which may emit numbers in an alternative locale-dependent format (e.g., roman numerals), and `%E[cCyYxX]` which can use an alternative ‘era’ (e.g., a different religious calendar). Which of these are supported is OS-dependent. These are accepted for input, but with the standard interpretation. 
Specific to **R** is `%OSn`, which for output gives the seconds truncated to `0 <= n <= 6` decimal places (and if `%OS` is not followed by a digit, it uses the setting of `[getOption](options)("digits.secs")`, or if that is unset, `n = 0`). Further, for `strptime` `%OS` will input seconds including fractional seconds. Note that `%S` neither reads fractional parts on input nor outputs them. The behaviour of other conversion specifications (and even if other character sequences commencing with `%` *are* conversion specifications) is system-specific. Some systems document that the use of multi-byte characters in `format` is unsupported: UTF-8 locales are unlikely to cause a problem. ### Value The `format` methods and `strftime` return character vectors representing the time. `NA` times are returned as `NA_character_`. The elements are restricted to 256 bytes, plus a time zone abbreviation if `usetz` is true. (On known platforms longer strings are truncated at 255 or 256 bytes, but this is not guaranteed by the C99 standard.) `strptime` turns character representations into an object of class `"[POSIXlt](datetimeclasses)"`. The time zone is used to set the `isdst` component and to set the `"tzone"` attribute if `tz != ""`. If the specified time is invalid (for example "2010-02-30 08:00") all the components of the result are `NA`. (NB: this does mean exactly what it says – if it is an invalid time, not just a time that does not exist in some time zone.) ### Printing years Everyone agrees that years from 1000 to 9999 should be printed with 4 digits, but the standards do not define what is to be done outside that range. For years 0 to 999 most OSes pad with zeros or spaces to 4 characters, and Linux outputs just the number. OS facilities will probably not print years before 1 CE (aka 1 AD) ‘correctly’ (they tend to assume the existence of a year 0: see <https://en.wikipedia.org/wiki/0_(year)>, and some OSes get them completely wrong). Common formats are `-45` and `-045`. Years after 9999 and before -999 are normally printed with five or more characters. Some platforms support modifiers from POSIX 2008 (and others). On Linux the format `"%04Y"` assures a minimum of four characters and zero-padding. The internal code (as used on Windows and by default on macOS) uses zero-padding by default, and formats `%_4Y` and `%_Y` can be used for space padding and no padding. ### Time zone offsets Offsets from GMT (also known as UTC) are part of the conversion between timezones and to/from class `"POSIXct"`, but cause difficulties as they are often computed incorrectly. They conventionally have the opposite sign from time-zone specifications (see `[Sys.timezone](timezones)`): positive values are East of the meridian. Although there have been time zones with offsets like 00:09:21 (Paris in 1900), and 00:44:30 (Liberia until 1972), offsets are usually treated as whole numbers of minutes, and are most often seen in RFC 5322 email headers in forms like `-0800` (e.g., used on the Pacific coast of the USA in winter). Format `%z` can be used for input or output: it is a character string, conventionally plus or minus followed by two digits for hours and two for minutes: the standards say that an empty string should be output if the offset is unknown, but some systems use the offsets for the time zone in use for the current year. ### Sources Input uses the POSIX function `strptime` and output the C99 function `strftime`. 
However, not all OSes (notably Windows) provided `strptime` and many issues were found for those which did, so since 2000 **R** has used a fork of code from glibc. The forked code uses the system's `strftime` to find the locale-specific day and month names and any AM/PM indicator. On some platforms (including Windows and by default on macOS) the system's `strftime` is replaced (along with most of the rest of the C-level datetime code) by code modified from IANA's tzcode distribution (<https://www.iana.org/time-zones>). ### Note The default formats follow the rules of the ISO 8601 international standard which expresses a day as `"2001-02-28"` and a time as `"14:01:02"` using leading zeroes as here. (The ISO form uses no space to separate dates and times: **R** does by default.) For `strptime` the input string need not specify the date completely: it is assumed that unspecified seconds, minutes or hours are zero, and an unspecified year, month or day is the current one. (However, if a month is specified, the day of that month has to be specified by `%d` or `%e` since the current day of the month need not be valid for the specified month.) Some components may be returned as `NA` (but an unknown `tzone` component is represented by an empty string). If the time zone specified is invalid on your system, what happens is system-specific but it will probably be ignored. Remember that in most time zones some times do not occur and some occur twice because of transitions to/from ‘daylight saving’ (also known as ‘summer’) time. `strptime` does not validate such times (it does not assume a specific time zone), but conversion by `[as.POSIXct](as.posixlt)` will do so. Conversion by `strftime` and formatting/printing uses OS facilities and may return nonsensical results for non-existent times at DST transitions. In a C locale `%c` is required to be `"%a %b %e %H:%M:%S %Y"`. As Windows does not comply (and uses a date format not understood outside N. America), that format is used by **R** on Windows in all locales. ### References International Organization for Standardization (2004, 2000, ...) ‘ISO 8601. Data elements and interchange formats – Information interchange – Representation of dates and times.’, slightly updated to International Organization for Standardization (2019) ‘ISO 8601-1:2019. Date and time – Representations for information interchange – Part 1: Basic rules’. For links to versions available on-line see (at the time of writing) <https://dotat.at/tmp/ISO_8601-2004_E.pdf> and <https://www.qsl.net/g1smd/isopdf.htm>; for information on the current official version, see <https://www.iso.org/iso/iso8601>. The POSIX 1003.1 standard, which is in some respects stricter than ISO 8601. ### See Also [DateTimeClasses](datetimeclasses) for details of the date-time classes; <locales> to query or set a locale. Your system's help page on `strftime` to see how to specify their formats. (On some systems, including Windows, `strftime` is replaced by more comprehensive internal code.) ### Examples ``` ## locale-specific version of date() format(Sys.time(), "%a %b %d %X %Y %Z") ## time to sub-second accuracy (if supported by the OS) format(Sys.time(), "%H:%M:%OS3") ## read in date info in format 'ddmmmyyyy' ## This will give NA(s) in some non-English locales; setting the C locale ## as in the commented lines will overcome this on most systems. 
## lct <- Sys.getlocale("LC_TIME"); Sys.setlocale("LC_TIME", "C") x <- c("1jan1960", "2jan1960", "31mar1960", "30jul1960") z <- strptime(x, "%d%b%Y") ## Sys.setlocale("LC_TIME", lct) z ## read in date/time info in format 'm/d/y h:m:s' dates <- c("02/27/92", "02/27/92", "01/14/92", "02/28/92", "02/01/92") times <- c("23:03:20", "22:29:56", "01:03:30", "18:21:03", "16:56:26") x <- paste(dates, times) strptime(x, "%m/%d/%y %H:%M:%S") ## time with fractional seconds z <- strptime("20/2/06 11:16:16.683", "%d/%m/%y %H:%M:%OS") z # prints without fractional seconds op <- options(digits.secs = 3) z options(op) ## time zone names are not portable, but 'EST5EDT' comes pretty close. (x <- strptime(c("2006-01-08 10:07:52", "2006-08-07 19:33:02"), "%Y-%m-%d %H:%M:%S", tz = "EST5EDT")) attr(x, "tzone") ## An RFC 5322 header (Eastern Canada, during DST) ## In a non-English locale the commented lines may be needed. ## prev <- Sys.getlocale("LC_TIME"); Sys.setlocale("LC_TIME", "C") strptime("Tue, 23 Mar 2010 14:36:38 -0400", "%a, %d %b %Y %H:%M:%S %z") ## Sys.setlocale("LC_TIME", prev) ## Make sure you know what the abbreviated names are for you if you wish ## to use them for input (they are matched case-insensitively): format(seq.Date(as.Date('1978-01-01'), by = 'day', len = 7), "%a") format(seq.Date(as.Date('2000-01-01'), by = 'month', len = 12), "%b") ```
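As a small additional sketch of the week-numbering specifiers described above (output only; exact values can depend on the platform's `strftime`):

```
d <- as.Date(c("2015-12-31", "2016-01-01", "2016-01-04"))
## US (%U), ISO 8601 (%V) and UK (%W) week numbers side by side
format(d, "%Y-%m-%d: U=%U V=%V W=%W (ISO weekday u=%u)")
```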
r None `name` Names and Symbols ------------------------- ### Description A ‘name’ (also known as a ‘symbol’) is a way to refer to **R** objects by name (rather than the value of the object, if any, bound to that name). `as.name` and `as.symbol` are identical: they attempt to coerce the argument to a name. `is.symbol` and the identical `is.name` return `TRUE` or `FALSE` depending on whether the argument is a name or not. ### Usage ``` as.symbol(x) is.symbol(x) as.name(x) is.name(x) ``` ### Arguments | | | | --- | --- | | `x` | object to be coerced or tested. | ### Details Names are limited to 10,000 bytes (and were to 256 bytes in versions of **R** before 2.13.0). `as.name` first coerces its argument internally to a character vector (so methods for `as.character` are not used). It then takes the first element and provided it is not `""`, returns a symbol of that name (and if the element is `NA_character_`, the name is ``NA``). `as.name` is implemented as `[as.vector](vector)(x, "symbol")`, and hence will dispatch methods for the generic function `as.vector`. `is.name` and `is.symbol` are <primitive> functions. ### Value For `as.name` and `as.symbol`, an **R** object of type `"symbol"` (see `<typeof>`). For `is.name` and `is.symbol`, a length-one logical vector with value `TRUE` or `FALSE`. ### Note The term ‘symbol’ is from the LISP background of **R**, whereas ‘name’ has been the standard S term for this. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<call>`, `<is.language>`. For the internal object mode, `<typeof>`. `[plotmath](../../grdevices/html/plotmath)` for another use of ‘symbol’. ### Examples ``` an <- as.name("arrg") is.name(an) # TRUE mode(an) # name typeof(an) # symbol ``` r None `regmatches` Extract or Replace Matched Substrings --------------------------------------------------- ### Description Extract or replace matched substrings from match data obtained by `[regexpr](grep)`, `[gregexpr](grep)`, `[regexec](grep)` or `[gregexec](grep)`. ### Usage ``` regmatches(x, m, invert = FALSE) regmatches(x, m, invert = FALSE) <- value ``` ### Arguments | | | | --- | --- | | `x` | a character vector | | `m` | an object with match data | | `invert` | a logical: if `TRUE`, extract or replace the non-matched substrings. | | `value` | an object with suitable replacement values for the matched or non-matched substrings (see `Details`). | ### Details If `invert` is `FALSE` (default), `regmatches` extracts the matched substrings as specified by the match data. For vector match data (as obtained from `[regexpr](grep)`), empty matches are dropped; for list match data, empty matches give empty components (zero-length character vectors). If `invert` is `TRUE`, `regmatches` extracts the non-matched substrings, i.e., the strings are split according to the matches similar to `<strsplit>` (for vector match data, at most a single split is performed). If `invert` is `NA`, `regmatches` extracts both non-matched and matched substrings, always starting and ending with a non-match (empty if the match occurred at the beginning or the end, respectively). Note that the match data can be obtained from regular expression matching on a modified version of `x` with the same numbers of characters. The replacement function can be used for replacing the matched or non-matched substrings. For vector match data, if `invert` is `FALSE`, `value` should be a character vector with length the number of matched elements in `m`. 
Otherwise, it should be a list of character vectors with the same length as `m`, each as long as the number of replacements needed. Replacement coerces values to character or list and generously recycles values as needed. Missing replacement values are not allowed. ### Value For `regmatches`, a character vector with the matched substrings if `m` is a vector and `invert` is `FALSE`. Otherwise, a list with the matched or/and non-matched substrings. For `regmatches<-`, the updated character vector. ### Examples ``` x <- c("A and B", "A, B and C", "A, B, C and D", "foobar") pattern <- "[[:space:]]*(,|and)[[:space:]]" ## Match data from regexpr() m <- regexpr(pattern, x) regmatches(x, m) regmatches(x, m, invert = TRUE) ## Match data from gregexpr() m <- gregexpr(pattern, x) regmatches(x, m) regmatches(x, m, invert = TRUE) ## Consider x <- "John (fishing, hunting), Paul (hiking, biking)" ## Suppose we want to split at the comma (plus spaces) between the ## persons, but not at the commas in the parenthesized hobby lists. ## One idea is to "blank out" the parenthesized parts to match the ## parts to be used for splitting, and extract the persons as the ## non-matched parts. ## First, match the parenthesized hobby lists. m <- gregexpr("\\([^)]*\\)", x) ## Create blank strings with given numbers of characters. blanks <- function(n) strrep(" ", n) ## Create a copy of x with the parenthesized parts blanked out. s <- x regmatches(s, m) <- Map(blanks, lapply(regmatches(s, m), nchar)) s ## Compute the positions of the split matches (note that we cannot call ## strsplit() on x with match data from s). m <- gregexpr(", *", s) ## And finally extract the non-matched parts. regmatches(x, m, invert = TRUE) ## regexec() and gregexec() return overlapping ranges because the ## first match is the full match. This conflicts with regmatches()<- ## and regmatches(..., invert=TRUE). We can work-around by dropping ## the first match. drop_first <- function(x) { if(!anyNA(x) && all(x > 0)) { ml <- attr(x, 'match.length') if(is.matrix(x)) x <- x[-1,] else x <- x[-1] attr(x, 'match.length') <- if(is.matrix(ml)) ml[-1,] else ml[-1] } x } m <- gregexec("(\\w+) \\(((?:\\w+(?:, )?)+)\\)", x) regmatches(x, m) try(regmatches(x, m, invert=TRUE)) regmatches(x, lapply(m, drop_first)) ## invert=TRUE loses matrix structure because we are retrieving what ## is in between every sub-match regmatches(x, lapply(m, drop_first), invert=TRUE) y <- z <- x ## Notice **list**(...) on the RHS regmatches(y, lapply(m, drop_first)) <- list(c("<NAME>", "<HOBBY-LIST>")) y regmatches(z, lapply(m, drop_first), invert=TRUE) <- list(sprintf("<%d>", 1:5)) z ## With `perl = TRUE` and `invert = FALSE` capture group names ## are preserved. Collect functions and arguments in calls: NEWS <- head(readLines(file.path(R.home(), 'doc', 'NEWS.2')), 100) m <- gregexec("(?<fun>\\w+)\\((?<args>[^)]*)\\)", NEWS, perl = TRUE) y <- regmatches(NEWS, m) y[[16]] ## Make tabular, adding original line numbers mdat <- as.data.frame(t(do.call(cbind, y))) mdat <- cbind(mdat, line=rep(seq_along(y), lengths(y) / ncol(mdat))) head(mdat) NEWS[head(mdat[['line']])] ``` r None `cat` Concatenate and Print ---------------------------- ### Description Outputs the objects, concatenating the representations. `cat` performs much less conversion than `<print>`. ### Usage ``` cat(... , file = "", sep = " ", fill = FALSE, labels = NULL, append = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | **R** objects (see ‘Details’ for the types of objects allowed). 
| | `file` | A [connection](connections), or a character string naming the file to print to. If `""` (the default), `cat` prints to the standard output connection, the console unless redirected by `<sink>`. If it is `"|cmd"`, the output is piped to the command given by ‘cmd’, by opening a pipe connection. | | `sep` | a character vector of strings to append after each element. | | `fill` | a logical or (positive) numeric controlling how the output is broken into successive lines. If `FALSE` (default), only newlines created explicitly by "\n" are printed. Otherwise, the output is broken into lines with print width equal to the option `width` if `fill` is `TRUE`, or the value of `fill` if this is numeric. Linefeeds are only inserted *between* elements, strings wider than `fill` are not wrapped. Non-positive `fill` values are ignored, with a warning. | | `labels` | character vector of labels for the lines printed. Ignored if `fill` is `FALSE`. | | `append` | logical. Only used if the argument `file` is the name of file (and not a connection or `"|cmd"`). If `TRUE` output will be appended to `file`; otherwise, it will overwrite the contents of `file`. | ### Details `cat` is useful for producing output in user-defined functions. It converts its arguments to character vectors, concatenates them to a single character vector, appends the given `sep =` string(s) to each element and then outputs them. No linefeeds are output unless explicitly requested by "\n" or if generated by filling (if argument `fill` is `TRUE` or numeric). If `file` is a connection and open for writing it is written from its current position. If it is not open, it is opened for the duration of the call in `"wt"` mode and then closed again. Currently only [atomic](vector) vectors and <name>s are handled, together with `NULL` and other zero-length objects (which produce no output). Character strings are output ‘as is’ (unlike `<print.default>` which escapes non-printable characters and backslash — use `[encodeString](encodestring)` if you want to output encoded strings using `cat`). Other types of **R** object should be converted (e.g., by `[as.character](character)` or `<format>`) before being passed to `cat`. That includes factors, which are output as integer vectors. `cat` converts numeric/complex elements in the same way as `print` (and not in the same way as `[as.character](character)` which is used by the S equivalent), so `<options>` `"digits"` and `"scipen"` are relevant. However, it uses the minimum field width necessary for each element, rather than the same field width for all elements. ### Value None (invisible `NULL`). ### Note If any element of `sep` contains a newline character, it is treated as a vector of terminators rather than separators, an element being output after every vector element *and* a newline after the last. Entries are recycled as needed. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<print>`, `<format>`, and `<paste>` which concatenates into a string. 
### Examples ``` iter <- stats::rpois(1, lambda = 10) ## print an informative message cat("iteration = ", iter <- iter + 1, "\n") ## 'fill' and label lines: cat(paste(letters, 100* 1:26), fill = TRUE, labels = paste0("{", 1:10, "}:")) ``` r None `ISOdatetime` Date-time Conversion Functions from Numeric Representations -------------------------------------------------------------------------- ### Description Convenience wrappers to create date-times from numeric representations. ### Usage ``` ISOdatetime(year, month, day, hour, min, sec, tz = "") ISOdate(year, month, day, hour = 12, min = 0, sec = 0, tz = "GMT") ``` ### Arguments | | | | --- | --- | | `year, month, day` | numerical values to specify a day. | | `hour, min, sec` | numerical values for a time within a day. Fractional seconds are allowed. | | `tz` | A [time zone](timezones) specification to be used for the conversion. `""` is the current time zone and `"GMT"` is UTC. Invalid values are most commonly treated as UTC, on some platforms with a warning. | ### Details `ISOdatetime` and `ISOdate` are convenience wrappers for `strptime` that differ only in their defaults and that `ISOdate` sets UTC as the time zone. For dates without times it would normally be better to use the `"[Date](dates)"` class. The main arguments will be recycled using the usual recycling rules. Because these make use of `<strptime>`, only years in the range `0:9999` are accepted. ### Value An object of class `"[POSIXct](datetimeclasses)"`. ### See Also [DateTimeClasses](datetimeclasses) for details of the date-time classes; `<strptime>` for conversions from character strings. r None `trimws` Remove Leading/Trailing Whitespace -------------------------------------------- ### Description Remove leading and/or trailing whitespace from character strings. ### Usage ``` trimws(x, which = c("both", "left", "right"), whitespace = "[ \t\r\n]") ``` ### Arguments | | | | --- | --- | | `x` | a character vector | | `which` | a character string specifying whether to remove both leading and trailing whitespace (default), or only leading (`"left"`) or trailing (`"right"`). Can be abbreviated. | | `whitespace` | a string specifying a regular expression to match (one character of) “white space”, see Details for alternatives to the default. | ### Details Internally, `[sub](grep)(re, "", *, perl = TRUE)`, i.e., PCRE library regular expressions are used. For portability, the default ‘whitespace’ is the character class `[ \t\r\n]` (space, horizontal tab, carriage return, newline). Alternatively, `[\h\v]` is a good (PCRE) generalization to match all Unicode horizontal and vertical white space characters, see also <https://www.pcre.org>. ### Examples ``` x <- " Some text. " x trimws(x) trimws(x, "l") trimws(x, "r") ## Unicode --> need "stronger" 'whitespace' to match all : tt <- "text with unicode 'non breakable space'." xu <- paste(" \t\v", tt, "\u00a0 \n\r") (tu <- trimws(xu, whitespace = "[\\h\\v]")) stopifnot(identical(tu, tt)) ``` r None `marginSums` Compute table margins ----------------------------------- ### Description For a contingency table in array form, compute the sum of table entries for a given margin or set of margins. ### Usage ``` marginSums(x, margin = NULL) margin.table(x, margin = NULL) ``` ### Arguments | | | | --- | --- | | `x` | an array | | `margin` | a vector giving the margins to compute sums for. E.g., for a matrix `1` indicates rows, `2` indicates columns, `c(1, 2)` indicates rows and columns. 
When `x` has named dimnames, it can be a character vector selecting dimension names. | ### Value The relevant marginal table, or just the sum of all entries if `margin` has length zero. The class of `x` is copied to the output table if `margin` is non-NULL. ### Note `margin.table` is an earlier name, retained for back-compatibility. ### Author(s) Peter Dalgaard ### See Also `<proportions>` and `[addmargins](../../stats/html/addmargins)`. ### Examples ``` m <- matrix(1:4, 2) marginSums(m, 1) marginSums(m, 2) DF <- as.data.frame(UCBAdmissions) tbl <- xtabs(Freq ~ Gender + Admit, DF) marginSums(tbl, "Gender") proportions(tbl, "Gender") ``` r None `memory.profile` Profile the Usage of Cons Cells ------------------------------------------------- ### Description Lists the usage of the cons cells by `SEXPREC` type. ### Usage ``` memory.profile() ``` ### Details The current types and their uses are listed in the include file ‘Rinternals.h’. ### Value A vector of counts, named by the types. See `<typeof>` for an explanation of types. ### See Also `<gc>` for the overall usage of cons cells. `[Rprofmem](../../utils/html/rprofmem)` and `<tracemem>` allow memory profiling of specific code or objects, but need to be enabled at compile time. ### Examples ``` memory.profile() ``` r None `gc.time` Report Time Spent in Garbage Collection -------------------------------------------------- ### Description This function reports the time spent in garbage collection so far in the **R** session while GC timing was enabled. ### Usage ``` gc.time(on = TRUE) ``` ### Arguments | | | | --- | --- | | `on` | logical; if `TRUE`, GC timing is enabled. | ### Details Due to timer resolution this may be an under-estimate. This is a <primitive>. ### Value A numerical vector of length 5 giving the user CPU time, the system CPU time, the elapsed time and children's user and system CPU times (normally both zero), of time spent doing garbage collection whilst GC timing was enabled. Times of child processes are not available on Windows and will always be given as `NA`. ### See Also `<gc>`, `<proc.time>` for the timings for the session. ### Examples ``` gc.time() ``` r None `options` Options Settings --------------------------- ### Description Allow the user to set and examine a variety of global *options* which affect the way in which **R** computes and displays its results. ### Usage ``` options(...) getOption(x, default = NULL) .Options ``` ### Arguments | | | | --- | --- | | `...` | any options can be defined, using `name = value`. However, only the ones below are used in base **R**. Options can also be passed by giving a single unnamed argument which is a named list. | | `x` | a character string holding an option name. | | `default` | if the specified option is not set in the options list, this value is returned. This facilitates retrieving an option and checking whether it is set and setting it separately if not. | ### Details Invoking `options()` with no arguments returns a list with the current values of the options. Note that not all options listed below are set initially. To access the value of a single option, one should use, e.g., `getOption("width")` rather than `options("width")`, which is a *list* of length one. ### Value For `getOption`, the current value set for option `x`, or `default` (which defaults to `NULL`) if the option is unset. For `options()`, a list of all set options sorted by name. For `options(name)`, a list of length one containing the set value, or `NULL` if it is unset.
For uses setting one or more options, a list with the previous values of the options changed (returned invisibly). ### Options used in base **R** `add.smooth`: typically logical, defaulting to `TRUE`. Could also be set to an integer for specifying how many (simulated) smooths should be added. This is currently only used by `[plot.lm](../../stats/html/plot.lm)`. `askYesNo`: a function (typically set by a front-end) to ask the user binary response questions in a consistent way, or a vector of strings used by `[askYesNo](../../utils/html/askyesno)` to use as default responses for such questions. `browserNLdisabled`: logical: whether newline is disabled as a synonym for `"n"` in the browser. `checkPackageLicense`: logical, not set by default. If true, `[loadNamespace](ns-load)` asks a user to accept any non-standard license at first load of the package. `check.bounds`: logical, defaulting to `FALSE`. If true, a <warning> is produced whenever a vector (atomic or `<list>`) is extended, by something like `x <- 1:3; x[5] <- 6`. `CBoundsCheck`: logical, controlling whether `[.C](foreign)` and `[.Fortran](foreign)` make copies to check for array over-runs on the atomic vector arguments. Initially set from the value of the environment variable R\_C\_BOUNDS\_CHECK (set to `yes` to enable). `conflicts.policy`: character string or list controlling handling of conflicts found in calls to `<library>` or `[require](library)`. See `<library>` for details. `continue`: a non-empty string setting the prompt used for lines which continue over one line. `defaultPackages`: the packages that are attached by default when **R** starts up. Initially set from the value of the environment variable R\_DEFAULT\_PACKAGES, or if that is unset to `c("datasets", "utils", "grDevices", "graphics", "stats", "methods")`. (Set R\_DEFAULT\_PACKAGES to `NULL` or a comma-separated list of package names.) It will not work to set this in a ‘.Rprofile’ file, as its value is consulted before that file is read. `deparse.cutoff`: integer value controlling the printing of language constructs which are `<deparse>`d. Default `60`. `deparse.max.lines`: controls the number of lines used when deparsing in `<browser>`, upon entry to a function whose debugging flag is set, and if option `traceback.max.lines` is unset, of `<traceback>()`. Initially unset, and only used if set to a positive integer. `traceback.max.lines`: controls the number of lines used when deparsing in `<traceback>`, if set. Initially unset, and only used if set to a positive integer. `digits`: controls the number of *significant* (see `[signif](round)`) digits to print when printing numeric values. It is a suggestion only. Valid values are 1...22 with default 7. See the note in `<print.default>` about values greater than 15. `digits.secs`: controls the maximum number of digits to print when formatting time values in seconds. Valid values are 0...6 with default 0. See `[strftime](strptime)`. `download.file.extra`: Extra command-line argument(s) for non-default methods: see `[download.file](../../utils/html/download.file)`. `download.file.method`: Method to be used for `download.file`. Currently download methods `"internal"`, `"wininet"` (Windows only), `"libcurl"`, `"wget"` and `"curl"` are available. If not set, `method = "auto"` is chosen: see `[download.file](../../utils/html/download.file)`. `echo`: logical. Only used in non-interactive mode, when it controls whether input is echoed. Command-line option --no-echo sets this to `FALSE`, but otherwise it starts the session as `TRUE`.
`encoding`: The name of an encoding, default `"native.enc"`. See `<connections>`. `error`: either a function or an expression governing the handling of non-catastrophic errors such as those generated by `<stop>` as well as by signals and internally detected errors. If the option is a function, a call to that function, with no arguments, is generated as the expression. By default the option is not set: see `<stop>` for the behaviour in that case. The functions `[dump.frames](../../utils/html/debugger)` and `[recover](../../utils/html/recover)` provide alternatives that allow post-mortem debugging. Note that these need to be specified as e.g. `options(error = utils::recover)` in startup files such as ‘[.Rprofile](startup)’. `expressions`: sets a limit on the number of nested expressions that will be evaluated. Valid values are 25...500000 with default 5000. If you increase it, you may also want to start **R** with a larger protection stack; see --max-ppsize in `[Memory](memory)`. Note too that you may cause a segfault from overflow of the C stack, and on OSes where it is possible you may want to increase that. Once the limit is reached an error is thrown. The current number under evaluation can be found by calling `[Cstack\_info](cstack_info)`. `interrupt`: a function taking no arguments to be called on a user interrupt if the interrupt condition is not otherwise handled. `keep.parse.data`: When internally storing source code (`keep.source` is TRUE), also store parse data. Parse data can then be retrieved with `[getParseData](../../utils/html/getparsedata)()` and used e.g. for spell checking of string constants or syntax highlighting. The value has effect only when internally storing source code (see `keep.source`). The default is `TRUE`. `keep.parse.data.pkgs`: As for `keep.parse.data`, used only when packages are installed. Defaults to `FALSE` unless the environment variable R\_KEEP\_PKG\_PARSE\_DATA is set to `yes`. The space overhead of parse data can be substantial even after compression and it causes performance overhead when loading packages. `keep.source`: When `TRUE`, the source code for functions (newly defined or loaded) is stored internally allowing comments to be kept in the right places. Retrieve the source by printing or using `deparse(fn, control = "useSource")`. The default is `<interactive>()`, i.e., `TRUE` for interactive use. `keep.source.pkgs`: As for `keep.source`, used only when packages are installed. Defaults to `FALSE` unless the environment variable R\_KEEP\_PKG\_SOURCE is set to `yes`. `matprod`: a string selecting the implementation of the matrix products `[%\*%](matmult)`, `<crossprod>`, and `[tcrossprod](crossprod)` for double and complex vectors: `"internal"` uses an unoptimized 3-loop algorithm which correctly propagates `[NaN](is.finite)` and `[Inf](is.finite)` values and is consistent in precision with other summation algorithms inside **R** like `<sum>` or `[colSums](colsums)` (which now means that it uses a `long double` accumulator for summation if available and enabled, see `<capabilities>`). `"default"` uses BLAS to speed up computation, but to ensure correct propagation of `NaN` and `Inf` values it uses an unoptimized 3-loop algorithm for inputs that may contain `NaN` or `Inf` values. When deemed beneficial for performance, `"default"` may call the 3-loop algorithm unconditionally, i.e., without checking the input for `NaN`/`Inf` values.
The 3-loop algorithm uses (only) a `double` accumulator for summation, which is consistent with the reference BLAS implementation. `"blas"` uses BLAS unconditionally without any checks and should be used with extreme caution. BLAS libraries do not propagate `[NaN](is.finite)` or `[Inf](is.finite)` values correctly and for inputs with `NaN`/`Inf` values the results may be undefined. `"default.simd"` is experimental and will likely be removed in future versions of **R**. It provides the same behavior as `"default"`, but the check whether the input contains `NaN`/`Inf` values is faster on some SIMD hardware. On older systems it will run correctly, but may be much slower than `"default"`. `max.print`: integer, defaulting to `99999`. `<print>` or `[show](../../methods/html/show)` methods can make use of this option, to limit the amount of information that is printed, to something in the order of (and typically slightly less than) `max.print` *entries*. `OutDec`: character string containing a single character. The preferred character to be used as the decimal point in output conversions, that is in printing, plotting, `<format>` and `[as.character](character)` but not when deparsing nor by `<sprintf>` nor `[formatC](formatc)` (which are sometimes used prior to printing). `pager`: the command used for displaying text files by `<file.show>`, details depending on the platform: On a unix-alike defaults to ‘[R\_HOME](rhome)/bin/pager’, which is a shell script running the command-line specified by the environment variable PAGER whose default is set at configuration, usually to `less`. On Windows defaults to `"internal"`, which uses a pager similar to the GUI console. Another possibility is `"console"` to use the console itself. Can be a character string or an **R** function, in which case it needs to accept the arguments `(files, header, title, delete.file)` corresponding to the first four arguments of `<file.show>`. `papersize`: the default paper format used by `[postscript](../../grdevices/html/postscript)`; set by environment variable R\_PAPERSIZE when **R** is started: if that is unset or invalid the default is platform-dependent: on a unix-alike, a value derived from the locale category `LC_PAPER`, or if that is unavailable a default set when **R** was built; on Windows, `"a4"`, or `"letter"` in US and Canadian locales. `PCRE_limit_recursion`: Logical: should `<grep>(perl = TRUE)` and similar limit the maximal recursion allowed when matching? Only relevant for PCRE1 and PCRE2 <= 10.23. PCRE can be built not to use a recursion stack (see `<pcre_config>`), but it uses recursion by default with a recursion limit of 10000000 which potentially needs a very large C stack: see the discussion at <https://www.pcre.org/original/doc/html/pcrestack.html>. If true, the limit is reduced using **R**'s estimate of the C stack size available (if known), otherwise 10000. If `NA`, the limit is imposed only if any input string has 1000 or more bytes. The limit has no effect when PCRE's Just-in-Time compiler is used. `PCRE_study`: Logical or integer: should `<grep>(perl = TRUE)` and similar ‘study’ the patterns? Either logical or a numerical threshold for the minimum number of strings to be matched for the pattern to be studied (the default is `10`). Missing values and negative numbers are treated as false. This option is ignored with PCRE2 (PCRE version >= 10.00) which does not have a separate study phase and patterns are automatically optimized when possible.
`PCRE_use_JIT`: Logical: should `<grep>(perl = TRUE)`, `<strsplit>(perl = TRUE)` and similar make use of PCRE's Just-In-Time compiler if available? (This applies only to studied patterns with PCRE1.) Default: true. Missing values are treated as false. `pdfviewer`: default PDF viewer. The default is set from the environment variable R\_PDFVIEWER, the default value of which on a unix-alike is set when **R** is configured, and on Windows is the full path to `open.exe`, a utility supplied with **R**. `printcmd`: the command used by `[postscript](../../grdevices/html/postscript)` for printing; set by environment variable R\_PRINTCMD when **R** is started. This should be a command that expects either input to be piped to ‘stdin’ or to be given a single filename argument. Usually set to `"lpr"` on a Unix-alike. `prompt`: a non-empty string to be used for **R**'s prompt; should usually end in a blank (`" "`). `rl_word_breaks`: (Unix only:) Used for the readline-based terminal interface. Default value `" \t\n\"\\'`><=%;,|&{()}"`. This is the set of characters used to break the input line into tokens for object- and file-name completion. Those who do not use spaces around operators may prefer `" \t\n\"\\'`><=+-*%;,|&{()}"`. `save.defaults`, `save.image.defaults`: see `<save>`. `scipen`: integer. A penalty to be applied when deciding to print numeric values in fixed or exponential notation. Positive values bias towards fixed and negative towards scientific notation: fixed notation will be preferred unless it is more than `scipen` digits wider. `setWidthOnResize`: a logical. If set and `TRUE`, **R** run in a terminal using a recent `readline` library will set the `width` option when the terminal is resized. `showWarnCalls`, `showErrorCalls`: a logical. Should warning and error messages show a summary of the call stack? By default error calls are shown in non-interactive sessions. `showNCalls`: integer. Controls how long the sequence of calls must be (in bytes) before ellipses are used. Defaults to 40 and should be at least 30 and no more than 500. `show.error.locations`: Should source locations of errors be printed? If set to `TRUE` or `"top"`, the source location that is highest on the stack (the most recent call) will be printed. `"bottom"` will print the location of the earliest call found on the stack. Integer values can select other entries. The value `0` corresponds to `"top"` and positive values count down the stack from there. The value `-1` corresponds to `"bottom"` and negative values count up from there. `show.error.messages`: a logical. Should error messages be printed? Intended for use with `<try>` or a user-installed error handler. `stringsAsFactors`: The default setting for `[default.stringsAsFactors](data.frame)`, which in **R** < 4.1.0 was used to provide the default values of the `stringsAsFactors` argument of `<data.frame>` and `[read.table](../../utils/html/read.table)`. `texi2dvi`: used by functions `[texi2dvi](../../tools/html/texi2dvi)` and `[texi2pdf](../../tools/html/texi2dvi)` in package tools. Unix-alike only: set at startup from the environment variable R\_TEXI2DVICMD, which defaults first to the value of environment variable TEXI2DVI, and then to a value set when **R** was installed (the full path to a `texi2dvi` script if one was found). If necessary, that environment variable can be set to `"emulation"`. `timeout`: positive integer. The timeout for some Internet operations, in seconds.
Default 60 (seconds) but can be set from environment variable R\_DEFAULT\_INTERNET\_TIMEOUT. (Invalid values of the option or the variable are silently ignored: non-integer numeric values will be truncated.) See `[download.file](../../utils/html/download.file)` and `<connections>`. `topLevelEnvironment`: see `[topenv](ns-topenv)` and `<sys.source>`. `url.method`: character string: the default method for `[url](connections)`. Normally unset, which is equivalent to `"default"`, which is `"internal"` except on Windows. `useFancyQuotes`: controls the use of directional quotes in `[sQuote](squote)`, `dQuote` and in rendering text help (see `[Rd2txt](../../tools/html/rd2html)` in package tools). Can be `TRUE`, `FALSE`, `"TeX"` or `"UTF-8"`. `verbose`: logical. Should **R** report extra information on progress? Set to `TRUE` by the command-line option --verbose. `warn`: integer value to set the handling of warning messages. If `warn` is negative, all warnings are ignored. If `warn` is zero (the default), warnings are stored until the top-level function returns. If 10 or fewer warnings were signalled, they will be printed, otherwise a message saying how many were signalled. An object called `last.warning` is created and can be printed through the function `<warnings>`. If `warn` is one, warnings are printed as they occur. If `warn` is two (or larger, coercible to integer), all warnings are turned into errors. `warnPartialMatchArgs`: logical. If true, warns if partial matching is used in argument matching. `warnPartialMatchAttr`: logical. If true, warns if partial matching is used in extracting attributes via `<attr>`. `warnPartialMatchDollar`: logical. If true, warns if partial matching is used for extraction by `$`. `warning.expression`: an **R** code expression to be called if a warning is generated, replacing the standard message. If non-null it is called irrespective of the value of option `warn`. `warning.length`: sets the truncation limit in bytes for error and warning messages. A non-negative integer, with allowed values 100...8170, default 1000. `nwarnings`: the limit for the number of warnings kept when `warn = 0`, default 50. This will discard messages if called whilst they are being collected. If you increase this limit, be aware that the current implementation pre-allocates the equivalent of a named list for them, i.e., do not increase it to more than, say, a million. `width`: controls the maximum number of columns on a line used in printing vectors, matrices and arrays, and when filling by `<cat>`. Columns are normally the same as characters except in East Asian languages. You may want to change this if you re-size the window that **R** is running in. Valid values are 10...10000 with default normally 80. (The limits on valid values are in file ‘Print.h’ and can be changed by re-compiling **R**.) Some **R** consoles automatically change the value when they are resized. See the examples on [Startup](startup) for one way to set this automatically from the terminal width when **R** is started.
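As noted under ‘Value’, setting options returns the previous values invisibly, which supports the usual save-and-restore idiom (a minimal sketch):

```
op <- options(digits = 3, warn = 1) # set options, capturing the old values
sqrt(2)                             # printed with 3 significant digits
options(op)                         # restore the previous settings
```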
The ‘factory-fresh’ default settings of some of these options are | | | | --- | --- | | `add.smooth` | `TRUE` | | `check.bounds` | `FALSE` | | `continue` | `"+ "` | | `digits` | `7` | | `echo` | `TRUE` | | `encoding` | `"native.enc"` | | `error` | `NULL` | | `expressions` | `5000` | | `keep.source` | `interactive()` | | `keep.source.pkgs` | `FALSE` | | `max.print` | `99999` | | `OutDec` | `"."` | | `prompt` | `"> "` | | `scipen` | `0` | | `show.error.messages` | `TRUE` | | `timeout` | `60` | | `verbose` | `FALSE` | | `warn` | `0` | | `warning.length` | `1000` | | `width` | `80` | | | Others are set from environment variables or are platform-dependent. ### Options set in package grDevices These will be set when package grDevices (or its namespace) is loaded if not already set. `bitmapType`: (Unix only, incl. macOS) character. The default type for the bitmap devices such as `[png](../../grdevices/html/png)`. Defaults to `"cairo"` on systems where that is available, or to `"quartz"` on macOS where that is available. `device`: a character string giving the name of a function, or the function object itself, which when called creates a new graphics device of the default type for that session. The value of this option defaults to the normal screen device (e.g., `X11`, `windows` or `quartz`) for an interactive session, and `pdf` in batch use or if a screen is not available. If set to the name of a device, the device is looked for first from the global environment (that is down the usual search path) and then in the grDevices namespace. The default values in interactive and non-interactive sessions are configurable via environment variables R\_INTERACTIVE\_DEVICE and R\_DEFAULT\_DEVICE respectively. The search logic for ‘the normal screen device’ is that this is `windows` on Windows, and `quartz` if available on macOS (running at the console, and compiled into the build). Otherwise `X11` is used if environment variable DISPLAY is set. `device.ask.default`: logical. The default for `[devAskNewPage](../../grdevices/html/devasknewpage)("ask")` when a device is opened. `locatorBell`: logical. Should selection in `locator` and `identify` be confirmed by a bell? Default `TRUE`. Honoured at least on `X11` and `windows` devices. `windowsTimeout`: (Windows-only) integer vector of length 2 representing two times in milliseconds. These control the double-buffering of `[windows](../../grdevices/html/windows)` devices when that is enabled: the first is the delay after plotting finishes (default 100) and the second is the update interval during continuous plotting (default 500). The values at the time the device is opened are used. ### Other options used by package graphics `max.contour.segments`: positive integer, defaulting to `25000` if not set. A limit on the number of segments in a single contour line in `[contour](../../graphics/html/contour)` or `[contourLines](../../grdevices/html/contourlines)`. ### Options set in package stats These will be set when package stats (or its namespace) is loaded if not already set. `contrasts`: the default `[contrasts](../../stats/html/contrasts)` used in model fitting such as with `[aov](../../stats/html/aov)` or `[lm](../../stats/html/lm)`. A character vector of length two, the first giving the function to be used with unordered factors and the second the function to be used with ordered factors. By default the elements are named `c("unordered", "ordered")`, but the names are unused. 
`na.action`: the name of a function for treating missing values (`[NA](na)`'s) for certain situations, see `[na.action](../../stats/html/na.action)` and `[na.pass](../../stats/html/na.fail)`. `show.coef.Pvalues`: logical, affecting whether P values are printed in summary tables of coefficients. See `[printCoefmat](../../stats/html/printcoefmat)`. `show.nls.convergence`: logical, should `[nls](../../stats/html/nls)` convergence messages be printed for successful fits? `show.signif.stars`: logical, should stars be printed on summary tables of coefficients? See `[printCoefmat](../../stats/html/printcoefmat)`. `ts.eps`: the relative tolerance for certain time series (`[ts](../../stats/html/ts)`) computations. Default `1e-05`. `ts.S.compat`: logical. Used to select S compatibility for plotting time-series spectra. See the description of argument `log` in `[plot.spec](../../stats/html/plot.spec)`. ### Options set (or used) in package utils These will be set (apart from `Ncpus`) when package utils (or its namespace) is loaded if not already set. `BioC_mirror`: The URL of a Bioconductor mirror for use by `[setRepositories](../../utils/html/setrepositories)`, e.g. the default "https://bioconductor.org" or the European mirror "https://bioconductor.statistik.tu-dortmund.de". Can be set by `[chooseBioCmirror](../../utils/html/choosebiocmirror)`. `browser`: The HTML browser to be used by `[browseURL](../../utils/html/browseurl)`. This sets the default browser on UNIX or a non-default browser on Windows. Alternatively, an **R** function that is called with a URL as its argument. See `[browseURL](../../utils/html/browseurl)` for further details. `ccaddress`: default Cc: address used by `[create.post](../../utils/html/create.post)` (and hence `[bug.report](../../utils/html/bug.report)` and `[help.request](../../utils/html/help.request)`). Can be `FALSE` or `""`. `citation.bibtex.max`: default 1; the maximal number of bibentries (`[bibentry](../../utils/html/bibentry)`) in a `[citation](../../utils/html/citation)` for which the bibtex version is printed in addition to the text one. `de.cellwidth`: integer: the cell widths (number of characters) to be used in the data editor `[dataentry](../../utils/html/dataentry)`. If this is unset (the default), 0, negative or `NA`, variable cell widths are used. `demo.ask`: default for the `ask` argument of `[demo](../../utils/html/demo)`. `editor`: a non-empty character string or an **R** function that sets the default text editor, e.g., for `[edit](../../utils/html/edit)` and `[file.edit](../../utils/html/file.edit)`. Set from the environment variable EDITOR on UNIX, or if unset VISUAL or `vi`. As a string it should specify the name of or path to an external command. `example.ask`: default for the `ask` argument of `[example](../../utils/html/example)`. `help.ports`: optional integer vector for setting ports of the internal HTTP server, see `[startDynamicHelp](../../tools/html/startdynamichelp)`. `help.search.types`: default types of documentation to be searched by `[help.search](../../utils/html/help.search)` and `[??](../../utils/html/help.search)`. `help.try.all.packages`: default for an argument of `[help](../../utils/html/help)`. `help_type`: default for an argument of `[help](../../utils/html/help)`, used also as the help type by `[?](../../utils/html/question)`.
`HTTPUserAgent`: string used as the ‘user agent’ in HTTP(S) requests by `[download.file](../../utils/html/download.file)`, `[url](connections)` and `[curlGetHeaders](curlgetheaders)`, or `NULL` when requests will be made without a user agent header. The default is `R (<version> <platform> <arch> <os>)` except when libcurl is used when it is `libcurl/7.<xx>.<y>` for the libcurl version in use. `install.lock`: logical: should per-directory package locking be used by `[install.packages](../../utils/html/install.packages)`? Most useful for binary installs on macOS and Windows, but can be used in a startup file for source installs *via* `R CMD [INSTALL](../../utils/html/install)`. For binary installs, can also be the character string `"pkglock"`. `internet.info`: The minimum level of information to be printed on URL downloads etc., using the `"internal"` and `"libcurl"` methods. Default is 2, for failure causes. Set to 1 or 0 to get more detailed information (for the `"internal"` method 0 provides more information than 1). `install.packages.check.source`: Used by `[install.packages](../../utils/html/install.packages)` (and indirectly `[update.packages](../../utils/html/update.packages)`) on platforms which support binary packages. Possible values `"yes"` and `"no"`, with unset being equivalent to `"yes"`. `install.packages.compile.from.source`: Used by `[install.packages](../../utils/html/install.packages)(type = "both")` (and indirectly `[update.packages](../../utils/html/update.packages)`) on platforms which support binary packages. Possible values are `"never"`, `"interactive"` (which means ask in interactive use and `"never"` in batch use) and `"always"`. The default is taken from environment variable R\_COMPILE\_AND\_INSTALL\_PACKAGES, with default `"interactive"` if unset. However, `install.packages` uses `"never"` unless a `make` program is found, consulting the environment variable MAKE. `mailer`: default emailing method used by `[create.post](../../utils/html/create.post)` and hence `[bug.report](../../utils/html/bug.report)` and `[help.request](../../utils/html/help.request)`. `menu.graphics`: Logical: should graphical menus be used if available? Defaults to `TRUE`. Currently applies to `[select.list](../../utils/html/select.list)`, `[chooseCRANmirror](../../utils/html/choosecranmirror)`, `[setRepositories](../../utils/html/setrepositories)` and to select from multiple (text) help files in `[help](../../utils/html/help)`. `Ncpus`: an integer *n >= 1*, used in `[install.packages](../../utils/html/install.packages)` as default for the number of cpus to use in a potentially parallel installation, as `Ncpus = getOption("Ncpus", 1L)`, i.e., when unset is equivalent to a setting of 1. `pkgType`: The default type of packages to be downloaded and installed – see `[install.packages](../../utils/html/install.packages)`. Possible values are platform-dependent: on Windows, `"win.binary"`, `"source"` and `"both"` (the default); on Unix-alikes, `"source"` (the default except under a CRAN macOS build), `"mac.binary"` and `"both"` (the default for CRAN macOS builds). (`"mac.binary.el-capitan"`, `"mac.binary.mavericks"`, `"mac.binary.leopard"` and `"mac.binary.universal"` are no longer in use.) Value `"binary"` is a synonym for the native binary type (if there is one); `"both"` is used by `[install.packages](../../utils/html/install.packages)` to choose between source and binary installs. `repos`: URLs of the repositories for use by `[update.packages](../../utils/html/update.packages)`.
Defaults to `c(CRAN="@CRAN@")`, a value that causes some utilities to prompt for a CRAN mirror. To avoid this do set the CRAN mirror, by something like `local({r <- getOption("repos"); r["CRAN"] <- "http://my.local.cran"; options(repos = r)})`. Note that you can add more repositories (Bioconductor, R-Forge, Rforge.net ...) using `[setRepositories](../../utils/html/setrepositories)`. `SweaveHooks`, `SweaveSyntax`: see `[Sweave](../../utils/html/sweave)`. `unzip`: a character string used by `[unzip](../../utils/html/unzip)`: the path of the external program `unzip` or `"internal"`. Defaults (platform dependently) on unix-alikes to the value of R\_UNZIPCMD, which is set in ‘etc/Renviron’ to the path of the `unzip` command found during configuration and otherwise to `""`. on Windows to `"internal"` when the internal unzip code is used. ### Options set in package parallel These will be set when package parallel (or its namespace) is loaded if not already set. `mc.cores`: a integer giving the maximum allowed number of *additional* **R** processes allowed to be run in parallel to the current **R** process. Defaults to the setting of the environment variable MC\_CORES if set. Most applications which use this assume a limit of `2` if it is unset. ### Options used on Unix only `dvipscmd`: character string giving a command to be used in the (deprecated) off-line printing of help pages *via* PostScript. Defaults to `"dvips"`. ### Options used on Windows only `warn.FPU`: logical, by default undefined. If true, a <warning> is produced whenever [dyn.load](dynload) repairs the control word damaged by a buggy DLL. ### Note For compatibility with S there is a visible object `.Options` whose value is a pairlist containing the current `options()` (in no particular order). Assigning to it will make a local copy and not change the original. (*Using* it however is faster than calling `options()`). An option set to `NULL` is indistinguishable from a non existing option. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### Examples ``` op <- options(); utils::str(op) # op is a named list getOption("width") == options()$width # the latter needs more memory options(digits = 15) pi # set the editor, and save previous value old.o <- options(editor = "nedit") old.o options(check.bounds = TRUE, warn = 1) x <- NULL; x[4] <- "yes" # gives a warning options(digits = 5) print(1e5) options(scipen = 3); print(1e5) options(op) # reset (all) initial options options("digits") ## Not run: ## set contrast handling to be like S options(contrasts = c("contr.helmert", "contr.poly")) ## End(Not run) ## Not run: ## on error, terminate the R session with error status 66 options(error = quote(q("no", status = 66, runLast = FALSE))) stop("test it") ## End(Not run) ## Not run: ## Set error actions for debugging: ## enter browser on error, see ?recover: options(error = recover) ## allows to call debugger() afterwards, see ?debugger: options(error = dump.frames) ## A possible setting for non-interactive sessions options(error = quote({dump.frames(to.file = TRUE); q()})) ## End(Not run) # Compare the two ways to get an option and use it # acconting for the possibility it might not be set. if(as.logical(getOption("performCleanp", TRUE))) cat("do cleanup\n") ## Not run: # a clumsier way of expressing the above w/o the default. tmp <- getOption("performCleanup") if(is.null(tmp)) tmp <- TRUE if(tmp) cat("do cleanup\n") ## End(Not run) ```
r None `locales` Query or Set Aspects of the Locale --------------------------------------------- ### Description Get details of or set aspects of the locale for the **R** process. ### Usage ``` Sys.getlocale(category = "LC_ALL") Sys.setlocale(category = "LC_ALL", locale = "") ``` ### Arguments | | | | --- | --- | | `category` | character string. The following categories should always be supported: `"LC_ALL"`, `"LC_COLLATE"`, `"LC_CTYPE"`, `"LC_MONETARY"`, `"LC_NUMERIC"` and `"LC_TIME"`. Some systems (not Windows) will also support `"LC_MESSAGES"`, `"LC_PAPER"` and `"LC_MEASUREMENT"`. | | `locale` | character string. A valid locale name on the system in use. Normally `""` (the default) will pick up the default locale for the system. | ### Details The locale describes aspects of the internationalization of a program. Initially most aspects of the locale of **R** are set to `"C"` (which is the default for the C language and reflects North-American usage – also known as `"POSIX"`). **R** sets `"LC_CTYPE"` and `"LC_COLLATE"`, which allow the use of a different character set and alphabetic comparisons in that character set (including the use of `<sort>`), `"LC_MONETARY"` (for use by `[Sys.localeconv](sys.localeconv)`) and `"LC_TIME"` may affect the behaviour of `[as.POSIXlt](as.posixlt)` and `<strptime>` and functions which use them (but not `<date>`). The first seven categories described here are those specified by POSIX. `"LC_MESSAGES"` will be `"C"` on systems that do not support message translation, and is not supported on Windows. Trying to use an unsupported category is an error for `Sys.setlocale`. Note that setting category `"LC_ALL"` sets only categories `"LC_COLLATE"`, `"LC_CTYPE"`, `"LC_MONETARY"` and `"LC_TIME"`. Attempts to set an invalid locale are ignored. There may or may not be a warning, depending on the OS. Attempts to change the character set (by `Sys.setlocale("LC_CTYPE", )`, if that implies a different character set) during a session may not work and are likely to lead to some confusion. Note that the LANGUAGE environment variable has precedence over `"LC_MESSAGES"` in selecting the language for message translation on most **R** platforms. On platforms where ICU is used for collation the locale used for collation can be reset by `[icuSetCollate](icusetcollate)`. Except on Windows, the initial setting is taken from the `"LC_COLLATE"` category, and it is reset when this is changed by a call to `Sys.setlocale`. ### Value A character string of length one describing the locale in use (after setting for `Sys.setlocale`), or an empty character string if the current locale settings are invalid or `NULL` if locale information is unavailable. For `category = "LC_ALL"` the details of the string are system-specific: it might be a single locale name or a set of locale names separated by `"/"` (Solaris, macOS) or `";"` (Windows, Linux). For portability, it is best to query categories individually: it is not necessarily the case that the result of `foo <- Sys.getlocale()` can be used in `Sys.setlocale("LC_ALL", locale = foo)`. ### Available locales On most Unix-alikes the POSIX shell command `locale -a` will list the ‘available public’ locales. What that means is platform-dependent. On recent Linuxen this may mean ‘available to be installed’ as on some RPM-based systems the locale data is in separate RPMs. On Debian/Ubuntu the set of available locales is managed by OS-specific facilities such as `locale-gen` and `locale -a` lists those currently enabled. 
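The same listing can be inspected from within **R** on a Unix-alike (a minimal sketch; the exact output is platform-dependent):

```
## list the locales the OS reports as available (Unix-alikes only):
if (.Platform$OS.type == "unix")
    head(system("locale -a", intern = TRUE))
```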
For Windows, Microsoft moves its documentation frequently, so a Web search is the best way to find current information. ### Warning Setting `"LC_NUMERIC"` to any value other than `"C"` may cause **R** to function anomalously, so gives a warning. Input conversions in **R** itself are unaffected, but the reading and writing of ASCII `<save>` files will be, as may packages which do their own input/output. Setting it temporarily on a Unix-alike to produce graphical or text output may work well enough, but `<options>(OutDec)` is often preferable. Almost all the output routines used by **R** itself under Windows ignore the setting of `"LC_NUMERIC"` since they make use of the Trio library which is not internationalized. ### Note Changing the values of locale categories whilst **R** is running ought to be noticed by the OS services, and usually is, but exceptions have been seen (usually in collation services). Do not use the value of `Sys.getlocale("LC_CTYPE")` to attempt to find the character set – for example UTF-8 locales can have suffix .UTF-8 or .utf8 (more common on Linux than UTF-8) or none (as on macOS) and Latin-9 locales can have suffix ISO8859-15, iso885915, iso885915@euro or ISO8859-15@euro. Use `<l10n_info>` instead. ### See Also `<strptime>` for uses of `category = "LC_TIME"`. `[Sys.localeconv](sys.localeconv)` for details of numerical and monetary representations. `<l10n_info>` gives some summary facts about the locale and its encoding (including if it is UTF-8). The ‘R Installation and Administration’ manual for background on locales and how to find out locale names on your system. ### Examples ``` Sys.getlocale() Sys.getlocale("LC_TIME") ## Not run: Sys.setlocale("LC_TIME", "de") # Solaris: details are OS-dependent Sys.setlocale("LC_TIME", "de_DE") # Many Unix-alikes Sys.setlocale("LC_TIME", "de_DE.UTF-8") # Linux, macOS, other Unix-alikes Sys.setlocale("LC_TIME", "de_DE.utf8") # some Linux versions Sys.setlocale("LC_TIME", "German") # Windows ## End(Not run) Sys.getlocale("LC_PAPER") # may or may not be set ## Not run: Sys.setlocale("LC_COLLATE", "C") # turn off locale-specific sorting, # usually (but not on all platforms) ## End(Not run) ``` r None `browserText` Functions to Retrieve Values Supplied by Calls to the Browser ---------------------------------------------------------------------------- ### Description A call to browser can provide context by supplying either a text argument or a condition argument. These functions can be used to retrieve either of these arguments. ### Usage ``` browserText(n = 1) browserCondition(n = 1) browserSetDebug(n = 1) ``` ### Arguments | | | | --- | --- | | `n` | The number of contexts to skip over; it must be non-negative. | ### Details Each call to `browser` can supply either a text string or a condition. The functions `browserText` and `browserCondition` provide ways to retrieve those values. Since there can be multiple browser contexts active at any time, we also support retrieving values from the different contexts. The innermost (most recently initiated) browser context is numbered 1: other contexts are numbered sequentially. `browserSetDebug` provides a mechanism for initiating the browser in one of the calling functions. See `[sys.frame](sys.parent)` for a more complete discussion of the calling stack. To use `browserSetDebug`, you select some calling function, determine how far back it is in the call stack, and call `browserSetDebug` with `n` set to that value.
Then, by typing `c` at the browser prompt you will cause evaluation to continue, and provided there are no intervening calls to browser or other interrupts, control will halt again once evaluation has returned to the closure specified. This is similar to the up functionality in gdb or the "step out" functionality in other debuggers. ### Value `browserText` returns the text, while `browserCondition` returns the condition from the specified browser context. `browserSetDebug` returns NULL, invisibly. ### Note It may be of interest to allow for querying further up the set of browser contexts and this functionality may be added at a later date. ### Author(s) R. Gentleman ### See Also `<browser>` r None `capabilities` Report Capabilities of this Build of R ------------------------------------------------------ ### Description Report on the optional features which have been compiled into this build of **R**. ### Usage ``` capabilities(what = NULL, Xchk = any(nas %in% c("X11", "jpeg", "png", "tiff"))) ``` ### Arguments | | | | --- | --- | | `what` | character vector or `NULL`, specifying required components. `NULL` implies that all are required. | | `Xchk` | `<logical>` with a smart default, indicating if X11-related capabilities should be fully checked, notably on macOS. If set to false, may avoid a warning “No protocol specified” and e.g., the "X11" capability may be returned as `NA`. | ### Value A named logical vector. Current components are | | | | --- | --- | | `jpeg` | is the `[jpeg](../../grdevices/html/png)` function operational? | | `png` | is the `[png](../../grdevices/html/png)` function operational? | | `tiff` | is the `[tiff](../../grdevices/html/png)` function operational? | | `tcltk` | is the tcltk package operational? Note that to make use of Tk you will almost always need to check that `"X11"` is also available. | | `X11` | are the `[X11](../../grdevices/html/x11)` graphics device and the X11-based data editor available? This loads the X11 module if not already loaded, and checks that the default display can be contacted unless a `X11` device has already been used. | | `aqua` | is the `[quartz](../../grdevices/html/quartz)` function operational? Only on some macOS builds, including CRAN binary distributions of **R**. Note that this is distinct from `.Platform$GUI == "AQUA"`, which is true only when using the Mac `R.app` GUI console. | | | | | --- | --- | | `http/ftp` | does the internal method for `[url](connections)` and `[download.file](../../utils/html/download.file)` support http:// and ftp:// URLs? Always `TRUE` as from **R** 3.3.0. | | `sockets` | are `[make.socket](../../utils/html/make.socket)` and related functions available? Always `TRUE` as from **R** 3.3.0. | | `libxml` | is there support for integrating `libxml` with the **R** event loop? Always `TRUE` as from **R** 3.3.0. | | `fifo` | are FIFO <connections> supported? | | `cledit` | is command-line editing available in the current **R** session? This is false in non-interactive sessions. It will be true for the command-line interface if `readline` support has been compiled in and --no-readline was *not* used when **R** was invoked. (If --interactive was used, command-line editing will not actually be available.) | | `iconv` | is internationalization conversion via `<iconv>` supported? Always true in current **R**. | | `NLS` | is there Natural Language Support (for message translations)? | | `Rprof` | is there support for `[Rprof](../../utils/html/rprof)()` profiling? 
This is true if **R** was configured (before compilation) with default settings which include `--enable-R-profiling`. | | `profmem` | is there support for memory profiling? See `<tracemem>`. | | `cairo` | is there support for the `[svg](../../grdevices/html/cairo)`, `[cairo\_pdf](../../grdevices/html/cairo)` and `[cairo\_ps](../../grdevices/html/cairo)` devices, and for `type = "cairo"` in the `[bmp](../../grdevices/html/png)`, `[jpeg](../../grdevices/html/png)`, `[png](../../grdevices/html/png)` and `[tiff](../../grdevices/html/png)` devices? Prior to **R** 4.1.0 this also indicated Cairo support in the `[X11](../../grdevices/html/x11)` device, but it is now possible to build **R** with Cairo support for the bitmap devices without support for the `X11` device (usually when that is not supported at all). | | `ICU` | is ICU available for collation? See the help on [Comparison](comparison) and `[icuSetCollate](icusetcollate)`: it is never used for a C locale. | | `long.double` | does this build use a `C` `long double` type which is longer than `double`? Some platforms do not have such a type, and on others its use can be suppressed by the configure option --disable-long-double. Although not guaranteed, it is a reasonable assumption that if present long doubles will have at least as much range and accuracy as the ISO/IEC 60559 80-bit ‘extended precision’ format. Since **R** 4.0.0 `[.Machine](zmachine)` gives information on the long-double type (if present). | | `libcurl` | is `libcurl` available in this build? Used by function `[curlGetHeaders](curlgetheaders)` and optionally by `[download.file](../../utils/html/download.file)` and `[url](connections)`. As from **R** 3.3.0 always true for Unix-alikes, and true for CRAN Windows builds. | ### Note to macOS users Capabilities `"jpeg"`, `"png"` and `"tiff"` refer to the X11-based versions of these devices. If `capabilities("aqua")` is true, then these devices with `type = "quartz"` will be available, and out-of-the-box will be the default type. Thus for example the `[tiff](../../grdevices/html/png)` device will be available if `capabilities("aqua") || capabilities("tiff")` if the defaults are unchanged. ### See Also `[.Platform](platform)`, `[extSoftVersion](extsoftversion)`, and `[grSoftVersion](../../grdevices/html/grsoftversion)` (and links there) for availability of capabilities *external* to **R** but used from **R** functions. ### Examples ``` capabilities() if(!capabilities("ICU")) warning("ICU is not available") ## Does not call the internal X11-checking function: capabilities(Xchk = FALSE) ## See also the examples for 'connections'. ``` r None `seq.POSIXt` Generate Regular Sequences of Times ------------------------------------------------- ### Description The method for `<seq>` for date-time classes. ### Usage ``` ## S3 method for class 'POSIXt' seq(from, to, by, length.out = NULL, along.with = NULL, ...) ``` ### Arguments | | | | --- | --- | | `from` | starting date. Required. | | `to` | end date. Optional. | | `by` | increment of the sequence. Optional. See ‘Details’. | | `length.out` | integer, optional. Desired length of the sequence. | | `along.with` | take the length from the length of this argument. | | `...` | arguments passed to or from other methods. | ### Details `by` can be specified in several ways. * A number, taken to be in seconds. * An object of class `<difftime>`. * A character string, containing one of `"sec"`, `"min"`, `"hour"`, `"day"`, `"DSTday"`, `"week"`, `"month"`, `"quarter"` or `"year"`.
This can optionally be preceded by a (positive or negative) integer and a space, or followed by `"s"`. The difference between `"day"` and `"DSTday"` is that the former ignores changes to/from daylight savings time and the latter takes the same clock time each day. `"week"` ignores DST (it is a period of 168 hours), but `"7 DSTdays"` can be used as an alternative. `"month"` and `"year"` allow for DST. The [time zone](timezones) of the result is taken from `from`: remember that GMT means UTC (and not the time zone of Greenwich, England) and so does not have daylight savings time. Using `"month"` first advances the month without changing the day: if this results in an invalid day of the month, it is counted forward into the next month: see the examples. ### Value A vector of class `"POSIXct"`. ### See Also `[DateTimeClasses](datetimeclasses)` ### Examples ``` ## first days of years seq(ISOdate(1910,1,1), ISOdate(1999,1,1), "years") ## by month seq(ISOdate(2000,1,1), by = "month", length.out = 12) seq(ISOdate(2000,1,31), by = "month", length.out = 4) ## quarters seq(ISOdate(1990,1,1), ISOdate(2000,1,1), by = "quarter") # or "3 months" ## days vs DSTdays: use c() to lose the time zone. seq(c(ISOdate(2000,3,20)), by = "day", length.out = 10) seq(c(ISOdate(2000,3,20)), by = "DSTday", length.out = 10) seq(c(ISOdate(2000,3,20)), by = "7 DSTdays", length.out = 4) ``` r None `lengths` Lengths of List or Vector Elements --------------------------------------------- ### Description Get the length of each element of a `<list>` or atomic vector (`[is.atomic](is.recursive)`) as an integer or numeric vector. ### Usage ``` lengths(x, use.names = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | a `<list>`, list-like such as an `<expression>` or an atomic vector (for which the result is trivial). | | `use.names` | logical indicating if the result should inherit the `<names>` from `x`. | ### Details This function loops over `x` and returns a compatible vector containing the length of each element in `x`. Effectively, `length(x[[i]])` is called for all `i`, so any methods on `length` are considered. `lengths` is generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). ### Value A non-negative `<integer>` of length `length(x)`, except when any element has a length of more than *2^31 - 1* elements, when it returns a double vector. When `use.names` is true, the names are taken from the names on `x`, if any. ### Note One raison d'être of `lengths(x)` is its use as a more efficient version of `sapply(x, length)` and similar `*apply` calls to `<length>`. This is the reason why `x` may be an atomic vector, even though `lengths(x)` is trivial in that case. ### See Also `<length>` for getting the length of any **R** object.
### Examples ``` require(stats) ## summarize by month l <- split(airquality$Ozone, airquality$Month) avgOz <- lapply(l, mean, na.rm=TRUE) ## merge result airquality$avgOz <- rep(unlist(avgOz, use.names=FALSE), lengths(l)) ## this is safer and cleaner, but can be slower airquality$avgOz <- unsplit(avgOz, airquality$Month) ## should always be true, except when a length does not fit in 32 bits stopifnot(identical(lengths(l), vapply(l, length, integer(1L)))) ## empty lists are not a problem x <- list() stopifnot(identical(lengths(x), integer())) ## nor are "list-like" expressions: lengths(expression(u, v, 1+ 0:9)) ## and we should dispatch to length methods f <- c(rep(1, 3), rep(2, 6), 3) dates <- split(as.POSIXlt(Sys.time() + 1:10), f) stopifnot(identical(lengths(dates), vapply(dates, length, integer(1L)))) ``` r None `search` Give Search Path for R Objects ---------------------------------------- ### Description Gives a list of `<attach>`ed *packages* (see `<library>`), and **R** objects, usually `<data.frame>s`. ### Usage ``` search() searchpaths() ``` ### Value A character vector, starting with `"[.GlobalEnv](environment)"`, and ending with `"package:base"`, which is **R**'s base package and is always required. `searchpaths` gives a similar character vector, with the entries for packages being the path to the package used to load the code. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (`search`.) Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. (`searchpaths`.) ### See Also `[.packages](zpackages)` to list just the packages on search path. `[loadedNamespaces](ns-load)` to list loaded namespaces. `<attach>` and `<detach>` to change the search path, `[objects](ls)` to find **R** objects in there. ### Examples ``` search() searchpaths() ``` r None `asplit` Split Array/Matrix By Its Margins ------------------------------------------- ### Description Split an array or matrix by its margins. ### Usage ``` asplit(x, MARGIN) ``` ### Arguments | | | | --- | --- | | `x` | an array, including a matrix. | | `MARGIN` | a vector giving the margins to split by. E.g., for a matrix `1` indicates rows, `2` indicates columns, `c(1, 2)` indicates rows and columns. Where `x` has named dimnames, it can be a character vector selecting dimension names. | ### Details The values of the splits can also be obtained (less efficiently) by `split(x, slice.index(x, MARGIN))`. `<apply>` always simplifies common length results, so attempting to split via `apply(x, MARGIN, identity)` does not work (as it simply gives `x`). By chaining `asplit` with `<lapply>` or `[vapply](lapply)`, one can obtain variants of `apply` which do not auto-simplify. ### Value A “list array” with dimension *dv* and each element an array of dimension *de* and dimnames preserved as available, where *dv* and *de* are, respectively, the dimensions of `x` included and not included in `MARGIN`. ### Examples ``` ## A 3-dimensional array of dimension 2 x 3 x 4: d <- 2 : 4 x <- array(seq_len(prod(d)), d) x ## Splitting by margin 2 gives a 1-d list array of length 3 ## consisting of 2 x 4 arrays: asplit(x, 2) ## Splitting by margins 1 and 2 gives a 2 x 3 list array ## consisting of 1-d arrays of length 4: asplit(x, c(1, 2)) ## Compare to split(x, slice.index(x, c(1, 2))) ## A 2 x 3 matrix: (x <- matrix(1 : 6, 2, 3)) ## To split x by its rows, one can use asplit(x, 1) ## or less efficiently split(x, slice.index(x, 1)) split(x, row(x)) ```
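Following up the remark in the `asplit` Details about non-simplifying variants of `apply` (a minimal sketch):

```
m <- matrix(1:6, 2, 3)
## apply(m, 1, range) would simplify the result to a matrix;
## chaining asplit() with lapply() keeps one list element per row:
lapply(asplit(m, 1), range)
```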
r None `write` Write Data to a File ----------------------------- ### Description The data (usually a matrix) `x` are written to file `file`. If `x` is a two-dimensional matrix you need to transpose it to get the columns in `file` the same as those in the internal representation. ### Usage ``` write(x, file = "data", ncolumns = if(is.character(x)) 1 else 5, append = FALSE, sep = " ") ``` ### Arguments | | | | --- | --- | | `x` | the data to be written out, usually an atomic vector. | | `file` | a `[connection](connections)`, or a character string naming the file to write to. If `""`, print to the standard output connection. When `[.Platform](platform)$OS.type != "windows"`, and it is `"|cmd"`, the output is piped to the command given by ‘cmd’. | | `ncolumns` | the number of columns to write the data in. | | `append` | if `TRUE` the data `x` are appended to the connection. | | `sep` | a string used to separate columns. Using `sep = "\t"` gives tab delimited output; default is `" "`. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `write` is a wrapper for `<cat>`, which gives further details on the format used. `<save>` for writing any **R** objects, `[write.table](../../utils/html/write.table)` for data frames, and `<scan>` for reading data. ### Examples ``` # create a 2 by 5 matrix x <- matrix(1:10, ncol = 5) fil <- tempfile("data") # the file data contains x, two rows, five cols # 1 3 5 7 9 will form the first row write(t(x), fil) if(interactive()) file.show(fil) unlink(fil) # tidy up # Writing to the "console" 'tab-delimited' # two rows, five cols but the first row is 1 2 3 4 5 write(x, "", sep = "\t") ``` r None `pipeOp` Forward Pipe Operator ------------------------------- ### Description Pipe a value into a call expression or a function expression. ### Usage ``` lhs |> rhs ``` ### Arguments | | | | --- | --- | | `lhs` | expression producing a value. | | `rhs` | a call expression or an expression of the form `symbol => call`. | ### Details A pipe expression passes, or pipes, the result of the left-hand side expression `lhs` to the right-hand side expression `rhs`. If the `rhs` expression is a call, then the `lhs` is inserted as the first argument in the call. So `x |> f(y)` is interpreted as `f(x, y)`. To avoid ambiguities, functions in `rhs` calls may not be syntactically special, such as `+` or `if`. Pipe notation allows a nested sequence of calls to be written in a way that may make the sequence of processing steps easier to follow. Currently, pipe operations are implemented as syntax transformations. So an expression written as `x |> f(y)` is parsed as `f(x, y)`. It is worth emphasizing that while the code in a pipeline is written sequentially, regular R semantics for evaluation apply and so piped expressions will be evaluated only when first used in the `rhs` expression. ### Value Returns the result of evaluating the transformed expression. ### Background The forward pipe operator is motivated by the pipe introduced in the [magrittr](https://CRAN.R-project.org/package=magrittr) package, but is more streamlined. It is similar to the pipe or pipeline operators introduced in other languages, including F#, Julia, and JavaScript. 
### Examples ``` # simple uses: mtcars |> head() # same as head(mtcars) mtcars |> head(2) # same as head(mtcars, 2) mtcars |> subset(cyl == 4) |> nrow() # same as nrow(subset(mtcars, cyl == 4)) # passing the lhs into an argument other than the first: mtcars |> subset(cyl == 4) |> (function(d) lm(mpg ~ disp, data = d))() # the pipe operator is implemented as a syntax transformation: quote(mtcars |> subset(cyl == 4) |> nrow()) # regular R evaluation semantics apply stop() |> (function(...) {})() # stop() is not used on RHS so is not evaluated ``` r None `source` Read R Code from a File, a Connection or Expressions -------------------------------------------------------------- ### Description `source` causes **R** to accept its input from the named file or URL or connection or expressions directly. Input is read and `<parse>`d from that file until the end of the file is reached, then the parsed expressions are evaluated sequentially in the chosen environment. `withAutoprint(exprs)` is a wrapper for `source(exprs = exprs, ..)` with different defaults. Its main purpose is to evaluate and auto-print expressions as if in a toplevel context, e.g., as in the **R** console. ### Usage ``` source(file, local = FALSE, echo = verbose, print.eval = echo, exprs, spaced = use_file, verbose = getOption("verbose"), prompt.echo = getOption("prompt"), max.deparse.length = 150, width.cutoff = 60L, deparseCtrl = "showAttributes", chdir = FALSE, encoding = getOption("encoding"), continue.echo = getOption("continue"), skip.echo = 0, keep.source = getOption("keep.source")) withAutoprint(exprs, evaluated = FALSE, local = parent.frame(), print. = TRUE, echo = TRUE, max.deparse.length = Inf, width.cutoff = max(20, getOption("width")), deparseCtrl = c("keepInteger", "showAttributes", "keepNA"), ...) ``` ### Arguments | | | | --- | --- | | `file` | a [connection](connections) or a character string giving the pathname of the file or URL to read from. `""` indicates the connection `[stdin](showconnections)()`. | | `local` | `TRUE`, `FALSE` or an environment, determining where the parsed expressions are evaluated. `FALSE` (the default) corresponds to the user's workspace (the global environment) and `TRUE` to the environment from which `source` is called. | | `echo` | logical; if `TRUE`, each expression is printed after parsing, before evaluation. | | `print.eval, print.` | logical; if `TRUE`, the result of `eval(i)` is printed for each expression `i`; defaults to the value of `echo`. | | `exprs` | for `source()` and `withAutoprint(*, evaluated=TRUE)`: *instead* of specifying `file`, an `<expression>`, `<call>`, or `<list>` of `<call>`'s, but *not* an unevaluated “expression”. for `withAutoprint()` (with default `evaluated=FALSE`): one or more unevaluated “expressions”. | | `evaluated` | logical indicating that `exprs` is passed to `source(exprs= *)` and hence must be evaluated, i.e., a formal `expression`, `call` or `list` of calls. | | `spaced` | logical indicating if a newline (hence an empty line) should be printed before each expression (when `echo = TRUE`). | | `verbose` | if `TRUE`, more diagnostics (than just `echo = TRUE`) are printed during parsing and evaluation of input, including extra info for **each** expression. | | `prompt.echo` | character; gives the prompt to be used if `echo = TRUE`. | | `max.deparse.length` | integer; is used only if `echo` is `TRUE` and gives the maximal number of characters output for the deparse of a single expression.
| | `width.cutoff` | integer, passed to `<deparse>()` which is used (only) when there are no source references. | | `deparseCtrl` | `<character>` vector, passed as `control` to `<deparse>()`, see also `[.deparseOpts](deparseopts)`. In **R** version <= 3.3.x, this was hardcoded to `"showAttributes"`, which is the default currently; `deparseCtrl = "all"` may be preferable, when strict back compatibility is not of importance. | | `chdir` | logical; if `TRUE` and `file` is a pathname, the **R** working directory is temporarily changed to the directory containing `file` for evaluating. | | `encoding` | character vector. The encoding(s) to be assumed when `file` is a character string: see `[file](connections)`. A possible value is `"unknown"` when the encoding is guessed: see the ‘Encodings’ section. | | `continue.echo` | character; gives the prompt to use on continuation lines if `echo = TRUE`. | | `skip.echo` | integer; how many comment lines at the start of the file to skip if `echo = TRUE`. | | `keep.source` | logical: should the source formatting be retained when echoing expressions, if possible? | | `...` | (for `withAutoprint()`:) further (non-file related) arguments to be passed to `source(.)`. | ### Details Note that running code via `source` differs in a few respects from entering it at the **R** command line. Since expressions are not executed at the top level, auto-printing is not done. So you will need to include explicit `print` calls for things you want to be printed (and remember that this includes plotting by [lattice](https://CRAN.R-project.org/package=lattice), FAQ Q7.22). Since the complete file is parsed before any of it is run, syntax errors result in none of the code being run. If an error occurs in running a syntactically correct script, anything assigned into the workspace by code that has been run will be kept (just as from the command line), but diagnostic information such as `<traceback>()` will contain additional calls to `[withVisible](withvisible)`. All versions of **R** accept input from a connection with end of line marked by LF (as used on Unix), CRLF (as used on DOS/Windows) or CR (as used on classic Mac OS) and map this to newline. The final line can be incomplete, that is missing the final end-of-line marker. If `keep.source` is true (the default in interactive use), the source of functions is kept so they can be listed exactly as input. Unlike input from a console, lines in the file or on a connection can contain an unlimited number of characters. When `skip.echo > 0`, that many comment lines at the start of the file will not be echoed. This does not affect the execution of the code at all. If there are executable lines within the first `skip.echo` lines, echoing will start with the first of them. If `echo` is true and a deparsed expression exceeds `max.deparse.length`, that many characters are output followed by `.... [TRUNCATED]` . ### Encodings By default the input is read and parsed in the current encoding of the **R** session. This is usually what is required, but occasionally re-encoding is needed, e.g. if a file from a UTF-8-using system is to be read on Windows (or *vice versa*). The rest of this paragraph applies if `file` is an actual filename or URL (and not `""` nor a connection). If `encoding = "unknown"`, an attempt is made to guess the encoding: the result of `[localeToCharset](../../utils/html/localetocharset)()` is used as a guide. 
If `encoding` has two or more elements, they are tried in turn until the file/URL can be read without error in the trial encoding. If an actual `encoding` is specified (rather than the default or `"unknown"`) in a Latin-1 or UTF-8 locale then character strings in the result will be translated to the current encoding and marked as such (see `[Encoding](encoding)`). If `file` is a connection (including one specified by `""`), it is not possible to re-encode the input inside `source`, and so the `encoding` argument is just used to mark character strings in the parsed input in Latin-1 and UTF-8 locales: see `<parse>`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[demo](../../utils/html/demo)` which uses `source`; `<eval>`, `<parse>` and `<scan>`; `<options>("keep.source")`. `<sys.source>` which is a streamlined version to source a file into an environment. ‘The R Language Definition’ for a discussion of source directives. ### Examples ``` someCond <- 7 > 6 ## want an if-clause to behave "as top level" wrt auto-printing : ## (all should look "as if on top level", e.g. non-assignments should print:) if(someCond) withAutoprint({ x <- 1:12 x-1 (y <- (x-5)^2) z <- y z - 10 }) ## If you want to source() a bunch of files, something like ## the following may be useful: sourceDir <- function(path, trace = TRUE, ...) { op <- options(); on.exit(options(op)) # to reset after each for (nm in list.files(path, pattern = "[.][RrSsQq]$")) { if(trace) cat(nm,":") source(file.path(path, nm), ...) if(trace) cat("\n") options(op) } } suppressWarnings( rm(x,y) ) # remove 'x' or 'y' from global env withAutoprint({ x <- 1:2; cat("x=",x,"\n"); y <- x^2 }) ## x and y now exist: stopifnot(identical(x, 1:2), identical(y, x^2)) withAutoprint({ formals(sourceDir); body(sourceDir) }, max.deparse.length = 20, verbose = TRUE) ``` r None `readline` Read a Line from the Terminal ----------------------------------------- ### Description `readline` reads a line from the terminal (in interactive use). ### Usage ``` readline(prompt = "") ``` ### Arguments | | | | --- | --- | | `prompt` | the string printed when prompting the user for input. Should usually end with a space `" "`. | ### Details The prompt string will be truncated to a maximum allowed length, normally 256 chars (but can be changed in the source code). This can only be used in an <interactive> session. ### Value A character vector of length one. Both leading and trailing spaces and tabs are stripped from the result. In non-<interactive> use the result is as if the response was RETURN and the value is `""`. ### See Also `[readLines](readlines)` for reading text lines from connections, including files. ### Examples ``` fun <- function() { ANSWER <- readline("Are you a satisfied R user? ") ## a better version would check the answer less cursorily, and ## perhaps re-prompt if (substr(ANSWER, 1, 1) == "n") cat("This is impossible. YOU LIED!\n") else cat("I knew it.\n") } if(interactive()) fun() ``` r None `toString` Convert an R Object to a Character String ----------------------------------------------------- ### Description This is a helper function for `<format>` to produce a single character string describing an **R** object. ### Usage ``` toString(x, ...) ## Default S3 method: toString(x, width = NULL, ...) ``` ### Arguments | | | | --- | --- | | `x` | The object to be converted. | | `width` | Suggestion for the maximum field width. Values of `NULL` or `0` indicate no maximum. 
The minimum value accepted is 6 and smaller values are taken as 6. | | `...` | Optional arguments passed to or from methods. | ### Details This is a generic function for which methods can be written: only the default method is described here. Most methods should honor the `width` argument to specify the maximum display width (as measured by `<nchar>(type = "width")`) of the result. The default method first converts `x` to character and then concatenates the elements separated by `", "`. If `width` is supplied and is not `NULL`, the default method returns the first `width - 4` characters of the result with `....` appended, if the full result would use more than `width` characters. ### Value A character vector of length 1 is returned. ### Author(s) Robert Gentleman ### See Also `<format>` ### Examples ``` x <- c("a", "b", "aaaaaaaaaaa") toString(x) toString(x, width = 8) ``` r None `forceAndCall` Call a Function with Some Arguments Forced ---------------------------------------------------------- ### Description Call a function with a specified number of leading arguments forced before the call if the function is a closure. ### Usage ``` forceAndCall(n, FUN, ...) ``` ### Arguments | | | | --- | --- | | `n` | number of leading arguments to force. | | `FUN` | function to call. | | `...` | arguments to `FUN`. | ### Details `forceAndCall` calls the function `FUN` with arguments specified in `...`. If the value of `FUN` is a closure then the first `n` arguments to the function are evaluated (i.e. their delayed evaluation promises are forced) before executing the function body. If the value of `FUN` is a primitive then the call `FUN(...)` is evaluated in the usual way. `forceAndCall` is intended to help define higher order functions like `<apply>` so that they behave more reasonably when the result returned by the function applied is a closure that captured its arguments. ### See Also `<force>`, `[promise](delayedassign)`, `[closure](function)`. r None `proc.time` Running Time of R ------------------------------ ### Description `proc.time` determines how much real and CPU time (in seconds) the currently running **R** process has already taken. ### Usage ``` proc.time() ``` ### Details `proc.time` returns five elements for backwards compatibility, but its `print` method prints a named vector of length 3. The first two entries are the total user and system CPU times of the current **R** process and any child processes on which it has waited, and the third entry is the ‘real’ elapsed time since the process was started. ### Value An object of class `"proc_time"` which is a numeric vector of length 5, containing the user, system, and total elapsed times for the currently running **R** process, and the cumulative sum of user and system times of any child processes spawned by it on which it has waited. (The `print` method uses the `summary` method to combine the child times with those of the main process.) The definition of ‘user’ and ‘system’ times is from your OS. Typically it is something like *The ‘user time’ is the CPU time charged for the execution of user instructions of the calling process. The ‘system time’ is the CPU time charged for execution by the system on behalf of the calling process.* Times of child processes are not available on Windows and will always be given as `NA`. The resolution of the times will be system-specific and on Unix-alikes times are rounded down to milliseconds. On modern systems they will be that accurate, but on older systems they might be accurate to 1/100 or 1/60 sec.
They are typically available to 10ms on Windows. This is a <primitive> function. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<system.time>` for timing an **R** expression, `<gc.time>` for how much of the time was spent in garbage collection. `[setTimeLimit](settimelimit)` to *limit* the CPU or elapsed time for the session or an expression. ### Examples ``` ## a way to time an R expression: system.time is preferred ptm <- proc.time() for (i in 1:50) mad(stats::runif(500)) proc.time() - ptm ``` r None `libPaths` Search Paths for Packages ------------------------------------- ### Description `.libPaths` gets/sets the library trees within which packages are looked for. ### Usage ``` .libPaths(new, include.site = TRUE) .Library .Library.site ``` ### Arguments | | | | --- | --- | | `new` | a character vector with the locations of **R** library trees. Tilde expansion (`<path.expand>`) is done, and if any element contains one of `*?[`, globbing is done where supported by the platform: see `[Sys.glob](sys.glob)`. | | `include.site` | a logical value indicating whether the value of `.Library.site` should be included in the new set of library tree locations. Defaulting to `TRUE`, it is ignored when `.libPaths` is called without the `new` argument. | ### Details `.Library` is a character string giving the location of the default library, the ‘library’ subdirectory of R\_HOME. `.Library.site` is a (possibly empty) character vector giving the locations of the site libraries, by default the ‘site-library’ subdirectory of R\_HOME (which may not exist). `.libPaths` is used for getting or setting the library trees that **R** knows about (and hence uses when looking for packages). If called with argument `new`, by default, the library search path is set to the existing directories in `unique(c(new, .Library.site, .Library))` and this is returned. If `include.site` is `FALSE` when the `new` argument is set, `.Library.site` is excluded from the new library search path. If called without the `new` argument, a character vector with the currently active library trees is returned. How paths `new` with a trailing slash are treated is OS-dependent. On a POSIX filesystem existing directories can usually be specified with a trailing slash: on Windows filepaths with a trailing slash (or backslash) are invalid and so will never be added to the library search path. The library search path is initialized at startup from the environment variable R\_LIBS (which should be a colon-separated list of directories at which **R** library trees are rooted) followed by those in environment variable R\_LIBS\_USER. Only directories which exist at the time will be included. By default R\_LIBS is unset, and R\_LIBS\_USER is set to directory ‘R/R.version$platform-library/x.y’ of the home directory (or ‘Library/R/arch/x.y/library’ for CRAN macOS builds), for **R** x.y.z. `.Library.site` can be set via the environment variable R\_LIBS\_SITE (as a non-empty colon-separated list of library trees). Both R\_LIBS\_USER and R\_LIBS\_SITE feature possible expansion of specifiers for **R** version specific information as part of the startup process. The possible conversion specifiers all start with a % and are followed by a single letter (use %% to obtain %), with currently available conversion specifications as follows: %V **R** version number including the patchlevel (e.g., 2.5.0). %v **R** version number excluding the patchlevel (e.g., 2.5). 
%p the platform for which **R** was built, the value of `[R.version](version)$platform`. %o the underlying operating system, the value of `[R.version](version)$os`. %a the architecture (CPU) **R** was built on/for, the value of `[R.version](version)$arch`. (See `<version>` for details on R version information.) Function `.libPaths` always uses the values of `.Library` and `.Library.site` in the base namespace. `.Library.site` can be set by the site in ‘Rprofile.site’, which should be followed by a call to `.libPaths(.libPaths())` to make use of the updated value. For consistency, the paths are always normalized by `[normalizePath](normalizepath)(winslash = "/")`. ### Value A character vector of file paths. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<library>` ### Examples ``` .libPaths() # all library trees R knows about ```
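The example above only queries the library search path; here is a minimal sketch of *setting* it as well, using a throw-away directory created purely for illustration:

```
## Prepend a temporary library tree, then restore the old path.
## (A sketch; 'mylib' is an illustrative throw-away directory.)
mylib <- tempfile("Rlib")
dir.create(mylib)
old <- .libPaths()
.libPaths(c(mylib, old)) # existing directories in 'new' come first
.libPaths()              # now starts with 'mylib'
.libPaths(old)           # restore the previous setting
unlink(mylib, recursive = TRUE)
```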
r None `solve` Solve a System of Equations ------------------------------------ ### Description This generic function solves the equation `a %*% x = b` for `x`, where `b` can be either a vector or a matrix. ### Usage ``` solve(a, b, ...) ## Default S3 method: solve(a, b, tol, LINPACK = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `a` | a square numeric or complex matrix containing the coefficients of the linear system. Logical matrices are coerced to numeric. | | `b` | a numeric or complex vector or matrix giving the right-hand side(s) of the linear system. If missing, `b` is taken to be an identity matrix and `solve` will return the inverse of `a`. | | `tol` | the tolerance for detecting linear dependencies in the columns of `a`. The default is `.Machine$double.eps`. Not currently used with complex matrices `a`. | | `LINPACK` | logical. Defunct and an error. | | `...` | further arguments passed to or from other methods | ### Details `a` or `b` can be complex, but this uses double complex arithmetic which might not be available on all platforms. The row and column names of the result are taken from the column names of `a` and of `b` respectively. If `b` is missing the column names of the result are the row names of `a`. No check is made that the column names of `a` and the row names of `b` are equal. For back-compatibility `a` can be a (real) QR decomposition, although `[qr.solve](qr)` should be called in that case. `[qr.solve](qr)` can handle non-square systems. Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code. ### Source The default method is an interface to the LAPACK routines `DGESV` and `ZGESV`. LAPACK is from <https://www.netlib.org/lapack/>. ### References Anderson, E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM. Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>. Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[solve.qr](qr)` for the `qr` method, `<chol2inv>` for inverting from the Choleski factor, `<backsolve>`, `[qr.solve](qr)`. ### Examples ``` hilbert <- function(n) { i <- 1:n; 1 / outer(i - 1, i, "+") } h8 <- hilbert(8); h8 sh8 <- solve(h8) round(sh8 %*% h8, 3) A <- hilbert(4) A[] <- as.complex(A) ## might not be supported on all platforms try(solve(A)) ``` r None `Bessel` Bessel Functions -------------------------- ### Description Bessel Functions of integer and fractional order, of first and second kind, *J(nu)* and *Y(nu)*, and Modified Bessel functions (of first and third kind), *I(nu)* and *K(nu)*. ### Usage ``` besselI(x, nu, expon.scaled = FALSE) besselK(x, nu, expon.scaled = FALSE) besselJ(x, nu) besselY(x, nu) ``` ### Arguments | | | | --- | --- | | `x` | numeric, *≥ 0*. | | `nu` | numeric; The *order* (maybe fractional and negative) of the corresponding Bessel function. | | `expon.scaled` | logical; if `TRUE`, the results are exponentially scaled in order to avoid overflow (*I(nu)*) or underflow (*K(nu)*), respectively. | ### Details If `expon.scaled = TRUE`, *exp(-x) I(x;nu)*, or *exp(x) K(x;nu)* are returned. For *nu < 0*, formulae 9.1.2 and 9.6.2 from Abramowitz & Stegun are applied (which is probably suboptimal), except for `besselK` which is symmetric in `nu`. The current algorithms will give warnings about accuracy loss for large arguments. In some cases, these warnings are exaggerated, and the precision is perfect.
For large `nu`, say in the order of millions, the current algorithms are rarely useful. ### Value Numeric vector with the (scaled, if `expon.scaled = TRUE`) values of the corresponding Bessel function. The length of the result is the maximum of the lengths of the parameters. All parameters are recycled to that length. ### Author(s) Original Fortran code: W. J. Cody, Argonne National Laboratory Translation to C and adaptation to **R**: Martin Maechler [[email protected]](mailto:[email protected]). ### Source The C code is a translation of Fortran routines from <https://www.netlib.org/specfun/ribesl>, ../rjbesl, etc. The four source code files for bessel[IJKY] each contain a paragraph “Acknowledgement” and “References”, a short summary of which is besselI based on (code) by David J. Sookne, see Sookne (1973)... Modifications... An earlier version was published in Cody (1983). besselJ as `besselI` besselK based on (code) by J. B. Campbell (1980)... Modifications... besselY draws heavily on Temme's Algol program for *Y*... and on Campbell's programs for *Y\_ν(x)* .... ... heavily modified. ### References Abramowitz, M. and Stegun, I. A. (1972). *Handbook of Mathematical Functions*. Dover, New York; Chapter 9: Bessel Functions of Integer Order. In order of “Source” citation above: Sookne, David J. (1973). Bessel Functions of Real Argument and Integer Order. *Journal of Research of the National Bureau of Standards*, **77B**, 125–132. doi: [10.6028/jres.077B.012](https://doi.org/10.6028/jres.077B.012). Cody, William J. (1983). Algorithm 597: Sequence of modified Bessel functions of the first kind. *ACM Transactions on Mathematical Software*, **9**(2), 242–245. doi: [10.1145/357456.357462](https://doi.org/10.1145/357456.357462). Campbell, J.B. (1980). On Temme's algorithm for the modified Bessel function of the third kind. *ACM Transactions on Mathematical Software*, **6**(4), 581–586. doi: [10.1145/355921.355928](https://doi.org/10.1145/355921.355928). Campbell, J.B. (1979). Bessel functions J\_nu(x) and Y\_nu(x) of float order and float argument. *Computer Physics Communications*, **18**, 133–142. doi: [10.1016/0010-4655(79)90030-4](https://doi.org/10.1016/0010-4655(79)90030-4). Temme, Nico M. (1976). On the numerical evaluation of the ordinary Bessel function of the second kind. *Journal of Computational Physics*, **21**, 343–350. doi: [10.1016/0021-9991(76)90032-2](https://doi.org/10.1016/0021-9991(76)90032-2). ### See Also Other special mathematical functions, such as `[gamma](special)`, *Γ(x)*, and `[beta](special)`, *B(x)*. 
### Examples ``` require(graphics) nus <- c(0:5, 10, 20) x <- seq(0, 4, length.out = 501) plot(x, x, ylim = c(0, 6), ylab = "", type = "n", main = "Bessel Functions I_nu(x)") for(nu in nus) lines(x, besselI(x, nu = nu), col = nu + 2) legend(0, 6, legend = paste("nu=", nus), col = nus + 2, lwd = 1) x <- seq(0, 40, length.out = 801); yl <- c(-.5, 1) plot(x, x, ylim = yl, ylab = "", type = "n", main = "Bessel Functions J_nu(x)") abline(h=0, v=0, lty=3) for(nu in nus) lines(x, besselJ(x, nu = nu), col = nu + 2) legend("topright", legend = paste("nu=", nus), col = nus + 2, lwd = 1, bty="n") ## Negative nu's -------------------------------------------------- xx <- 2:7 nu <- seq(-10, 9, length.out = 2001) ## --- I() --- --- --- --- matplot(nu, t(outer(xx, nu, besselI)), type = "l", ylim = c(-50, 200), main = expression(paste("Bessel ", I[nu](x), " for fixed ", x, ", as ", f(nu))), xlab = expression(nu)) abline(v = 0, col = "light gray", lty = 3) legend(5, 200, legend = paste("x=", xx), col=seq(xx), lty=1:5) ## --- J() --- --- --- --- bJ <- t(outer(xx, nu, besselJ)) matplot(nu, bJ, type = "l", ylim = c(-500, 200), xlab = quote(nu), ylab = quote(J[nu](x)), main = expression(paste("Bessel ", J[nu](x), " for fixed ", x))) abline(v = 0, col = "light gray", lty = 3) legend("topright", legend = paste("x=", xx), col=seq(xx), lty=1:5) ## ZOOM into right part: matplot(nu[nu > -2], bJ[nu > -2,], type = "l", xlab = quote(nu), ylab = quote(J[nu](x)), main = expression(paste("Bessel ", J[nu](x), " for fixed ", x))) abline(h=0, v = 0, col = "gray60", lty = 3) legend("topright", legend = paste("x=", xx), col=seq(xx), lty=1:5) ##--------------- x --> 0 ----------------------------- x0 <- 2^seq(-16, 5, length.out=256) plot(range(x0), c(1e-40, 1), log = "xy", xlab = "x", ylab = "", type = "n", main = "Bessel Functions J_nu(x) near 0\n log - log scale") ; axis(2, at=1) for(nu in sort(c(nus, nus+0.5))) lines(x0, besselJ(x0, nu = nu), col = nu + 2, lty= 1+ (nu%%1 > 0)) legend("right", legend = paste("nu=", paste(nus, nus+0.5, sep=", ")), col = nus + 2, lwd = 1, bty="n") x0 <- 2^seq(-10, 8, length.out=256) plot(range(x0), 10^c(-100, 80), log = "xy", xlab = "x", ylab = "", type = "n", main = "Bessel Functions K_nu(x) near 0\n log - log scale") ; axis(2, at=1) for(nu in sort(c(nus, nus+0.5))) lines(x0, besselK(x0, nu = nu), col = nu + 2, lty= 1+ (nu%%1 > 0)) legend("topright", legend = paste("nu=", paste(nus, nus + 0.5, sep = ", ")), col = nus + 2, lwd = 1, bty="n") x <- x[x > 0] plot(x, x, ylim = c(1e-18, 1e11), log = "y", ylab = "", type = "n", main = "Bessel Functions K_nu(x)"); axis(2, at=1) for(nu in nus) lines(x, besselK(x, nu = nu), col = nu + 2) legend(0, 1e-5, legend=paste("nu=", nus), col = nus + 2, lwd = 1) yl <- c(-1.6, .6) plot(x, x, ylim = yl, ylab = "", type = "n", main = "Bessel Functions Y_nu(x)") for(nu in nus){ xx <- x[x > .6*nu] lines(xx, besselY(xx, nu=nu), col = nu+2) } legend(25, -.5, legend = paste("nu=", nus), col = nus+2, lwd = 1) ## negative nu in bessel_Y -- was bogus for a long time curve(besselY(x, -0.1), 0, 10, ylim = c(-3,1), ylab = "") for(nu in c(seq(-0.2, -2, by = -0.1))) curve(besselY(x, nu), add = TRUE) title(expression(besselY(x, nu) * " " * {nu == list(-0.1, -0.2, ..., -2)})) ``` r None `deparse` Expression Deparsing ------------------------------- ### Description Turn unevaluated expressions into character strings. 
### Usage ``` deparse(expr, width.cutoff = 60L, backtick = mode(expr) %in% c("call", "expression", "(", "function"), control = c("keepNA", "keepInteger", "niceNames", "showAttributes"), nlines = -1L) deparse1(expr, collapse = " ", width.cutoff = 500L, ...) ``` ### Arguments | | | | --- | --- | | `expr` | any **R** expression. | | `width.cutoff` | integer in *[20, 500]* determining the cutoff (in bytes) at which line-breaking is tried. | | `backtick` | logical indicating whether symbolic names should be enclosed in backticks if they do not follow the standard syntax. | | `control` | character vector (or `NULL`) of deparsing options. See `[.deparseOpts](deparseopts)`. | | `nlines` | integer: the maximum number of lines to produce. Negative values indicate no limit. | | `collapse` | a string, passed to `<paste>()`. | | `...` | further arguments passed to `deparse()`. | ### Details These functions turn unevaluated expressions (where ‘expression’ is taken in a wider sense than the strict concept of a vector of `<mode>` and type (`<typeof>`) `"expression"` used in `<expression>`) into character strings (a kind of inverse to `<parse>`). A typical use of this is to create informative labels for data sets and plots. The example shows a simple use of this facility. It uses the functions `deparse` and `substitute` to create labels for a plot which are character string versions of the actual arguments to the function `myplot`. The default for the `backtick` option is not to quote single symbols but only composite expressions. This is a compromise to avoid breaking existing code. Using `control = c("all", "hexDigits")` comes closest to making `deparse()` an inverse of `parse()` (but we have not yet seen an example where `"all"`, now including `"digits17"`, would not have been as good). However, not all objects are deparse-able even with these options and a warning will be issued if the function recognizes that it is being asked to do the impossible. Unless `control` contains `"digits17"` or `"hexDigits"`, (or `"all"` or `"exact"` which include one of these), numeric and complex vectors are converted using 15 significant digits: see `[as.character](character)` for more details. `width.cutoff` is a lower bound for the line lengths: deparsing a line proceeds until at least `width.cutoff` *bytes* have been output and e.g. `arg = value` expressions will not be split across lines. `deparse1()` is a simple utility added in **R** 4.0.0 to ensure a string result (`<character>` vector of length one), typically used in name construction, as `deparse1(<substitute>(.))`. ### Note To avoid the risk of a source attribute out of sync with the actual function definition, the source attribute of a function will never be deparsed as an attribute. Deparsing internal structures may not be accurate: for example the graphics display list recorded by `[recordPlot](../../grdevices/html/recordplot)` is not intended to be deparsed and `.Internal` calls will be shown as primitive calls. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[.deparseOpts](deparseopts)` for available `control` settings; `<dput>()` and `<dump>()` for related functions using identical internal deparsing functionality. `<substitute>`, `<parse>`, `<expression>`. `Quotes` for quoting conventions, including backticks. 
### Examples ``` require(stats); require(graphics) deparse(args(lm)) deparse(args(lm), width.cutoff = 500) myplot <- function(x, y) { plot(x, y, xlab = deparse1(substitute(x)), ylab = deparse1(substitute(y))) } e <- quote(`foo bar`) deparse(e) deparse(e, backtick = TRUE) e <- quote(`foo bar`+1) deparse(e) deparse(e, control = "all") # wraps it w/ quote( . ) ``` r None `parse` Parse R Expressions ---------------------------- ### Description `parse()` returns the parsed but unevaluated expressions in an `<expression>`, a “list” of `<call>`s. `str2expression(s)` and `str2lang(s)` return special versions of `parse(text=s, keep.source=FALSE)` and can therefore be regarded as transforming character strings `s` to expressions, calls, etc. ### Usage ``` parse(file = "", n = NULL, text = NULL, prompt = "?", keep.source = getOption("keep.source"), srcfile, encoding = "unknown") str2lang(s) str2expression(text) ``` ### Arguments | | | | --- | --- | | `file` | a [connection](connections), or a character string giving the name of a file or a URL to read the expressions from. If `file` is `""` and `text` is missing or `NULL` then input is taken from the console. | | `n` | integer (or coerced to integer). The maximum number of expressions to parse. If `n` is `NULL` or negative or `NA` the input is parsed in its entirety. | | `text` | character vector. The text to parse. Elements are treated as if they were lines of a file. Other **R** objects will be coerced to character if possible. | | `prompt` | the prompt to print when parsing from the keyboard. `NULL` means to use **R**'s prompt, `getOption("prompt")`. | | `keep.source` | a logical value; if `TRUE`, keep source reference information. | | `srcfile` | `NULL`, a character vector, or a `<srcfile>` object. See the ‘Details’ section. | | `encoding` | encoding to be assumed for input strings. If the value is `"latin1"` or `"UTF-8"` it is used to mark character strings as known to be in Latin-1 or UTF-8: it is not used to re-encode the input. To do the latter, specify the encoding as part of the connection `con` or *via* `<options>(encoding=)`: see the example under `[file](connections)`. Arguments `encoding = "latin1"` and `encoding = "UTF-8"` are ignored with a warning when running in a MBCS locale. | | `s` | a `<character>` vector of length `1`, i.e., a “string”. | ### Details `parse(....)`: If `text` has length greater than zero (after coercion) it is used in preference to `file`. All versions of **R** accept input from a connection with end of line marked by LF (as used on Unix), CRLF (as used on DOS/Windows) or CR (as used on classic Mac OS). The final line can be incomplete, that is missing the final EOL marker. When input is taken from the console, `n = NULL` is equivalent to `n = 1`, and `n < 0` will read until an EOF character is read. (The EOF character is Ctrl-Z for the Windows front-ends.) The line-length limit is 4095 bytes when reading from the console (which may impose a lower limit: see ‘An Introduction to R’). The default for `srcfile` is set as follows. If `keep.source` is not `TRUE`, `srcfile` defaults to a character string, either `"<text>"` or one derived from `file`. When `keep.source` is `TRUE`, if `text` is used, `srcfile` will be set to a `[srcfilecopy](srcfile)` containing the text. If a character string is used for `file`, a `<srcfile>` object referring to that file will be used. When `srcfile` is a character string, error messages will include the name, but source reference information will not be added to the result. 
When `srcfile` is a `<srcfile>` object, source reference information will be retained. `str2expression(s)`: for a `<character>` vector `s`, `str2expression(s)` corresponds to `<parse>(text = s, keep.source=FALSE)`, which is always of type (`<typeof>`) and `<class>` `expression`. `str2lang(s)`: for a `<character>` string `s`, `str2lang(s)` corresponds to `<parse>(text = s, keep.source=FALSE)[[1]]` (plus a check that both `s` and the `parse(*)` result are of length one) which is typically a `call` but may also be a `symbol` aka `<name>`, `[NULL](null)` or an atomic constant such as `2`, `1L`, or `TRUE`. Put differently, the value of `str2lang(.)` is a call or one of its parts, in short “a call or simpler”. Currently, encoding is not handled in `str2lang()` and `str2expression()`. ### Value `parse()` and `str2expression()` return an object of type `"<expression>"`, for `parse()` with up to `n` elements if specified as a non-negative integer. `str2lang(s)`, `s` a string, returns “a `<call>` or simpler”, see the ‘Details:’ section. When `srcfile` is non-`NULL`, a `"srcref"` attribute will be attached to the result containing a list of `[srcref](srcfile)` records corresponding to each element, a `"srcfile"` attribute will be attached containing a copy of `srcfile`, and a `"wholeSrcref"` attribute will be attached containing a `[srcref](srcfile)` record corresponding to all of the parsed text. Detailed parse information will be stored in the `"srcfile"` attribute, to be retrieved by `[getParseData](../../utils/html/getparsedata)`. A syntax error (including an incomplete expression) will throw an error. Character strings in the result will have a declared encoding if `encoding` is `"latin1"` or `"UTF-8"`, or if `text` is supplied with every element of known encoding in a Latin-1 or UTF-8 locale. ### Partial parsing When a syntax error occurs during parsing, `parse` signals an error. The partial parse data will be stored in the `srcfile` argument if it is a `<srcfile>` object and the `text` argument was used to supply the text. In other cases it will be lost when the error is triggered. The partial parse data can be retrieved using `[getParseData](../../utils/html/getparsedata)` applied to the `srcfile` object. Because parsing was incomplete, it will typically include references to `"parent"` entries that are not present. ### Note Using `parse(text = *, ..)` or its simplified and hence more efficient versions `str2lang()` or `str2expression()` is at least an order of magnitude less efficient than `<call>(..)` or `[as.call](call)()`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Murdoch, D. (2010). “Source References”. *The R Journal*, **2**(2), 16–19. doi: [10.32614/RJ-2010-010](https://doi.org/10.32614/RJ-2010-010). ### See Also `<scan>`, `<source>`, `<eval>`, `<deparse>`. The source reference information can be used for debugging (see e.g. `[setBreakpoint](../../utils/html/findlinenum)`) and profiling (see `[Rprof](../../utils/html/rprof)`). It can be examined by `[getSrcref](../../utils/html/sourceutils)` and related functions. More detailed information is available through `[getParseData](../../utils/html/getparsedata)`. 
### Examples ``` fil <- tempfile(fileext = ".Rdmped") cat("x <- c(1, 4)\n x ^ 3 -10 ; outer(1:7, 5:9)\n", file = fil) # parse 3 statements from our temp file parse(file = fil, n = 3) unlink(fil) ## str2lang(<string>) || str2expression(<character>) : stopifnot(exprs = { identical( str2lang("x[3] <- 1+4"), quote(x[3] <- 1+4)) identical( str2lang("log(y)"), quote(log(y)) ) identical( str2lang("abc" ), quote(abc) -> qa) is.symbol(qa) & !is.call(qa) # a symbol/name, not a call identical( str2lang("1.375" ), 1.375) # just a number, not a call identical( str2expression(c("# a comment", "", "42")), expression(42) ) }) # A partial parse with a syntax error txt <- " x <- 1 an error " sf <- srcfile("txt") try(parse(text = txt, srcfile = sf)) getParseData(sf) ```
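As a small illustration of the ‘Note’ above, the same call can be built either by parsing a string or, more efficiently, with `call()` directly; a minimal sketch:

```
## Two ways to construct the call log(16, base = 2):
e1 <- str2lang("log(16, base = 2)") # parses a string (slower)
e2 <- call("log", 16, base = 2)     # builds the call directly (faster)
stopifnot(identical(e1, e2), eval(e1) == 4)
```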
r None `traceback` Get and Print Call Stacks -------------------------------------- ### Description By default `traceback()` prints the call stack of the last uncaught error, i.e., the sequence of calls that led to the error. This is useful when an error occurs with an unidentifiable error message. It can also be used to print the current stack or arbitrary lists of calls. `.traceback()` now *returns* the above call stack (and `traceback(x, *)` can be regarded as a convenience function for printing the result of `.traceback(x)`). ### Usage ``` traceback(x = NULL, max.lines = getOption("traceback.max.lines", getOption("deparse.max.lines", -1L))) .traceback(x = NULL, max.lines = getOption("traceback.max.lines", getOption("deparse.max.lines", -1L))) ``` ### Arguments | | | | --- | --- | | `x` | `NULL` (default, meaning `.Traceback`), or an integer count of calls to skip in the current stack, or a list or pairlist of calls. See the details. | | `max.lines` | a number, the maximum number of lines to be printed *per call*. The default is unlimited. Applies only when `x` is `NULL`, a `<list>` or a `[pairlist](list)` of calls, see the details. | ### Details The default display is of the stack of the last uncaught error, stored as a list of `<call>`s in `.Traceback`, which `traceback` prints in a user-friendly format. The stack of calls always contains all function calls and all foreign function calls (such as `[.Call](callexternal)`): if profiling is in progress it will include calls to some primitive functions. (Calls to builtins are included, but not to specials.) Errors which are caught *via* `<try>` or `[tryCatch](conditions)` do not generate a traceback, so what is printed is the call sequence for the last uncaught error, and not necessarily for the last error. If `x` is numeric, then the current stack is printed, skipping `x` entries at the top of the stack. For example, `options(error = function() traceback(3))` will print the stack at the time of the error, skipping the call to `traceback()` and `.traceback()` and the error function that called it. Otherwise, `x` is assumed to be a list or pairlist of calls or deparsed calls and will be displayed in the same way. `.traceback()` and by extension `traceback()` may trigger deparsing of `<call>`s. This is an expensive operation for large calls so it may be advisable to set `max.lines` to a reasonable value when such calls are on the call stack. ### Value `.traceback()` returns the deparsed call stack, deepest call first, as a list or pairlist. The number of lines deparsed from the call can be limited via `max.lines`. Calls for which `max.lines` results in truncated output will gain a `"truncated"` attribute. `traceback()` formats, prints, and returns the call stack produced by `.traceback()` invisibly. ### Warning Neither where `.Traceback` is stored nor the fact that it is visible is documented, and both are subject to change. As of **R** 4.0.0 `.Traceback` contains the `<call>`s as language objects, whereas previously these calls were deparsed. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### Examples ``` foo <- function(x) { print(1); bar(2) } bar <- function(x) { x + a.variable.which.does.not.exist } ## Not run: foo(2) # gives a strange error traceback() ## End(Not run) ## 2: bar(2) ## 1: foo(2) bar ## Ah, this is the culprit ... ## This will print the stack trace at the time of the error. 
options(error = function() traceback(3)) ``` r None `expand.grid` Create a Data Frame from All Combinations of Factor Variables ---------------------------------------------------------------------------- ### Description Create a data frame from all combinations of the supplied vectors or factors. See the description of the return value for precise details of the way this is done. ### Usage ``` expand.grid(..., KEEP.OUT.ATTRS = TRUE, stringsAsFactors = TRUE) ``` ### Arguments | | | | --- | --- | | `...` | vectors, factors or a list containing these. | | `KEEP.OUT.ATTRS` | a logical indicating whether the `"out.attrs"` attribute (see below) should be computed and returned. | | `stringsAsFactors` | logical specifying if character vectors are converted to factors. | ### Value A data frame containing one row for each combination of the supplied factors. The first factors vary fastest. The columns are labelled by the factors if these are supplied as named arguments or named components of a list. The row names are ‘automatic’. Attribute `"out.attrs"` is a list which gives the dimension and dimnames for use by `[predict](../../stats/html/predict)` methods. ### Note Conversion to a factor is done with levels in the order they occur in the character vectors (and not alphabetically, as is most common when converting to factors). ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `[combn](../../utils/html/combn)` (package `utils`) for the generation of all combinations of n elements, taken m at a time. ### Examples ``` require(utils) expand.grid(height = seq(60, 80, 5), weight = seq(100, 300, 50), sex = c("Male","Female")) x <- seq(0, 10, length.out = 100) y <- seq(-1, 1, length.out = 20) d1 <- expand.grid(x = x, y = y) d2 <- expand.grid(x = x, y = y, KEEP.OUT.ATTRS = FALSE) object.size(d1) - object.size(d2) ##-> 5992 or 8832 (on 32- / 64-bit platform) ``` r None `Encoding` Read or Set the Declared Encodings for a Character Vector --------------------------------------------------------------------- ### Description Read or set the declared encodings for a character vector. ### Usage ``` Encoding(x) Encoding(x) <- value enc2native(x) enc2utf8(x) ``` ### Arguments | | | | --- | --- | | `x` | A character vector. | | `value` | A character vector of positive length. | ### Details Character strings in **R** can be declared to be encoded in `"latin1"` or `"UTF-8"` or as `"bytes"`. These declarations can be read by `Encoding`, which will return a character vector of values `"latin1"`, `"UTF-8"`, `"bytes"` or `"unknown"`, or set, when `value` is recycled as needed and other values are silently treated as `"unknown"`. ASCII strings will never be marked with a declared encoding, since their representation is the same in all supported encodings. Strings marked as `"bytes"` are intended to be non-ASCII strings which should be manipulated as bytes, and never converted to a character encoding (so writing them to a text file is supported only by `writeLines(useBytes = TRUE)`). `enc2native` and `enc2utf8` convert elements of character vectors to the native encoding or UTF-8 respectively, taking any marked encoding into account. They are <primitive> functions, designed to do minimal copying. There are other ways for character strings to acquire a declared encoding apart from explicitly setting it (and these have changed as **R** has evolved). The parser marks strings containing \u or \U escapes. 
Functions `<scan>`, `[read.table](../../utils/html/read.table)`, `[readLines](readlines)`, and `<parse>` have an `encoding` argument that is used to declare encodings, `<iconv>` declares encodings from its `to` argument, and console input in suitable locales is also declared. `[intToUtf8](utf8conversion)` declares its output as `"UTF-8"`, and output text connections (see `[textConnection](textconnections)`) are marked if running in a suitable locale. Under some circumstances (see its help page) `<source>(encoding=)` will mark encodings of character strings it outputs. Most character manipulation functions will set the encoding on output strings if it was declared on the corresponding input. These include `<chartr>`, `<strsplit>(useBytes = FALSE)`, `[tolower](chartr)` and `[toupper](chartr)` as well as `[sub](grep)(useBytes = FALSE)` and `[gsub](grep)(useBytes = FALSE)`. Note that such functions do not *preserve* the encoding, but if they know the input encoding and that the string has been successfully re-encoded (to the current encoding or UTF-8), they mark the output. `<substr>` does preserve the encoding, and `<chartr>`, `[tolower](chartr)` and `[toupper](chartr)` preserve UTF-8 encoding on systems with Unicode wide characters. With their `fixed` and `perl` options, `<strsplit>`, `[sub](grep)` and `gsub` will give a marked UTF-8 result if any of the inputs are UTF-8. `<paste>` and `<sprintf>` return elements marked as bytes if any of the corresponding inputs is marked as bytes, and otherwise marked as UTF-8 if any of the inputs is marked as UTF-8. `<match>`, `<pmatch>`, `<charmatch>`, `<duplicated>` and `<unique>` all match in UTF-8 if any of the elements are marked as UTF-8. There is some ambiguity as to what is meant by a ‘Latin-1’ locale, since some OSes (notably Windows) make use of character positions undefined (or used for control characters) in the ISO 8859-1 character set. How such characters are interpreted is system-dependent but as from **R** 3.5.0 they are if possible interpreted as per Windows codepage 1252 (which Microsoft calls ‘Windows Latin 1 (ANSI)’) when converting to e.g. UTF-8. ### Value A character vector. For `enc2utf8` encodings are always marked: they are for `enc2native` in UTF-8 and Latin-1 locales. ### Examples ``` ## x is intended to be in latin1 x. <- x <- "fa\xE7ile" Encoding(x.) # "unknown" (UTF-8 loc.) | "latin1" (8859-1/CP-1252 loc.) | .... Encoding(x) <- "latin1" x xx <- iconv(x, "latin1", "UTF-8") Encoding(c(x., x, xx)) c(x, xx) xb <- xx; Encoding(xb) <- "bytes" xb # will be encoded in hex cat("x = ", x, ", xx = ", xx, ", xb = ", xb, "\n", sep = "") (Ex <- Encoding(c(x.,x,xx,xb))) stopifnot(identical(Ex, c(Encoding(x.), Encoding(x), Encoding(xx), Encoding(xb)))) ``` r None `make.unique` Make Character Strings Unique -------------------------------------------- ### Description Makes the elements of a character vector unique by appending sequence numbers to duplicates. ### Usage ``` make.unique(names, sep = ".") ``` ### Arguments | | | | --- | --- | | `names` | a character vector | | `sep` | a character string used to separate a duplicate name from its sequence number. | ### Details The algorithm used by `make.unique` has the property that `make.unique(c(A, B)) == make.unique(c(make.unique(A), B))`. In other words, you can append one string at a time to a vector, making it unique each time, and get the same result as applying `make.unique` to all of the strings at once. 
If character vector `A` is already unique, then `make.unique(c(A, B))` preserves `A`. ### Value A character vector of same length as `names` with duplicates changed, in the current locale's encoding. ### Author(s) Thomas P. Minka ### See Also `<make.names>` ### Examples ``` make.unique(c("a", "a", "a")) make.unique(c(make.unique(c("a", "a")), "a")) make.unique(c("a", "a", "a.2", "a")) make.unique(c(make.unique(c("a", "a")), "a.2", "a")) ## Now show a bit where this is used : trace(make.unique) ## Applied in data.frame() constructions: (d1 <- data.frame(x = 1, x = 2, x = 3)) # direct d2 <- data.frame(data.frame(x = 1, x = 2), x = 3) # pairwise stopifnot(identical(d1, d2), colnames(d1) == c("x", "x.1", "x.2")) untrace(make.unique) ``` r None `Startup` Initialization at Start of an R Session -------------------------------------------------- ### Description In **R**, the startup mechanism is as follows. Unless --no-environ was given on the command line, **R** searches for site and user files to process for setting environment variables. The name of the site file is the one pointed to by the environment variable R\_ENVIRON; if this is unset, ‘[R\_HOME](rhome)/etc/Renviron.site’ is used (if it exists, which it does not in a ‘factory-fresh’ installation). The name of the user file can be specified by the R\_ENVIRON\_USER environment variable; if this is unset, the files searched for are ‘.Renviron’ in the current or in the user's home directory (in that order). See ‘Details’ for how the files are read. Then **R** searches for the site-wide startup profile file of **R** code unless the command line option --no-site-file was given. The path of this file is taken from the value of the R\_PROFILE environment variable (after [tilde expansion](path.expand)). If this variable is unset, the default is ‘[R\_HOME](rhome)/etc/Rprofile.site’, which is used if it exists (which it does not in a ‘factory-fresh’ installation). This code is sourced into the base package. Users need to be careful not to unintentionally overwrite objects in base, and it is normally advisable to use `[local](eval)` if code needs to be executed: see the examples. Then, unless --no-init-file was given, **R** searches for a user profile, a file of **R** code. The path of this file can be specified by the R\_PROFILE\_USER environment variable (and [tilde expansion](path.expand) will be performed). If this is unset, a file called ‘.Rprofile’ is searched for in the current directory or in the user's home directory (in that order). The user profile file is sourced into the workspace. Note that when the site and user profile files are sourced only the base package is loaded, so objects in other packages need to be referred to by e.g. `utils::dump.frames` or after explicitly loading the package concerned. **R** then loads a saved image of the user workspace from ‘.RData’ in the current directory if there is one (unless --no-restore-data or --no-restore was specified on the command line). Next, if a function `.First` is found on the search path, it is executed as `.First()`. Finally, function `.First.sys()` in the base package is run. This calls `[require](library)` to attach the default packages specified by `<options>("defaultPackages")`. If the methods package is included, this will have been attached earlier (by function `.OptRequireMethods()`) so that namespace initializations such as those from the user workspace will proceed correctly. 
A function `.First` (and `[.Last](quit)`) can be defined in appropriate ‘.Rprofile’ or ‘Rprofile.site’ files or have been saved in ‘.RData’. If you want a different set of packages than the default ones when you start, insert a call to `<options>` in the ‘.Rprofile’ or ‘Rprofile.site’ file. For example, `options(defaultPackages = character())` will attach no extra packages on startup (only the base package) (or set `R_DEFAULT_PACKAGES=NULL` as an environment variable before running **R**). Using `options(defaultPackages = "")` or `R_DEFAULT_PACKAGES=""` enforces the R *system* default. On front-ends which support it, the commands history is read from the file specified by the environment variable R\_HISTFILE (default ‘.Rhistory’ in the current directory) unless --no-restore-history or --no-restore was specified. The command-line option --vanilla implies --no-site-file, --no-init-file, --no-environ and (except for `R CMD`) --no-restore ### Details Note that there are two sorts of files used in startup: *environment files* which contain lists of environment variables to be set, and *profile files* which contain **R** code. Lines in a site or user environment file should be either comment lines starting with `#`, or lines of the form `name=value`. The latter sets the environmental variable `name` to `value`, overriding an existing value. If `value` contains an expression of the form `${foo-bar}`, the value is that of the environmental variable `foo` if that is set, otherwise `bar`. For `${foo:-bar}`, the value is that of `foo` if that is set to a non-empty value, otherwise `bar`. (If it is of the form `${foo}`, the default is `""`.) This construction can be nested, so `bar` can be of the same form (as in `${foo-${bar-blah}}`). Note that the braces are essential: for example `$HOME` will not be interpreted. Leading and trailing white space in `value` are stripped. `value` is then processed in a similar way to a Unix shell: in particular the outermost level of (single or double) quotes is stripped, and backslashes are removed except inside quotes. On systems with sub-architectures (mainly Windows), the files ‘Renviron.site’ and ‘Rprofile.site’ are looked for first in architecture-specific directories, e.g. ‘[R\_HOME](rhome)/etc/i386/Renviron.site’. And e.g. ‘.Renviron.i386’ will be used in preference to ‘.Renviron’. There is a 100,000 byte limit on the length of a line (after expansions) in environment files (prior to **R** 4.1 it was 10,000). ### Note It is not intended that there be interaction with the user during startup code. Attempting to do so can crash the **R** process. On Unix versions of **R** there is also a file ‘[R\_HOME](rhome)/etc/Renviron’ which is read very early in the start-up processing. It contains environment variables set by **R** in the configure process. Values in that file can be overridden in site or user environment files: do not change ‘[R\_HOME](rhome)/etc/Renviron’ itself. Note that this is distinct from ‘[R\_HOME](rhome)/etc/Renviron.site’. Command-line options may well not apply to alternative front-ends: they do not apply to `R.app` on macOS. `R CMD check` and `R CMD build` do not always read the standard startup files, but they do always read specific Renviron files. The location of these can be controlled by the environment variables R\_CHECK\_ENVIRON and R\_BUILD\_ENVIRON. If these are set their value is used as the path for the Renviron file; otherwise, files ‘~/.R/check.Renviron’ or ‘~/.R/build.Renviron’ or sub-architecture-specific versions are employed. 
If you want ‘~/.Renviron’ or ‘~/.Rprofile’ to be ignored by child **R** processes (such as those run by `R CMD check` and `R CMD build`), set the appropriate environment variable R\_ENVIRON\_USER or R\_PROFILE\_USER to (if possible, which it is not on Windows) `""` or to the name of a non-existent file. Prior to **R** 4.0.0, `${foo-bar}` in an environment file skipped an empty `foo`: this has been changed to match the POSIX rules for parameter substitution in shells. ### See Also For the definition of the ‘home’ directory on Windows see the ‘rw-FAQ’ Q2.14. It can be found from a running **R** by `Sys.getenv("R_USER")`. `[.Last](quit)` for final actions at the close of an **R** session. `[commandArgs](commandargs)` for accessing the command line arguments. There are examples of using startup files to set defaults for graphics devices in the help for `[X11](../../grdevices/html/x11)` and `[quartz](../../grdevices/html/quartz)`. *An Introduction to R* for more command-line options: those affecting memory management are covered in the help file for [Memory](memory). `[readRenviron](readrenviron)` to read ‘.Renviron’ files. For profiling code, see `[Rprof](../../utils/html/rprof)`. ### Examples ``` ## Not run: ## Example ~/.Renviron on Unix R_LIBS=~/R/library PAGER=/usr/local/bin/less ## Example .Renviron on Windows R_LIBS=C:/R/library MY_TCLTK="c:/Program Files/Tcl/bin" ## Example of setting R_DEFAULT_PACKAGES (from R CMD check) R_DEFAULT_PACKAGES='utils,grDevices,graphics,stats' # this loads the packages in the order given, so they appear on # the search path in reverse order. ## Example of .Rprofile options(width=65, digits=5) options(show.signif.stars=FALSE) setHook(packageEvent("grDevices", "onLoad"), function(...) grDevices::ps.options(horizontal=FALSE)) set.seed(1234) .First <- function() cat("\n Welcome to R!\n\n") .Last <- function() cat("\n Goodbye!\n\n") ## Example of Rprofile.site local({ # add MASS to the default packages, set a CRAN mirror old <- getOption("defaultPackages"); r <- getOption("repos") r["CRAN"] <- "http://my.local.cran" options(defaultPackages = c(old, "MASS"), repos = r) ## (for Unix terminal users) set the width from COLUMNS if set cols <- Sys.getenv("COLUMNS") if(nzchar(cols)) options(width = as.integer(cols)) # interactive sessions get a fortune cookie (needs fortunes package) if (interactive()) fortunes::fortune() }) ## if .Renviron contains FOOBAR="coo\bar"doh\ex"abc\"def'" ## then we get # > cat(Sys.getenv("FOOBAR"), "\n") # coo\bardoh\exabc"def' ## End(Not run) ```
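The `${foo-bar}` substitution rules described under ‘Details’ can be tried out with `[readRenviron](readrenviron)` on a throw-away environment file; a minimal sketch (the variable names `AAA`, `BBB` and `NOSUCHVAR` are made up for illustration):

```
## Demonstrate ${name-default} substitution in an environment file.
tf <- tempfile()
writeLines(c("AAA=${HOME-/fallback}",     # value of HOME if set, else "/fallback"
             "BBB=${NOSUCHVAR-default}"), # NOSUCHVAR is unset, so "default"
           tf)
readRenviron(tf)
Sys.getenv(c("AAA", "BBB"))
unlink(tf)
```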
r None `as.environment` Coerce to an Environment Object ------------------------------------------------- ### Description A generic function coercing an **R** object to an `<environment>`. A number or a character string is converted to the corresponding environment on the search path. ### Usage ``` as.environment(x) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object to convert. If it is already an environment, just return it. If it is a positive number, return the environment corresponding to that position on the search list. If it is `-1`, the environment it is called from. If it is a character string, match the string to the names on the search list. If it is a list, the equivalent of `<list2env>(x, parent = emptyenv())` is returned. If `<is.object>(x)` is true and it has a `<class>` for which an `as.environment` method is found, that is used. | ### Details This is a <primitive> generic function: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). ### Value The corresponding environment object. ### Author(s) John Chambers ### See Also `<environment>` for creation and manipulation, `<search>`; `<list2env>`. ### Examples ``` as.environment(1) ## the global environment identical(globalenv(), as.environment(1)) ## is TRUE try( ## <<- stats need not be attached as.environment("package:stats")) ee <- as.environment(list(a = "A", b = pi, ch = letters[1:8])) ls(ee) # names of objects in ee utils::ls.str(ee) ``` r None `Sys.info` Extract System and User Information ----------------------------------------------- ### Description Reports system and user information. ### Usage ``` Sys.info() ``` ### Details This uses POSIX or Windows system calls. Note that OS names (`sysname`) might not be what you expect: for example macOS identifies itself as Darwin and Solaris as SunOS. `Sys.info()` returns details of the platform **R** is running on, whereas `[R.version](version)` gives details of the platform **R** was built on: the `release` and `version` may well be different. ### Value A character vector with fields | | | | --- | --- | | `sysname` | The operating system name. | | `release` | The OS release. | | `version` | The OS version. | | `nodename` | A name by which the machine is known on the network (if any). | | `machine` | A concise description of the hardware, often the CPU type. | | `login` | The user's login name, or `"unknown"` if it cannot be ascertained. | | `user` | The name of the real user ID, or `"unknown"` if it cannot be ascertained. | | `effective_user` | The name of the effective user ID, or `"unknown"` if it cannot be ascertained. This may differ from the real user in ‘set-user-ID’ processes. | On Unix-alike platforms: The first five fields come from the `uname(2)` system call. The login name comes from `getlogin(2)`, and the user names from `getpwuid(getuid())` and `getpwuid(geteuid())`. On Windows: The last three fields give the same value. ### Note The meaning of `release` and `version` is system-dependent: on a Unix-alike they normally refer to the kernel. There, usually `release` contains a numeric version and `version` gives additional information. Examples for `release`: ``` "4.17.11-200.fc28.x86_64" # Linux (Fedora) "3.16.0-5-amd64" # Linux (Debian) "17.7.0" # macOS 10.13.6 "5.11" # Solaris ``` There is no guarantee that the node or login or user names will be what you might reasonably expect. (In particular on some Linux distributions the login name is unknown from sessions with re-directed inputs.) 
The use of alternatives such as `system("whoami")` is not portable: the POSIX command `system("id")` is much more portable on Unix-alikes, provided only the POSIX options -[Ggu][nr] are used (and not the many BSD and GNU extensions). `whoami` is equivalent to `id -un` (on Solaris, `/usr/xpg4/bin/id -un`). Windows may report unexpected versions: there, see the help for `[osVersion](../../utils/html/sessioninfo)`. ### See Also `[.Platform](platform)`, and `[R.version](version)`. `[sessionInfo](../../utils/html/sessioninfo)()` gives a synopsis of both your system and the **R** session (and gives the OS version in a human-readable form). ### Examples ``` Sys.info() ## An alternative (and probably better) way to get the login name on Unix Sys.getenv("LOGNAME") ``` r None `autoload` On-demand Loading of Packages ----------------------------------------- ### Description `autoload` creates a promise-to-evaluate `autoloader` and stores it with name `name` in `.AutoloadEnv` environment. When **R** attempts to evaluate `name`, `autoloader` is run, the package is loaded and `name` is re-evaluated in the new package's environment. The result is that **R** behaves as if `package` was loaded but it does not occupy memory. `.Autoloaded` contains the names of the packages for which autoloading has been promised. ### Usage ``` autoload(name, package, reset = FALSE, ...) autoloader(name, package, ...) .AutoloadEnv .Autoloaded ``` ### Arguments | | | | --- | --- | | `name` | string giving the name of an object. | | `package` | string giving the name of a package containing the object. | | `reset` | logical: for internal use by `autoloader`. | | `...` | other arguments to `<library>`. | ### Value This function is invoked for its side-effect. It has no return value. ### See Also `[delayedAssign](delayedassign)`, `<library>` ### Examples ``` require(stats) autoload("interpSpline", "splines") search() ls("Autoloads") .Autoloaded x <- sort(stats::rnorm(12)) y <- x^2 is <- interpSpline(x, y) search() ## now has splines detach("package:splines") search() is2 <- interpSpline(x, y+x) search() ## and again detach("package:splines") ``` r None `Sys.glob` Wildcard Expansion on File Paths -------------------------------------------- ### Description Function to do wildcard expansion (also known as ‘globbing’) on file paths. ### Usage ``` Sys.glob(paths, dirmark = FALSE) ``` ### Arguments | | | | --- | --- | | `paths` | character vector of patterns for relative or absolute filepaths. Missing values will be ignored. | | `dirmark` | logical: should matches to directories from patterns that do not already end in `/` have a slash appended? May not be supported on all platforms. | ### Details This expands tilde (see [tilde expansion](path.expand)) and wildcards in file paths. For precise details of wildcards expansion, see your system's documentation on the `glob` system call. There is a POSIX 1003.2 standard (see <https://pubs.opengroup.org/onlinepubs/9699919799/functions/glob.html>) but some OSes will go beyond this. All systems should interpret `*` (match zero or more characters), `?` (match a single character) and (probably) `[` (begin a character class or range). The handling of paths ending with a separator is system-dependent. On a POSIX-2008 compliant OS they will match directories (only), but as they are not valid filepaths on Windows, they match nothing there. (Earlier POSIX standards allowed them to match files.) The rest of these details are indicative (and based on the POSIX standard).
If a filename starts with `.` this may need to be matched explicitly: for example `Sys.glob("*.RData")` may or may not match ‘.RData’ but will not usually match ‘.aa.RData’. Note that this is platform-dependent: e.g. on Solaris `Sys.glob("*.*")` matches ‘.’ and ‘..’. `[` begins a character class. If the first character in `[...]` is not `!`, this is a character class which matches a single character against any of the characters specified. The class cannot be empty, so `]` can be included provided it is first. If the first character is `!`, the character class matches a single character which is *none* of the specified characters. Whether `.` in a character class matches a leading `.` in the filename is OS-dependent. Character classes can include ranges such as `[A-Z]`: include `-` as a character by having it first or last in a class. (The interpretation of ranges should be locale-specific, so the example is not a good idea in an Estonian locale.) One can remove the special meaning of `?`, `*` and `[` by preceding them by a backslash (except within a character class). ### Value A character vector of matched file paths. The order is system-specific (but in the order of the elements of `paths`): it is normally collated in either the current locale or in byte (ASCII) order; however, on Windows collation is in the order of Unicode points. Directory errors are normally ignored, so the matches are to accessible file paths (but not necessarily accessible files). ### See Also `<path.expand>`. [Quotes](quotes) for handling backslashes in character strings. ### Examples ``` Sys.glob(file.path(R.home(), "library", "*", "R", "*.rdx")) ``` r None `getwd` Get or Set Working Directory ------------------------------------- ### Description `getwd` returns an absolute filepath representing the current working directory of the **R** process; `setwd(dir)` is used to set the working directory to `dir`. ### Usage ``` getwd() setwd(dir) ``` ### Arguments | | | | --- | --- | | `dir` | A character string: [tilde expansion](path.expand) will be done. | ### Details See <files> for how file paths with marked encodings are interpreted. ### Value `getwd` returns a character string or `NULL` if the working directory is not available. On Windows the path returned will use `/` as the path separator and be encoded in UTF-8. The path will not have a trailing `/` unless it is the root directory (of a drive or share on Windows). `setwd` returns the current directory before the change, invisibly and with the same conventions as `getwd`. It will give an error if it does not succeed (including if it is not implemented). ### Note Note that the return value is said to be **an** absolute filepath: there can be more than one representation of the path to a directory and on some OSes the value returned can differ after changing directories and changing back to the same directory (for example if symbolic links have been traversed). ### See Also `<list.files>` for the *contents* of a directory. `[normalizePath](normalizepath)` for a ‘canonical’ path name. ### Examples ``` (WD <- getwd()) if (!is.null(WD)) setwd(WD) ``` r None `kappa` Compute or Estimate the Condition Number of a Matrix ------------------------------------------------------------- ### Description The condition number of a regular (square) matrix is the product of the *norm* of the matrix and the norm of its inverse (or pseudo-inverse), and hence depends on the kind of matrix-norm. 
`kappa()` computes by default (an estimate of) the 2-norm condition number of a matrix or of the *R* matrix of a *QR* decomposition, perhaps of a linear fit. The 2-norm condition number can be shown to be the ratio of the largest to the smallest *non-zero* singular value of the matrix. `rcond()` computes an approximation of the **r**eciprocal **cond**ition number, see the details. ### Usage ``` kappa(z, ...) ## Default S3 method: kappa(z, exact = FALSE, norm = NULL, method = c("qr", "direct"), ...) ## S3 method for class 'lm' kappa(z, ...) ## S3 method for class 'qr' kappa(z, ...) .kappa_tri(z, exact = FALSE, LINPACK = TRUE, norm = NULL, ...) rcond(x, norm = c("O","I","1"), triangular = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `z, x` | A matrix or the result of `<qr>` or a fit from a class inheriting from `"lm"`. | | `exact` | logical. Should the result be exact? | | `norm` | character string, specifying the matrix norm with respect to which the condition number is to be computed, see also `<norm>`. For `rcond`, the default is `"O"`, meaning the **O**ne- or 1-norm. The (currently only) other possible value is `"I"` for the infinity norm. | | `method` | a partially matched character string specifying the method to be used; `"qr"` is the default for back-compatibility, mainly. | | `triangular` | logical. If true, the matrix used is just the lower triangular part of `z`. | | `LINPACK` | logical. If true and `z` is not complex, the LINPACK routine `dtrco()` is called; otherwise the relevant LAPACK routine is. | | `...` | further arguments passed to or from other methods; for `kappa.*()`, notably `LINPACK` when `norm` is not `"2"`. | ### Details For `kappa()`, if `exact = FALSE` (the default) the 2-norm condition number is estimated by a cheap approximation. However, the exact calculation (via `<svd>`) is also likely to be quick enough. Note that the 1- and Inf-norm condition numbers are much faster to calculate, and `rcond()` computes these ***r**eciprocal* condition numbers, also for complex matrices, using standard LAPACK routines. `kappa` and `rcond` are different interfaces to *partly* identical functionality. `.kappa_tri` is an internal function called by `kappa.qr` and `kappa.default`. Unsuccessful results from the underlying LAPACK code will result in an error giving a positive error code: these can only be interpreted by detailed study of the FORTRAN code. ### Value The condition number, *kappa*, or an approximation if `exact = FALSE`. ### Author(s) The design was inspired by (but differs considerably from) the S function of the same name described in Chambers (1992). ### Source The LAPACK routines `DTRCON` and `ZTRCON` and the LINPACK routine `DTRCO`. LAPACK and LINPACK are from <https://www.netlib.org/lapack/> and <https://www.netlib.org/linpack/> and their guides are listed in the references. ### References Anderson, E. and ten others (1999) *LAPACK Users' Guide*. Third Edition. SIAM. Available on-line at <https://www.netlib.org/lapack/lug/lapack_lug.html>. Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. Dongarra, J. J., Bunch, J. R., Moler, C. B. and Stewart, G. W. (1978) *LINPACK Users Guide.* Philadelphia: SIAM Publications. ### See Also `<norm>`; `<svd>` for the singular value decomposition and `<qr>` for the *QR* one. ### Examples ``` kappa(x1 <- cbind(1, 1:10)) # 15.71 kappa(x1, exact = TRUE) # 13.68 kappa(x2 <- cbind(x1, 2:11)) # high! [x2 is singular!]
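## (Added sketch, not in the original example:) rcond() requires a
## square matrix and returns the *reciprocal* condition number, so
## values near zero indicate ill-conditioning:
m <- cbind(c(1, 0), c(1e10, 1))
rcond(m)          # tiny: m is very ill-conditioned
rcond(diag(2))    # 1: perfectly conditioned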
hilbert <- function(n) { i <- 1:n; 1 / outer(i - 1, i, "+") } sv9 <- svd(h9 <- hilbert(9))$ d kappa(h9) # pretty high! kappa(h9, exact = TRUE) == max(sv9) / min(sv9) kappa(h9, exact = TRUE) / kappa(h9) # 0.677 (i.e., rel.error = 32%) ``` r None `chkDots` Warn About Extraneous Arguments in the "..." of Its Caller --------------------------------------------------------------------- ### Description Warn about extraneous arguments in the `...` of its caller. A utility to be used, e.g., in S3 methods which need a formal `...` argument but do not make any use of it. This helps catch user errors in calling the function in question (which is the caller of `chkDots()`). ### Usage ``` chkDots(..., which.call = -1, allowed = character(0)) ``` ### Arguments | | | | --- | --- | | `...` | “the dots”, as passed from the caller. | | `which.call` | passed to `[sys.call](sys.parent)()`. A caller may use -2 if the message should mention *its* caller. | | `allowed` | not yet implemented: character vector of *named* elements in `...` which are “allowed” and hence not warned about. | ### Author(s) Martin Maechler, first version outside base, June 2012. ### See Also `<warning>`, `[...](dots)`. ### Examples ``` seq.default ## <- you will see ' chkDots(...) ' seq(1,5, foo = "bar") # gives warning via chkDots() ## warning with more than one ...-entry: density.f <- function(x, ...) NextMethod("density") x <- density(structure(rnorm(10), class="f"), bar=TRUE, baz=TRUE) ``` r None `dynload` Foreign Function Interface ------------------------------------- ### Description Load or unload DLLs (also known as shared objects), and test whether a C function or Fortran subroutine is available. ### Usage ``` dyn.load(x, local = TRUE, now = TRUE, ...) dyn.unload(x) is.loaded(symbol, PACKAGE = "", type = "") ``` ### Arguments | | | | --- | --- | | `x` | a character string giving the pathname to a DLL, also known as a dynamic shared object. (See ‘Details’ for what these terms mean.) | | `local` | a logical value controlling whether the symbols in the DLL are stored in their own local table and not shared across DLLs, or added to the global symbol table. Whether this has any effect is system-dependent. | | `now` | a logical controlling whether all symbols are resolved (and relocated) immediately the library is loaded or deferred until they are used. This control is useful for developers testing whether a library is complete and has all the necessary symbols, and for users to ignore missing symbols. Whether this has any effect is system-dependent. | | `...` | other arguments for future expansion. | | `symbol` | a character string giving a symbol name. | | `PACKAGE` | if supplied, confine the search for the `name` to the DLL given by this argument (plus the conventional extension, ‘.so’, ‘.sl’, ‘.dll’, ...). This is intended to add safety for packages, which can ensure by using this argument that no other package can override their external symbols. This is used in the same way as in `[.C](foreign)`, `[.Call](callexternal)`, `[.Fortran](foreign)` and `[.External](callexternal)` functions. | | `type` | The type of symbol to look for: can be any (`""`, the default), `"Fortran"`, `"Call"` or `"External"`. | ### Details The objects `dyn.load` loads are called ‘dynamically loadable libraries’ (abbreviated to ‘DLL’) on all platforms except macOS, which uses the term for a different sort of object. On Unix-alikes they are also called ‘dynamic shared objects’ (‘DSO’), or ‘shared objects’ for short.
(The POSIX standards use ‘executable object file’, but no one else does.) See ‘See Also’ and the ‘Writing R Extensions’ and ‘R Installation and Administration’ manuals for how to create and install a suitable DLL. Unfortunately a very few platforms (e.g., Compaq Tru64) do not handle the `PACKAGE` argument correctly, and may incorrectly find symbols linked into **R**. The additional arguments to `dyn.load` mirror the different aspects of the mode argument to the `dlopen()` routine on POSIX systems. They are available so that users can exercise greater control over the loading process for an individual library. In general, the default values are appropriate and you should override them only if there is good reason and you understand the implications. The `local` argument allows one to control whether the symbols in the DLL being attached are visible to other DLLs. While maintaining the symbols in their own namespace is good practice, the ability to share symbols across related ‘chapters’ is useful in many cases. Additionally, on certain platforms and versions of an operating system, certain libraries must have their symbols loaded globally to successfully resolve all symbols. One should be careful of the potential side-effect of using lazy loading via the `now` argument as `FALSE`. If a routine is called that has a missing symbol, the process will terminate immediately. The intended use is for library developers to call with value `TRUE` to check that all symbols are actually resolved and for regular users to call with `FALSE` so that missing symbols can be ignored and the available ones can be called. The initial motivation for adding these was to avoid such termination in the `_init()` routines of the Java virtual machine library. However, symbols loaded locally may not be (read probably) available to other DLLs. Those added to the global table are available to all other elements of the application and so can be shared across two different DLLs. Some (very old) systems do not provide (explicit) support for local/global and lazy/eager symbol resolution. This can be the source of subtle bugs. One can arrange to have warning messages emitted when unsupported options are used. This is done by setting either of the options `verbose` or `warn` to be non-zero via the `<options>` function. There is a short discussion of these additional arguments with some example code available at <http://www.stat.ucdavis.edu/~duncan/R/dynload/>. ### Value The function `dyn.load` is used for its side effect which links the specified DLL to the executing **R** image. Calls to `.C`, `.Call`, `.Fortran` and `.External` can then be used to execute compiled C functions or Fortran subroutines contained in the library. The return value of `dyn.load` is an object of class `DLLInfo`. See `[getLoadedDLLs](getloadeddlls)` for information about this class. The function `dyn.unload` unlinks the DLL. Note that unloading a DLL and then re-loading a DLL of the same name may or may not work: on Solaris it uses the first version loaded. Note also that some DLLs cannot be safely unloaded at all: unloading a DLL which implements C finalizers but does not unregister them on unload causes R to crash. `is.loaded` checks if the symbol name is loaded *and searchable* and hence available for use as a character string value for argument `.NAME` in `.C` or `.Fortran` or `.Call` or `.External`. It will succeed if any one of the four calling functions would succeed in using the entry point unless `type` is specified. 
(See `[.Fortran](foreign)` for how Fortran symbols are mapped.) Note that symbols in base packages are not searchable, and other packages can be so marked. ### Warning Do not use `dyn.unload` on a DLL loaded by `<library.dynam>`: use `[library.dynam.unload](library.dynam)`. This is needed for system housekeeping. ### Note `is.loaded` requires the name you would give to `.C` etc and **not** (as in S) that remapped by the defunct functions `symbol.C` or `symbol.For`. By default, the maximum number of DLLs that can be loaded is now 614 when the OS limit on the number of open files allows or can be increased, but less otherwise (but it will be at least 100). A specific maximum can be requested *via* the environment variable R\_MAX\_NUM\_DLLS, which has to be set (to a value between 100 and 1000 inclusive) before starting an **R** session. If the OS limit on the number of open files does not allow using this maximum and cannot be increased, **R** will fail to start with an error. The maximum is not allowed to be greater than 60% of the OS limit on the number of open files (essentially unlimited on Windows, on Unix typically 1024, but 256 on macOS). The limit can sometimes (including on macOS) be modified using command `ulimit -n` (`sh`, `bash`) or `limit descriptors` (`csh`) in the shell used to launch **R**. Increasing R\_MAX\_NUM\_DLLS comes with some memory overhead. If the OS limit on the number of open files cannot be determined, the DLL limit is 100 and cannot be changed *via* R\_MAX\_NUM\_DLLS. The creation of DLLs and the runtime linking of them into executing programs is very platform dependent. In recent years there has been some simplification in the process because the C subroutine call `dlopen` has become the POSIX standard for doing this. Under Unix-alikes `dyn.load` uses the `dlopen` mechanism and should work on all platforms which support it. On Windows it uses the standard mechanism (`LoadLibrary`) for loading DLLs. The original code for loading DLLs in Unix-alikes was provided by Heiner Schwarte. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<library.dynam>` to be used inside a package's `[.onLoad](ns-hooks)` initialization. `[SHLIB](../../utils/html/shlib)` for how to create suitable DLLs. `[.C](foreign)`, `[.Fortran](foreign)`, `[.External](callexternal)`, `[.Call](callexternal)`. ### Examples ``` ## expect all of these to be false in R >= 3.0.0. is.loaded("supsmu") # Fortran entry point in stats is.loaded("supsmu", "stats", "Fortran") is.loaded("PDF", type = "External") # pdf() device in grDevices ```
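As an additional sketch (not part of the original examples), the DLLs already loaded in a session can be inspected via `[getLoadedDLLs](getloadeddlls)`; each element is a `DLLInfo` object whose fields can be extracted with `[[`:

```
## Sketch: list the DLLs loaded in this session and their file paths
dlls <- getLoadedDLLs()
names(dlls)                                 # e.g. "base", "utils", ...
vapply(dlls, function(d) d[["path"]], "")   # file path of each DLL
```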
r None `unlink` Delete Files and Directories -------------------------------------- ### Description `unlink` deletes the file(s) or directories specified by `x`. ### Usage ``` unlink(x, recursive = FALSE, force = FALSE, expand = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | a character vector with the names of the file(s) or directories to be deleted. | | `recursive` | logical. Should directories be deleted recursively? | | `force` | logical. Should permissions be changed (if possible) to allow the file or directory to be removed? | | `expand` | logical. Should wildcards (see ‘Details’ below) and tilde (see `<path.expand>`) be expanded? | ### Details If `recursive = FALSE` directories are not deleted, not even empty ones. On most platforms ‘file’ includes symbolic links, fifos and sockets. `unlink(x, recursive = TRUE)` deletes just the symbolic link if the target of such a link is a directory. Wildcard expansion (normally ‘\*’ and ‘?’ are allowed) is done by the internal code of `[Sys.glob](sys.glob)`. Wildcards never match a leading ‘.’ in the filename, and files ‘.’, ‘..’ and ‘~’ will never be considered for deletion. Wildcards will only be expanded if the system supports it. Most systems will support not only ‘\*’ and ‘?’ but also character classes such as ‘[a-z]’ (see the `man` pages for the system call `glob` on your OS). The metacharacters `* ? [` can occur in Unix filenames, and this makes it difficult to use `unlink` to delete such files (see `[file.remove](files)`), although escaping the metacharacters by backslashes usually works. If a metacharacter matches nothing it is considered as a literal character. `recursive = TRUE` might not be supported on all platforms, in which case it will be ignored, with a warning: however there are no known current examples. ### Value `0` for success, `1` for failure, invisibly. Not deleting a non-existent file is not a failure, nor is being unable to delete a directory if `recursive = FALSE`. However, missing values in `x` are regarded as failures. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[file.remove](files)`. r None `data.matrix` Convert a Data Frame to a Numeric Matrix ------------------------------------------------------- ### Description Return the matrix obtained by converting all the variables in a data frame to numeric mode and then binding them together as the columns of a matrix. Factors and ordered factors are replaced by their internal codes. ### Usage ``` data.matrix(frame, rownames.force = NA) ``` ### Arguments | | | | --- | --- | | `frame` | a data frame whose components are logical vectors, factors or numeric or character vectors. | | `rownames.force` | logical indicating if the resulting matrix should have character (rather than `NULL`) `[rownames](colnames)`. The default, `NA`, uses `NULL` rownames if the data frame has ‘automatic’ row.names or for a zero-row data frame. | ### Details Logical and factor columns are converted to integers. Character columns are first converted to factors and then to integers. Any other column which is not numeric (according to `[is.numeric](numeric)`) is converted by `[as.numeric](numeric)` or, for S4 objects, `[as](../../methods/html/as)(, "numeric")`. If all columns are integer (after conversion) the result is an integer matrix, otherwise a numeric (double) matrix.
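As a quick illustration of these coercions (an added sketch; the ‘Examples’ section below has more):

```
d <- data.frame(flag = c(TRUE, FALSE, TRUE),
                f    = factor(c("b", "a", "b")),
                ch   = c("x", "y", "x"))
data.matrix(d)   # logicals -> 0/1, factors and characters -> integer codes
```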
### Value If `frame` inherits from class `"data.frame"`, an integer or numeric matrix of the same dimensions as `frame`, with dimnames taken from the `row.names` (or `NULL`, depending on `rownames.force`) and `names`. Otherwise, the result of `[as.matrix](matrix)`. ### Note The default behaviour for data frames differs from **R** < 2.5.0 which always gave the result character rownames. ### References Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `[as.matrix](matrix)`, `<data.frame>`, `<matrix>`. ### Examples ``` DF <- data.frame(a = 1:3, b = letters[10:12], c = seq(as.Date("2004-01-01"), by = "week", length.out = 3), stringsAsFactors = TRUE) data.matrix(DF[1:2]) data.matrix(DF) ``` r None `startsWith` Does String Start or End With Another String? ----------------------------------------------------------- ### Description Determines if entries of `x` start or end with string (entries of) `prefix` or `suffix` respectively, where strings are recycled to common lengths. ### Usage ``` startsWith(x, prefix) endsWith(x, suffix) ``` ### Arguments | | | | --- | --- | | `x` | vector of `<character>` string whose “starts” are considered. | | `prefix, suffix` | `<character>` vector (often of length one). | ### Details `startsWith()` is equivalent to but much faster than ``` substring(x, 1, nchar(prefix)) == prefix ``` or also ``` grepl("^<prefix>", x) ``` where `prefix` is not to contain special regular expression characters (and for `grepl`, `x` does not contain missing values, see below). The code has an optimized branch for the most common usage in which `prefix` or `suffix` is of length one, and is further optimized in a UTF-8 or 8-byte locale if that is an ASCII string. ### Value A `<logical>` vector, of “common length” of `x` and `prefix` (or `suffix`), i.e., of the longer of the two lengths unless one of them is zero when the result is also of zero length. A shorter input is recycled to the output length. ### See Also `[grepl](grep)`, `[substring](substr)`; the partial string matching functions `<charmatch>` and `<pmatch>` solve a different task. ### Examples ``` startsWith(search(), "package:") # typically at least two FALSE, nowadays often three x1 <- c("Foobar", "bla bla", "something", "another", "blu", "brown", "blau blüht der Enzian")# non-ASCII x2 <- cbind( startsWith(x1, "b"), startsWith(x1, "bl"), startsWith(x1, "bla"), endsWith(x1, "n"), endsWith(x1, "an")) rownames(x2) <- x1; colnames(x2) <- c("b", "b1", "bla", "n", "an") x2 ## Non-equivalence in case of missing values in 'x', see Details: x <- c("all", "but", NA_character_) cbind(startsWith(x, "a"), substring(x, 1L, 1L) == "a", grepl("^a", x)) ``` r None `plot` Generic X-Y Plotting ---------------------------- ### Description Generic function for plotting of **R** objects. For simple scatter plots, `[plot.default](../../graphics/html/plot.default)` will be used. However, there are `plot` methods for many **R** objects, including `<function>`s, `<data.frame>`s, `[density](../../stats/html/density)` objects, etc. Use `methods(plot)` and the documentation for these. Most of these methods are implemented using traditional graphics (the graphics package), but this is not mandatory. For more details about graphical parameter arguments used by traditional graphics, see `[par](../../graphics/html/par)`. ### Usage ``` plot(x, y, ...) ``` ### Arguments | | | | --- | --- | | `x` | the coordinates of points in the plot. 
Alternatively, a single plotting structure, function or *any **R** object with a `plot` method* can be provided. | | `y` | the y coordinates of points in the plot, *optional* if `x` is an appropriate structure. | | `...` | Arguments to be passed to methods, such as [graphical parameters](../../graphics/html/par) (see `[par](../../graphics/html/par)`). Many methods will accept the following arguments: `type` what type of plot should be drawn. Possible types are * `"p"` for **p**oints, * `"l"` for **l**ines, * `"b"` for **b**oth, * `"c"` for the lines part alone of `"b"`, * `"o"` for both ‘**o**verplotted’, * `"h"` for ‘**h**istogram’ like (or ‘high-density’) vertical lines, * `"s"` for stair **s**teps, * `"S"` for other **s**teps, see ‘Details’ below, * `"n"` for no plotting. All other `type`s give a warning or an error; using, e.g., `type = "punkte"` being equivalent to `type = "p"` for S compatibility. Note that some methods, e.g. `[plot.factor](../../graphics/html/plot.factor)`, do not accept this. `main` an overall title for the plot: see `[title](../../graphics/html/title)`. `sub` a sub title for the plot: see `[title](../../graphics/html/title)`. `xlab` a title for the x axis: see `[title](../../graphics/html/title)`. `ylab` a title for the y axis: see `[title](../../graphics/html/title)`. `asp` the *y/x* aspect ratio, see `[plot.window](../../graphics/html/plot.window)`. | ### Details The two step types differ in their x-y preference: Going from *(x1,y1)* to *(x2,y2)* with *x1 < x2*, `type = "s"` moves first horizontal, then vertical, whereas `type = "S"` moves the other way around. ### Note The `plot` generic was moved from the graphics package to the base package in **R** 4.0.0. It is currently re-exported from the graphics namespace to allow packages importing it from there to continue working, but this may change in future versions of **R**. ### See Also `[plot.default](../../graphics/html/plot.default)`, `[plot.formula](../../graphics/html/plot.formula)` and other methods; `[points](../../graphics/html/points)`, `[lines](../../graphics/html/lines)`, `[par](../../graphics/html/par)`. For thousands of points, consider using `[smoothScatter](../../graphics/html/smoothscatter)()` instead of `plot()`. For X-Y-Z plotting see `[contour](../../graphics/html/contour)`, `[persp](../../graphics/html/persp)` and `[image](../../graphics/html/image)`. ### Examples ``` require(stats) # for lowess, rpois, rnorm require(graphics) # for plot methods plot(cars) lines(lowess(cars)) plot(sin, -pi, 2*pi) # see ?plot.function ## Discrete Distribution Plot: plot(table(rpois(100, 5)), type = "h", col = "red", lwd = 10, main = "rpois(100, lambda = 5)") ## Simple quantiles/ECDF, see ecdf() {library(stats)} for a better one: plot(x <- sort(rnorm(47)), type = "s", main = "plot(x, type = \"s\")") points(x, cex = .5, col = "dark red") ``` r None `ifelse` Conditional Element Selection --------------------------------------- ### Description `ifelse` returns a value with the same shape as `test` which is filled with elements selected from either `yes` or `no` depending on whether the element of `test` is `TRUE` or `FALSE`. ### Usage ``` ifelse(test, yes, no) ``` ### Arguments | | | | --- | --- | | `test` | an object which can be coerced to logical mode. | | `yes` | return values for true elements of `test`. | | `no` | return values for false elements of `test`. | ### Details If `yes` or `no` are too short, their elements are recycled. 
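For instance (a small added illustration of the recycling rule):

```
## 'test' has length 4; 'yes' and 'no' (each length 2) are recycled
ifelse(c(TRUE, FALSE, TRUE, FALSE), 1:2, c(-1, -2))
#> [1]  1 -2  1 -2
```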
`yes` will be evaluated if and only if any element of `test` is true, and analogously for `no`. Missing values in `test` give missing values in the result. ### Value A vector of the same length and attributes (including dimensions and `"class"`) as `test` and data values from the values of `yes` or `no`. The mode of the answer will be coerced from logical to accommodate first any values taken from `yes` and then any values taken from `no`. ### Warning The mode of the result may depend on the value of `test` (see the examples), and the class attribute (see `[oldClass](class)`) of the result is taken from `test` and may be inappropriate for the values selected from `yes` and `no`. Sometimes it is better to use a construction such as ``` (tmp <- yes; tmp[!test] <- no[!test]; tmp) ``` , possibly extended to handle missing values in `test`. Further note that `if(test) yes else no` is much more efficient and often much preferable to `ifelse(test, yes, no)` whenever `test` is a simple true/false result, i.e., when `length(test) == 1`. The `srcref` attribute of functions is handled specially: if `test` is a simple true result and `yes` evaluates to a function with `srcref` attribute, `ifelse` returns `yes` including its attribute (the same applies to a false `test` and `no` argument). This functionality is only for backwards compatibility, the form `if(test) yes else no` should be used whenever `yes` and `no` are functions. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[if](control)`. ### Examples ``` x <- c(6:-4) sqrt(x) #- gives warning sqrt(ifelse(x >= 0, x, NA)) # no warning ## Note: the following also gives the warning ! ifelse(x >= 0, sqrt(x), NA) ## ifelse() strips attributes ## This is important when working with Dates and factors x <- seq(as.Date("2000-02-29"), as.Date("2004-10-04"), by = "1 month") ## has many "yyyy-mm-29", but a few "yyyy-03-01" in the non-leap years y <- ifelse(as.POSIXlt(x)$mday == 29, x, NA) head(y) # not what you expected ... ==> need restore the class attribute: class(y) <- class(x) y ## This is a (not atypical) case where it is better *not* to use ifelse(), ## but rather the more efficient and still clear: y2 <- x y2[as.POSIXlt(x)$mday != 29] <- NA ## which gives the same as ifelse()+class() hack: stopifnot(identical(y2, y)) ## example of different return modes (and 'test' alone determining length): yes <- 1:3 no <- pi^(1:4) utils::str( ifelse(NA, yes, no) ) # logical, length 1 utils::str( ifelse(TRUE, yes, no) ) # integer, length 1 utils::str( ifelse(FALSE, yes, no) ) # double, length 1 ``` r None `substitute` Substituting and Quoting Expressions -------------------------------------------------- ### Description `substitute` returns the parse tree for the (unevaluated) expression `expr`, substituting any variables bound in `env`. `quote` simply returns its argument. The argument is not evaluated and can be any R expression. `enquote` is a simple one-line utility which transforms a call of the form `Foo(....)` into the call `quote(Foo(....))`. This is typically used to protect a `<call>` from early evaluation. ### Usage ``` substitute(expr, env) quote(expr) enquote(cl) ``` ### Arguments | | | | --- | --- | | `expr` | any syntactically valid **R** expression | | `cl` | a `<call>`, i.e., an **R** object of `<class>` (and `<mode>`) `"call"`. | | `env` | an environment or a list object. Defaults to the current evaluation environment. 
| ### Details The typical use of `substitute` is to create informative labels for data sets and plots. The `myplot` example below shows a simple use of this facility. It uses the functions `<deparse>` and `substitute` to create labels for a plot which are character string versions of the actual arguments to the function `myplot`. Substitution takes place by examining each component of the parse tree as follows: If it is not a bound symbol in `env`, it is unchanged. If it is a promise object, i.e., a formal argument to a function or explicitly created using `[delayedAssign](delayedassign)()`, the expression slot of the promise replaces the symbol. If it is an ordinary variable, its value is substituted, unless `env` is `[.GlobalEnv](environment)` in which case the symbol is left unchanged. Both `quote` and `substitute` are ‘special’ <primitive> functions which do not evaluate their arguments. ### Value The `<mode>` of the result is generally `"call"` but may in principle be any type. In particular, single-variable expressions have mode `"name"` and constants have the appropriate base mode. ### Note `substitute` works on a purely lexical basis. There is no guarantee that the resulting expression makes any sense. Substituting and quoting often cause confusion when the argument is `expression(...)`. The result is a call to the `<expression>` constructor function and needs to be evaluated with `<eval>` to give the actual expression object. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<missing>` for argument ‘missingness’, `<bquote>` for partial substitution, `[sQuote](squote)` and `[dQuote](squote)` for adding quotation marks to strings, `[all.names](allnames)` to retrieve the symbol names from an expression or call. ### Examples ``` require(graphics) (s.e <- substitute(expression(a + b), list(a = 1))) #> expression(1 + b) (s.s <- substitute( a + b, list(a = 1))) #> 1 + b c(mode(s.e), typeof(s.e)) # "call", "language" c(mode(s.s), typeof(s.s)) # (the same) # but: (e.s.e <- eval(s.e)) #> expression(1 + b) c(mode(e.s.e), typeof(e.s.e)) # "expression", "expression" substitute(x <- x + 1, list(x = 1)) # nonsense myplot <- function(x, y) plot(x, y, xlab = deparse1(substitute(x)), ylab = deparse1(substitute(y))) ## Simple examples about lazy evaluation, etc: f1 <- function(x, y = x) { x <- x + 1; y } s1 <- function(x, y = substitute(x)) { x <- x + 1; y } s2 <- function(x, y) { if(missing(y)) y <- substitute(x); x <- x + 1; y } a <- 10 f1(a) # 11 s1(a) # 11 s2(a) # a typeof(s2(a)) # "symbol" ``` r None `grouping` Grouping Permutation -------------------------------- ### Description `grouping` returns a permutation which rearranges its first argument such that identical values are adjacent to each other. Also returned as attributes are the group-wise partitioning and the maximum group size. ### Usage ``` grouping(...) ``` ### Arguments | | | | --- | --- | | `...` | a sequence of numeric, character or logical vectors, all of the same length, or a classed **R** object. | ### Details The function partially sorts the elements so that identical values are adjacent. `NA` values come last. This is guaranteed to be stable, so ties are preserved, and if the data are already grouped/sorted, the grouping is unchanged. This is useful for aggregation and is particularly fast for character vectors. 
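For example, a sketch (added here) of group-wise aggregation using the `ends` attribute:

```
x <- c("b", "a", "b", "a", "a")
v <- c(1, 2, 3, 4, 5)
g <- grouping(x)
ends <- attr(g, "ends")
## group totals from cumulative sums of the permuted values
diff(c(0, cumsum(v[g])[ends]))   # 11 (the "a" group), 4 (the "b" group)
```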
Under the covers, the `"radix"` method of `<order>` is used, and the same caveats apply, including restrictions on character encodings and lack of support for [long vectors](longvectors) (those with *2^31* or more elements). Real-valued numbers are slightly rounded to account for numerical imprecision. Like `order`, for a classed **R** object the grouping is based on the result of `<xtfrm>`. ### Value An object of class `"grouping"`, the representation of which should be considered experimental and subject to change. It is an integer vector with two attributes: | | | | --- | --- | | `ends` | subscripts in the result corresponding to the last member of each group | | `maxgrpn` | the maximum group size | ### See Also `<order>`, `<xtfrm>`. ### Examples ``` (ii <- grouping(x <- c(1, 1, 3:1, 1:4, 3), y <- c(9, 9:1), z <- c(2, 1:9))) ## 6 5 2 1 7 4 10 8 3 9 rbind(x, y, z)[, ii] ``` r None `dput` Write an Object to a File or Recreate it ------------------------------------------------ ### Description Writes an ASCII text representation of an **R** object to a file, the **R** console, or a connection, or uses one to recreate the object. ### Usage ``` dput(x, file = "", control = c("keepNA", "keepInteger", "niceNames", "showAttributes")) dget(file, keep.source = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | an object. | | `file` | either a character string naming a file or a [connection](connections). `""` indicates output to the console. | | `control` | character vector indicating deparsing options. See `[.deparseOpts](deparseopts)` for their description. | | `keep.source` | logical: should the source formatting be retained when parsing functions, if possible? | ### Details `dput` opens `file` and deparses the object `x` into that file. The object name is not written (unlike `dump`). If `x` is a function the associated environment is stripped. Hence scoping information can be lost. Deparsing an object is difficult, and not always possible. With the default `control`, `dput()` attempts to deparse in a way that is readable, but for more complex or unusual objects (see `<dump>`), not likely to be parsed as identical to the original. Use `control = "all"` for the most complete deparsing; use `control = NULL` for the simplest deparsing, not even including attributes. `dput` will warn if fewer characters were written to a file than expected, which may indicate a full or corrupt file system. To display saved source rather than deparsing the internal representation include `"useSource"` in `control`. **R** currently saves source only for function definitions. If you do not care about source representation (e.g., for a data object), for speed set `options(keep.source = FALSE)` when calling `source`. ### Value For `dput`, the first argument invisibly. For `dget`, the object created. ### Note This is **not** a good way to transfer objects between **R** sessions. `<dump>` is better, but the functions `<save>` and `[saveRDS](readrds)` are designed to be used for transporting **R** data, and will work with **R** objects that `dput` does not handle correctly as well as being much faster. To avoid the risk of a source attribute out of sync with the actual function definition, the source attribute of a function will never be written as an attribute. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<deparse>`, `<dump>`, `<write>`.
### Examples ``` fil <- tempfile() ## Write an ASCII version of the 'base' function mean() to our temp file, .. dput(base::mean, fil) ## ... read it back into 'bar' and confirm it is the same bar <- dget(fil) stopifnot(all.equal(bar, base::mean, check.environment = FALSE)) ## Create a function with comments baz <- function(x) { # Subtract from one 1-x } ## and display it dput(baz) ## and now display the saved source dput(baz, control = "useSource") ## Numeric values: xx <- pi^(1:3) dput(xx) dput(xx, control = "digits17") dput(xx, control = "hexNumeric") dput(xx, fil); dget(fil) - xx # slight rounding on all platforms dput(xx, fil, control = "digits17") dget(fil) - xx # slight rounding on some platforms dput(xx, fil, control = "hexNumeric"); dget(fil) - xx unlink(fil) xn <- setNames(xx, paste0("pi^",1:3)) dput(xn) # nicer, now "niceNames" being part of default 'control' dput(xn, control = "S_compat") # no names ## explicitly asking for output as in R < 3.5.0: dput(xn, control = c("keepNA", "keepInteger", "showAttributes")) ```
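A further sketch (added, not in the original examples): with `control = "all"` attributes are deparsed too, so a round trip through a file can reproduce a structured object:

```
m <- matrix(1:4, 2, dimnames = list(c("r1", "r2"), c("c1", "c2")))
f2 <- tempfile()
dput(m, f2, control = "all")
identical(dget(f2), m)   # TRUE: dim and dimnames survived
unlink(f2)
```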
r None `substr` Substrings of a Character Vector ------------------------------------------ ### Description Extract or replace substrings in a character vector. ### Usage ``` substr(x, start, stop) substring(text, first, last = 1000000L) substr(x, start, stop) <- value substring(text, first, last = 1000000L) <- value ``` ### Arguments | | | | --- | --- | | `x, text` | a character vector. | | `start, first` | integer. The first element to be replaced. | | `stop, last` | integer. The last element to be replaced. | | `value` | a character vector, recycled if necessary. | ### Details `substring` is compatible with S, with `first` and `last` instead of `start` and `stop`. For vector arguments, it expands the arguments cyclically to the length of the longest *provided* none are of zero length. When extracting, if `start` is larger than the string length then `""` is returned. For the extraction functions, `x` or `text` will be converted to a character vector by `[as.character](character)` if it is not already one. For the replacement functions, if `start` is larger than the string length then no replacement is done. If the portion to be replaced is longer than the replacement string, then only the portion the length of the string is replaced. If any argument is an `NA` element, the corresponding element of the answer is `NA`. Elements of the result will have the encoding declared as that of the current locale (see `[Encoding](encoding)`) if the corresponding input had a declared Latin-1 or UTF-8 encoding and the current locale is either Latin-1 or UTF-8. If an input element has declared `"bytes"` encoding (see `[Encoding](encoding)`), the subsetting is done in units of bytes not characters. ### Value For `substr`, a character vector of the same length and with the same attributes as `x` (after possible coercion). For `substring`, a character vector of length the longest of the arguments. This will have names taken from `x` (if it has any after coercion, repeated as needed), and other attributes copied from `x` (if it is the longest of the arguments). Elements of `x` with a declared encoding (see `[Encoding](encoding)`) will be returned with the same encoding. ### Note The S4 version of `substring<-` ignores `last`; this version does not. These functions are often used with `<nchar>` to truncate a display. That does not really work (you want to limit the width, not the number of characters, so it would be better to use `<strtrim>`), but at least make sure you use the default `nchar(type = "c")`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (`substring`.) ### See Also `<strsplit>`, `<paste>`, `<nchar>`. ### Examples ``` substr("abcdef", 2, 4) substring("abcdef", 1:6, 1:6) ## strsplit is more efficient ... substr(rep("abcdef", 4), 1:4, 4:5) x <- c("asfef", "qwerty", "yuiop[", "b", "stuff.blah.yech") substr(x, 2, 5) substring(x, 2, 4:6) substring(x, 2) <- c("..", "+++") x ``` r None `getDLLRegisteredRoutines` Reflectance Information for C/Fortran routines in a DLL ----------------------------------------------------------------------------------- ### Description This function allows us to query the set of routines in a DLL that are registered with R to enhance dynamic lookup, error handling when calling native routines, and potentially security in the future. This function provides a description of each of the registered routines in the DLL for the different interfaces, i.e.
`[.C](foreign)`, `[.Call](callexternal)`, `[.Fortran](foreign)` and `[.External](callexternal)`. ### Usage ``` getDLLRegisteredRoutines(dll, addNames = TRUE) ``` ### Arguments | | | | --- | --- | | `dll` | a character string or `DLLInfo` object. The character string specifies the file name of the DLL of interest, and is given without the file name extension (e.g., the ‘.dll’ or ‘.so’) and with no directory/path information. So a file ‘MyPackage/libs/MyPackage.so’ would be specified as MyPackage. The `DLLInfo` objects can be obtained directly in calls to `[dyn.load](dynload)` and `<library.dynam>`, or can be found after the DLL has been loaded using `[getLoadedDLLs](getloadeddlls)`, which returns a list of `DLLInfo` objects (index-able by DLL file name). The `DLLInfo` approach avoids any ambiguities related to two DLLs having the same name but corresponding to files in different directories. | | `addNames` | a logical value. If this is `TRUE`, the elements of the returned lists are named using the names of the routines (as seen by R via registration or raw name). If `FALSE`, these names are not computed and assigned to the lists. As a result, the call should be quicker. The name information is also available in the `NativeSymbolInfo` objects in the lists. | ### Details This takes the registration information after it has been registered and processed by the R internals. In other words, it uses the extended information. There is a `print` method for the class, which prints only the types which have registered routines. ### Value A list of class `"DLLRegisteredRoutines"` with four elements corresponding to the routines registered for the `.C`, `.Call`, `.Fortran` and `.External` interfaces. Each is a list (of class `"NativeRoutineList"`) with as many elements as there were routines registered for that interface. Each element identifies a routine and is an object of class `"NativeSymbolInfo"`. An object of this class has the following fields: | | | | --- | --- | | `name` | the registered name of the routine (not necessarily the name in the C code). | | `address` | the memory address of the routine as resolved in the loaded DLL. This may be `NULL` if the symbol has not yet been resolved. | | `dll` | an object of class `DLLInfo` describing the DLL. This is the same for all elements returned. | | `numParameters` | the number of arguments the native routine is to be called with. | ### Author(s) Duncan Temple Lang [[email protected]](mailto:[email protected]) ### References ‘Writing R Extensions’ manual for symbol registration. Duncan Temple Lang (2001). “In Search of C/C++ & FORTRAN Routines”. *R News*, **1**(3), 20–23. <https://www.r-project.org/doc/Rnews/Rnews_2001-3.pdf>. ### See Also `[getLoadedDLLs](getloadeddlls)`, `[getNativeSymbolInfo](getnativesymbolinfo)` for information on the entry points listed. ### Examples ``` dlls <- getLoadedDLLs() getDLLRegisteredRoutines(dlls[["base"]]) getDLLRegisteredRoutines("stats") ``` r None `Rhome` Return the R Home Directory ------------------------------------ ### Description Return the **R** home directory, or the full path to a component of the **R** installation. ### Usage ``` R.home(component = "home") ``` ### Arguments | | | | --- | --- | | `component` | As well as `"home"` which gives the **R** home directory, other known values are `"bin"`, `"doc"`, `"etc"`, `"include"`, `"modules"` and `"share"` giving the paths to the corresponding parts of an **R** installation.
| ### Details The **R** home directory is the top-level directory of the **R** installation being run. The **R** home directory is often referred to as R\_HOME, and is the value of an environment variable of that name in an **R** session. It can be found outside an **R** session by `R [RHOME](../../utils/html/rhome)`. ### Value A character string giving the **R** home directory or path to a particular component. Normally the components are all subdirectories of the **R** home directory, but this need not be the case in a Unix-like installation. The value for `"modules"` and on Windows `"bin"` is a sub-architecture-specific location. On a Unix-alike, the constructed paths are based on the current values of the environment variables R\_HOME and where set R\_SHARE\_DIR, R\_DOC\_DIR and R\_INCLUDE\_DIR (these are set on startup and should not be altered). On Windows the values of `R.home()` and R\_HOME are switched to the 8.3 short form of path elements if required and if the Windows service to do that is enabled. The value of R\_HOME is set to use forward slashes (since many package maintainers pass it unquoted to shells, for example in ‘Makefile’s). ### See Also `[commandArgs](commandargs)()[1]` may provide related information. ### Examples ``` ## These result quite platform dependently : rbind(home = R.home(), bin = R.home("bin")) # often a sub directory of 'home' list.files(R.home("bin")) ``` r None `nchar` Count the Number of Characters (or Bytes or Width) ----------------------------------------------------------- ### Description `nchar` takes a character vector as an argument and returns a vector whose elements contain the sizes of the corresponding elements of `x`. Internally, it is a generic, for which methods can be defined (see [InternalMethods](internalmethods)). `nzchar` is a fast way to find out if elements of a character vector are non-empty strings. ### Usage ``` nchar(x, type = "chars", allowNA = FALSE, keepNA = NA) nzchar(x, keepNA = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | character vector, or a vector to be coerced to a character vector. Giving a factor is an error. | | `type` | character string: partial matching to one of `c("bytes", "chars", "width")`. See ‘Details’. | | `allowNA` | logical: should `NA` be returned for invalid multibyte strings or `"bytes"`-encoded strings (rather than throwing an error)? | | `keepNA` | logical: should `NA` be returned when `x` is `[NA](na)`? If false, `nchar()` returns `2`, as that is the number of printing characters used when strings are written to output, and `nzchar()` is `TRUE`. The default for `nchar()`, `NA`, means to use `keepNA = TRUE` unless `type` is `"width"`. | ### Details The ‘size’ of a character string can be measured in one of three ways (corresponding to the `type` argument): `bytes` The number of bytes needed to store the string (plus in C a final terminator which is not counted). `chars` The number of characters. `width` The number of columns `<cat>` will use to print the string in a monospaced font. The same as `chars` if this cannot be calculated. These will often be the same, and almost always will be in single-byte locales (but note how `type` determines the default for `keepNA`). There will be differences between the first two with multibyte character sequences, e.g. in UTF-8 locales. The internal equivalent of the default method of `[as.character](character)` is performed on `x` (so there is no method dispatch). 
If you want to operate on non-vector objects, passing them through `<deparse>` first will be required. ### Value For `nchar`, an integer vector giving the sizes of each element. For missing values (i.e., `NA`, i.e., `[NA\_character\_](na)`), `nchar()` returns `[NA\_integer\_](na)` if `keepNA` is true, and `2`, the number of printing characters, if false. `type = "width"` gives (an approximation to) the number of columns used in printing each element in a terminal font, taking into account double-width, zero-width and ‘composing’ characters. The approximation is likely to be poor when there are unassigned or non-printing characters. If `allowNA = TRUE` and an element is detected as invalid in a multi-byte character set such as UTF-8, its number of characters and the width will be `NA`. Otherwise the number of characters will be non-negative, so `!is.na(nchar(x, "chars", TRUE))` is a test of validity. A character string marked with `"bytes"` encoding (see `[Encoding](encoding)`) has a number of bytes, but neither a known number of characters nor a width, so the latter two types are `NA` if `allowNA = TRUE`, otherwise an error. Names, dims and dimnames are copied from the input. For `nzchar`, a logical vector of the same length as `x`, true if and only if the element has non-zero length; if the element is `NA`, `nzchar()` is true when `keepNA` is false (the default) and `NA` otherwise. ### Note This does **not** by default give the number of characters that will be used to `print()` the string. Use `[encodeString](encodestring)` to find that. Where character strings have been marked as UTF-8, the number of characters and widths will be computed in UTF-8, even though printing may use escapes such as <U+2642> in a non-UTF-8 locale. The concept of ‘width’ is a slippery one even in a monospaced font. Some human languages have the concept of *combining* characters, in which two or more characters are rendered together: an example would be `"y\u306"`, which is two characters of width one: combining characters are given width zero, and there are other zero-width characters such as the zero-width space `"\u200b"`. Some East Asian languages have ‘wide’ characters, ideographs which are conventionally printed across two columns when mixed with ASCII and other ‘narrow’ characters in those languages. The problem is that whether a computer prints wide characters over two or one columns depends on the font, with it not being uncommon to use two columns in a font intended for East Asian users and a single column in a ‘Western’ font. Unicode has encodings for ‘fullwidth’ versions of ASCII characters and ‘halfwidth’ versions of Katakana (Japanese) and Hangul (Korean) characters. Then there is the ‘East Asian Ambiguous class’ (Greek, Cyrillic, signs, some accented Latin chars, etc), for which the historical practice was to use two columns in East Asia and one elsewhere. The width quoted by `nchar` for characters in that class (and some others) depends on the locale, being one except in some East Asian locales on some OSes (notably Windows). Control characters are given width zero in multi-byte locales, but are usually given width one in single-byte ones (as their positions are often undefined and may be re-used as in CP1252 *vs* Latin-1). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
Unicode Standard Annex #11: *East Asian Width.* <https://www.unicode.org/reports/tr11/> ### See Also `[strwidth](../../graphics/html/strwidth)` giving width of strings for plotting; `<paste>`, `<substr>`, `<strsplit>` ### Examples ``` x <- c("asfef", "qwerty", "yuiop[", "b", "stuff.blah.yech") nchar(x) # 5 6 6 1 15 nchar(deparse(mean)) # 18 17 <-- unless mean differs from base::mean x[3] <- NA; x nchar(x, keepNA= TRUE) # 5 6 NA 1 15 nchar(x, keepNA=FALSE) # 5 6 2 1 15 stopifnot(identical(nchar(x ), nchar(x, keepNA= TRUE)), identical(nchar(x, "w"), nchar(x, keepNA=FALSE)), identical(is.na(x), is.na(nchar(x)))) ##' nchar() for all three types : nchars <- function(x, ...) vapply(c("chars", "bytes", "width"), function(tp) nchar(x, tp, ...), integer(length(x))) nchars("\u200b") # in R versions (>= 2015-09-xx): ## chars bytes width ## 1 3 0 data.frame(x, nchars(x)) ## all three types : same unless for NA ## force the same by forcing 'keepNA': (ncT <- nchars(x, keepNA = TRUE)) ## .... NA NA NA .... (ncF <- nchars(x, keepNA = FALSE))## .... 2 2 2 .... stopifnot(apply(ncT, 1, function(.) length(unique(.))) == 1, apply(ncF, 1, function(.) length(unique(.))) == 1) ``` r None `rep` Replicate Elements of Vectors and Lists ---------------------------------------------- ### Description `rep` replicates the values in `x`. It is a generic function, and the (internal) default method is described here. `rep.int` and `rep_len` are faster simplified versions for two common cases. Internally, they are generic, so methods can be defined for them (see [InternalMethods](internalmethods)). ### Usage ``` rep(x, ...) rep.int(x, times) rep_len(x, length.out) ``` ### Arguments | | | | --- | --- | | `x` | a vector (of any mode including a `<list>`) or a factor or (for `rep` only) a `POSIXct` or `POSIXlt` or `Date` object; or an S4 object containing such an object. | | `...` | further arguments to be passed to or from other methods. For the internal default method these can include: `times` an integer-valued vector giving the (non-negative) number of times to repeat each element if of length `length(x)`, or to repeat the whole vector if of length 1. Negative or `NA` values are an error. A `double` vector is accepted, other inputs being coerced to an integer or double vector. `length.out` non-negative integer. The desired length of the output vector. Other inputs will be coerced to a double vector and the first element taken. Ignored if `NA` or invalid. `each` non-negative integer. Each element of `x` is repeated `each` times. Other inputs will be coerced to an integer or double vector and the first element taken. Treated as `1` if `NA` or invalid. | | `times, length.out` | see `...` above. | ### Details The default behaviour is as if the call was ``` rep(x, times = 1, length.out = NA, each = 1) ``` . Normally just one of the additional arguments is specified, but if `each` is specified with either of the other two, its replication is performed first, and then that implied by `times` or `length.out`. If `times` consists of a single integer, the result consists of the whole input repeated this many times. If `times` is a vector of the same length as `x` (after replication by `each`), the result consists of `x[1]` repeated `times[1]` times, `x[2]` repeated `times[2]` times and so on. `length.out` may be given in place of `times`, in which case `x` is repeated as many times as is necessary to create a vector of this length. If both are given, `length.out` takes priority and `times` is ignored. 
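For instance, the rules above give:

```
rep(1:3, times = 2)            # 1 2 3 1 2 3   whole vector repeated
rep(1:3, times = c(3, 1, 2))   # 1 1 1 2 3 3   element-wise repetition
rep(1:3, each = 2, times = 2)  # 'each' replication is performed first
rep(1:3, length.out = 7)       # 1 2 3 1 2 3 1 recycled to length 7
```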
Non-integer values of `times` will be truncated towards zero. If `times` is a computed quantity it is prudent to add a small fuzz or use `<round>`. And analogously for `each`. If `x` has length zero and `length.out` is supplied and is positive, the values are filled in using the extraction rules, that is by an `NA` of the appropriate class for an atomic vector (`0` for raw vectors) and `NULL` for a list. ### Value An object of the same type as `x`. `rep.int` and `rep_len` return no attributes (except the class if returning a factor). The default method of `rep` gives the result names (which will almost always contain duplicates) if `x` had names, but retains no other attributes. ### Note Function `rep.int` is a simple case which was provided as a separate function partly for S compatibility and partly for speed (especially when names can be dropped). The performance of `rep` has been improved since, but `rep.int` is still at least twice as fast when `x` has names. The name `rep.int` long precedes making `rep` generic. Function `rep` is a primitive, but (partial) matching of argument names is performed as for normal functions. For historical reasons `rep` (only) works on `NULL`: the result is always `NULL` even when `length.out` is positive. Although it has never been documented, these functions have always worked on <expression> vectors. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<seq>`, `<sequence>`, `[replicate](lapply)`. ### Examples ``` rep(1:4, 2) rep(1:4, each = 2) # not the same. rep(1:4, c(2,2,2,2)) # same as second. rep(1:4, c(2,1,2,1)) rep(1:4, each = 2, length.out = 4) # first 4 only. rep(1:4, each = 2, length.out = 10) # 8 integers plus two recycled 1's. rep(1:4, each = 2, times = 3) # length 24, 3 complete replications rep(1, 40*(1-.8)) # length 7 on most platforms rep(1, 40*(1-.8)+1e-7) # better ## replicate a list fred <- list(happy = 1:10, name = "squash") rep(fred, 5) # date-time objects x <- .leap.seconds[1:3] rep(x, 2) rep(as.POSIXlt(x), rep(2, 3)) ## named factor x <- factor(LETTERS[1:4]); names(x) <- letters[1:4] x rep(x, 2) rep(x, each = 2) rep.int(x, 2) # no names rep_len(x, 10) ```
r None `numeric_version` Numeric Versions
-----------------------------------

### Description

A simple S3 class for representing numeric versions including package versions, and associated methods.

### Usage

```
numeric_version(x, strict = TRUE)
package_version(x, strict = TRUE)
R_system_version(x, strict = TRUE)
getRversion()
```

### Arguments

| | | | --- | --- | | `x` | a character vector with suitable numeric version strings (see ‘Details’); for `package_version`, alternatively an R version object as obtained by `[R.version](version)`. | | `strict` | a logical indicating whether invalid numeric versions should result in an error (default) or not. |

### Details

Numeric versions are sequences of one or more non-negative integers, usually (e.g., in package ‘DESCRIPTION’ files) represented as character strings with the elements of the sequence concatenated and separated by single . or - characters. **R** package versions consist of at least two such integers, an **R** system version of exactly three (major, minor and patchlevel). Functions `numeric_version`, `package_version` and `R_system_version` create a representation from such strings (if suitable) which allows for coercion and testing, combination, comparison, summaries (min/max), inclusion in data frames, subscripting, and printing. The classes can hold a vector of such representations. `getRversion` returns the version of the running **R** as an R system version object. The `[[` operator extracts or replaces a single version. To access the integers of a version use two indices: see the examples.

### See Also

`[compareVersion](../../utils/html/compareversion)`; `[packageVersion](../../utils/html/packagedescription)` for the version of a specific **R** package. `[R.version](version)` etc for the version of **R** (and the information underlying `getRversion()`).

### Examples

```
x <- package_version(c("1.2-4", "1.2-3", "2.1"))
x < "1.4-2.3"
c(min(x), max(x))
x[2, 2]
x$major
x$minor
if(getRversion() <= "2.5.0") { ## work around missing feature
  cat("Your version of R, ", as.character(getRversion()),
      ", is outdated.\n", "Now trying to work around that ...\n", sep = "")
}
x[[c(1, 3)]] # '4' as a numeric vector, same as x[1, 3]
x[1, 3]      # 4 as an integer
x[[2, 3]] <- 0    # zero the patchlevel
x[[c(2, 3)]] <- 0 # same
x
x[[3]] <- "2.2.3"; x
x <- c(x, package_version("0.0"))
is.na(x)[4] <- TRUE
stopifnot(identical(is.na(x), c(rep(FALSE,3), TRUE)), anyNA(x))
```

r None `stopifnot` Ensure the Truth of R Expressions
----------------------------------------------

### Description

If any of the expressions (in `...` or `exprs`) are not `<all>` `TRUE`, `<stop>` is called, producing an error message indicating the *first* expression which was not (`<all>`) true.

### Usage

```
stopifnot(..., exprs, exprObject, local = TRUE)
```

### Arguments

| | | | --- | --- | | `..., exprs` | any number of **R** expressions, which should each evaluate to (a logical vector of all) `[TRUE](logical)`. Use *either* `...` *or* `exprs`, the latter typically an unevaluated expression of the form ``` { expr1 expr2 .... } ``` Note that e.g., positive numbers are *not* `TRUE`, even when they are coerced to `TRUE`, e.g., inside `if(.)` or in arithmetic computations in **R**. If names are provided to `...`, they will be used in lieu of the default error message. | | `exprObject` | alternative to `exprs` or `...`: an ‘expression-like’ object, typically an `<expression>`, but also a `<call>`, a `<name>`, or atomic constant such as `TRUE`.
| | `local` | (only when `exprs` is used:) indicates the `<environment>` in which the expressions should be evaluated; by default the one from where `stopifnot()` has been called. | ### Details This function is intended for use in regression tests or also argument checking of functions, in particular to make them easier to read. `stopifnot(A, B)` or equivalently `stopifnot(exprs= {A ; B})` are conceptually equivalent to ``` { if(any(is.na(A)) || !all(A)) stop(...); if(any(is.na(B)) || !all(B)) stop(...) } ``` Since **R** version 3.6.0, `stopifnot()` no longer handles potential errors or warnings (by `[tryCatch](conditions)()` etc) for each single expression and may use `[sys.call](sys.parent)(<n>)` to get a meaningful and short error message in case an expression did not evaluate to all TRUE. This provides considerably less overhead. Since **R** version 3.5.0, expressions *are* evaluated sequentially, and hence evaluation stops as soon as there is a “non-TRUE”, as indicated by the above conceptual equivalence statement. Also, since **R** version 3.5.0, `stopifnot(exprs = { ... })` can be used alternatively and may be preferable in the case of several expressions, as they are more conveniently evaluated interactively (“no extraneous `,` ”). Since **R** version 3.4.0, when an expression (from `...`) is not true *and* is a call to `<all.equal>`, the error message will report the (first part of the) differences reported by `<all.equal>(*)`. ### Value (`[NULL](null)` if all statements in `...` are `TRUE`.) ### Note Trying to use the `stopifnot(exprs = ..)` version via a shortcut, say, ``` assertWRONG <- function(exprs) stopifnot(exprs = exprs) ``` is delicate and the above is *not a good idea*. Contrary to `stopifnot()` which takes care to evaluate the parts of `exprs` one by one and stop at the first non-TRUE, the above short cut would typically evaluate all parts of `exprs` and pass the result, i.e., typically of the *last* entry of `exprs` to `stopifnot()`. However, a more careful version, ``` assert <- function(exprs) eval.parent(substitute(stopifnot(exprs = exprs))) ``` may be a nice short cut for `stopifnot(exprs = *)` calls using the more commonly known verb as function name. ### See Also `<stop>`, `<warning>`; `[assertCondition](../../tools/html/assertcondition)` in package tools complements `stopifnot()` for testing warnings and errors. ### Examples ``` stopifnot(1 == 1, all.equal(pi, 3.14159265), 1 < 2) # all TRUE m <- matrix(c(1,3,3,1), 2, 2) stopifnot(m == t(m), diag(m) == rep(1, 2)) # all(.) |=> TRUE op <- options(error = expression(NULL)) # "disabling stop(.)" << Use with CARE! 
>> stopifnot(length(10)) # gives an error: '1' is *not* TRUE ## even when if(1) "ok" works stopifnot(all.equal(pi, 3.141593), 2 < 2, (1:10 < 12), "a" < "b") ## More convenient for interactive "line by line" evaluation: stopifnot(exprs = { all.equal(pi, 3.1415927) 2 < 2 1:10 < 12 "a" < "b" }) eObj <- expression(2 < 3, 3 <= 3:6, 1:10 < 2) stopifnot(exprObject = eObj) stopifnot(exprObject = quote(3 == 3)) stopifnot(exprObject = TRUE) # long all.equal() error messages are abbreviated: stopifnot(all.equal(rep(list(pi),4), list(3.1, 3.14, 3.141, 3.1415))) # The default error message can be overridden to be more informative: m[1,2] <- 12 stopifnot("m must be symmetric"= m == t(m)) #=> Error: m must be symmetric options(op) # revert to previous error handler ``` r None `serialize` Simple Serialization Interface ------------------------------------------- ### Description A simple low-level interface for serializing to connections. ### Usage ``` serialize(object, connection, ascii, xdr = TRUE, version = NULL, refhook = NULL) unserialize(connection, refhook = NULL) ``` ### Arguments | | | | --- | --- | | `object` | **R** object to serialize. | | `connection` | an open [connection](connections) or (for `serialize`) `NULL` or (for `unserialize`) a raw vector (see ‘Details’). | | `ascii` | a logical. If `TRUE` or `NA`, an ASCII representation is written; otherwise (default) a binary one. See also the comments in the help for `<save>`. | | `xdr` | a logical: if a binary representation is used, should a big-endian one (XDR) be used? | | `version` | the workspace format version to use. `NULL` specifies the current default version (3). The only other supported value is 2, the default from **R** 1.4.0 to **R** 3.5.0. | | `refhook` | a hook function for handling reference objects. | ### Details The function `serialize` serializes `object` to the specified connection. If `connection` is `NULL` then `object` is serialized to a raw vector, which is returned as the result of `serialize`. Sharing of reference objects is preserved within the object but not across separate calls to `serialize`. `unserialize` reads an object (as written by `serialize`) from `connection` or a raw vector. The `refhook` functions can be used to customize handling of non-system reference objects (all external pointers and weak references, and all environments other than namespace and package environments and `.GlobalEnv`). The hook function for `serialize` should return a character vector for references it wants to handle; otherwise it should return `NULL`. The hook for `unserialize` will be called with character vectors supplied to `serialize` and should return an appropriate object. For a text-mode connection, the default value of `ascii` is set to `TRUE`: only ASCII representations can be written to text-mode connections and attempting to use `ascii = FALSE` will throw an error. The format consists of a single line followed by the data: the first line contains a single character: `X` for binary serialization and `A` for ASCII serialization, followed by a new line. (The format used is identical to that used by `[readRDS](readrds)`.) As almost all systems in current use are little-endian, `xdr = FALSE` can be used to avoid byte-shuffling at both ends when transferring data from one little-endian machine to another (or between processes on the same machine). Depending on the system, this can speed up serialization and unserialization by a factor of up to 3x. 
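As a minimal round-trip sketch (the object and temporary file here are arbitrary choices, not part of the interface):

```
obj <- list(a = 1:3, b = "text")
## to a raw vector, avoiding byte-shuffling on little-endian machines
r <- serialize(obj, NULL, xdr = FALSE)
identical(unserialize(r), obj)          # TRUE

## to and from a binary file connection
tf <- tempfile()
con <- file(tf, "wb")
serialize(obj, con)                     # returns NULL for a connection
close(con)
con <- file(tf, "rb")
identical(unserialize(con), obj)        # TRUE
close(con); unlink(tf)
```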
### Value For `serialize`, `NULL` unless `connection = NULL`, when the result is returned in a raw vector. For `unserialize` an **R** object. ### Warning These functions have provided a stable interface since **R** 2.4.0 (when the storage of serialized objects was changed from character to raw vectors). However, the serialization format may change in future versions of **R**, so this interface should not be used for long-term storage of **R** objects. On 32-bit platforms a raw vector is limited to *2^31 - 1* bytes, but **R** objects can exceed this and their serializations will normally be larger than the objects. ### See Also `[saveRDS](readrds)` for a more convenient interface to serialize an object to a file or connection. `<save>` and `<load>` to serialize and restore one or more named objects. The ‘R Internals’ manual for details of the format used. ### Examples ``` x <- serialize(list(1,2,3), NULL) unserialize(x) ## see also the examples for saveRDS ``` r None `dimnames` Dimnames of an Object --------------------------------- ### Description Retrieve or set the dimnames of an object. ### Usage ``` dimnames(x) dimnames(x) <- value provideDimnames(x, sep = "", base = list(LETTERS), unique = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object, for example a matrix, array or data frame. | | `value` | a possible value for `dimnames(x)`: see the ‘Value’ section. | | `sep` | a character string, used to separate `base` symbols and digits in the constructed dimnames. | | `base` | a non-empty `<list>` of character vectors. The list components are used in turn (and recycled when needed) to construct replacements for empty dimnames components. See also the examples. | | `unique` | logical indicating that the dimnames constructed are unique within each dimension in the sense of `<make.unique>`. | ### Details The functions `dimnames` and `dimnames<-` are generic. For an `<array>` (and hence in particular, for a `<matrix>`), they retrieve or set the `dimnames` attribute (see <attributes>) of the object. A list `value` can have names, and these will be used to label the dimensions of the array where appropriate. The replacement method for arrays/matrices coerces vector and factor elements of `value` to character, but does not dispatch methods for `as.character`. It coerces zero-length elements to `NULL`, and a zero-length list to `NULL`. If `value` is a list shorter than the number of dimensions, it is extended with `NULL`s to the needed length. Both have methods for data frames. The dimnames of a data frame are its `<row.names>` and its `<names>`. For the replacement method each component of `value` will be coerced by `[as.character](character)`. For a 1D matrix the `<names>` are the same thing as the (only) component of the `dimnames`. Both are <primitive> functions. `provideDimnames(x)` provides `dimnames` where “missing”, such that its result has `<character>` dimnames for each component. If `unique` is true as by default, they are unique within each component via `<make.unique>(*, sep=sep)`. ### Value The dimnames of a matrix or array can be `NULL` (which is not stored) or a list of the same length as `dim(x)`. If a list, its components are either `NULL` or a character vector with positive length of the appropriate dimension of `x`. The list can have names. It is possible that all components are `NULL`: such dimnames may get converted to `NULL`. For the `"data.frame"` method both dimnames are character vectors, and the rownames must contain no duplicates nor missing values. 
`provideDimnames(x)` returns `x`, with “`NULL` - free” `dimnames`, i.e. each component a character vector of correct length. ### Note Setting components of the dimnames, e.g., `dimnames(A)[[1]] <- value` is a common paradigm, but note that it will not work if the value assigned is `NULL`. Use `[rownames](colnames)` instead, or (as it does) manipulate the whole dimnames list. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[rownames](colnames)`, `<colnames>`; `<array>`, `<matrix>`, `<data.frame>`. ### Examples ``` ## simple versions of rownames and colnames ## could be defined as follows rownames0 <- function(x) dimnames(x)[[1]] colnames0 <- function(x) dimnames(x)[[2]] (dn <- dimnames(A <- provideDimnames(N <- array(1:24, dim = 2:4)))) A0 <- A; dimnames(A)[2:3] <- list(NULL) stopifnot(identical(A0, provideDimnames(A))) strd <- function(x) utils::str(dimnames(x)) strd(provideDimnames(A, base= list(letters[-(1:9)], tail(LETTERS)))) strd(provideDimnames(N, base= list(letters[-(1:9)], tail(LETTERS)))) # recycling strd(provideDimnames(A, base= list(c("AA","BB")))) # recycling on both levels ## set "empty dimnames": provideDimnames(rbind(1, 2:3), base = list(""), unique=FALSE) ``` r None `date` System Date and Time ---------------------------- ### Description Returns a character string of the current system date and time. ### Usage ``` date() ``` ### Value The string has the form `"Fri Aug 20 11:11:00 1999"`, i.e., length 24, since it relies on POSIX's `ctime` ensuring the above fixed format. Timezone and Daylight Saving Time are taken account of, but *not* indicated in the result. The day and month abbreviations are always in English, irrespective of locale. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[Sys.Date](sys.time)` and `[Sys.time](sys.time)`; `[Date](dates)` and `[DateTimeClasses](datetimeclasses)` for objects representing date and time. ### Examples ``` (d <- date()) nchar(d) == 24 ## something similar in the current locale format(Sys.time(), "%a %b %d %H:%M:%S %Y") ``` r None `Version` Version Information ------------------------------ ### Description `R.Version()` provides detailed information about the version of **R** running. `R.version` is a variable (a `<list>`) holding this information (and `version` is a copy of it for S compatibility). ### Usage ``` R.Version() R.version R.version.string version ``` ### Details This gives details of the OS under which **R** was built, not the one under which it is currently running (for which see `[Sys.info](sys.info)`). Note that OS names might not be what you expect: for example macOS Mavericks 10.9.4 identifies itself as darwin13.3.0, Linux usually as linux-gnu and Solaris 10 as solaris2.10. ### Value `R.Version` returns a list with character-string components | | | | --- | --- | | `platform` | the platform for which **R** was built. A triplet of the form CPU-VENDOR-OS, as determined by the configure script. E.g, `"i686-unknown-linux-gnu"` or `"i386-pc-mingw32"`. | | `arch` | the architecture (CPU) **R** was built on/for. | | `os` | the underlying operating system. | | `system` | CPU and OS, separated by a comma. 
| | `status` | the status of the version (e.g., `"alpha"`) | | `major` | the major version number | | `minor` | the minor version number, including the patchlevel | | `year` | the year the version was released | | `month` | the month the version was released | | `day` | the day the version was released | | `svn rev` | the Subversion revision number, which should be either `"unknown"` or a single number. (A range of numbers or a number with M or S appended indicates inconsistencies in the sources used to build this version of **R**.) | | `language` | always `"R"`. | | `version.string` | a `<character>` string concatenating some of the info above, useful for plotting, etc. |

`R.version` and `version` are lists of class `"simple.list"` which has a `print` method.

### Note

Do *not* use `R.version$os` to test the platform the code is running on: use `.Platform$OS.type` instead. Slightly different versions of the OS may report different values of `R.version$os`, as may different versions of **R**. `R.version.string` is a copy of `R.version$version.string` for simplicity and backwards compatibility.

### See Also

`[sessionInfo](../../utils/html/sessioninfo)` which provides additional information; `[getRversion](numeric_version)` typically used inside R code, `[.Platform](platform)`, `[Sys.info](sys.info)`.

### Examples

```
require(graphics)
R.version$os # to check how lucky you are ...
plot(0) # any plot
mtext(R.version.string, side = 1, line = 4, adj = 1) # a useful bottom-right note
## a good way to detect macOS:
if(grepl("^darwin", R.version$os)) message("running on macOS")
## Short R version string, ("space free", useful in file/directory names;
## also fine for unreleased versions of R):
shortRversion <- function() {
  rvs <- R.version.string
  if(grepl("devel", (st <- R.version$status)))
    rvs <- sub(paste0(" ",st," "), "-devel_", rvs, fixed=TRUE)
  gsub("[()]", "", gsub(" ", "_", sub(" version ", "-", rvs)))
}
shortRversion()
```

r None `logical` Logical Vectors
--------------------------

### Description

Create or test for objects of type `"logical"`, and the basic logical constants.

### Usage

```
TRUE
FALSE
T; F
logical(length = 0)
as.logical(x, ...)
is.logical(x)
```

### Arguments

| | | | --- | --- | | `length` | A non-negative integer specifying the desired length. Double values will be coerced to integer: supplying an argument of length other than one is an error. | | `x` | object to be coerced or tested. | | `...` | further arguments passed to or from other methods. |

### Details

`TRUE` and `FALSE` are <reserved> words denoting logical constants in the **R** language, whereas `T` and `F` are global variables whose initial values are set to these. All four are `logical(1)` vectors. Logical vectors are coerced to integer vectors in contexts where a numerical value is required, with `TRUE` being mapped to `1L`, `FALSE` to `0L` and `NA` to `NA_integer_`.

### Value

`logical` creates a logical vector of the specified length. Each element of the vector is equal to `FALSE`. `as.logical` attempts to coerce its argument to be of logical type. In numeric and complex vectors, zeros are `FALSE` and non-zero values are `TRUE`. For `<factor>`s, this uses the `<levels>` (labels). Like `[as.vector](vector)` it strips attributes including names. Character strings `c("T", "TRUE", "True", "true")` are regarded as true, `c("F", "FALSE", "False", "false")` as false, and all others as `NA`. `is.logical` returns `TRUE` or `FALSE` depending on whether its argument is of logical type or not.

### References

Becker, R.
A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[NA](na)`, the other logical constant. Logical operators are documented in `[Logic](logic)`. ### Examples ``` ## non-zero values are TRUE as.logical(c(pi,0)) if (length(letters)) cat("26 is TRUE\n") ## logical interpretation of particular strings charvec <- c("FALSE", "F", "False", "false", "fAlse", "0", "TRUE", "T", "True", "true", "tRue", "1") as.logical(charvec) ## factors are converted via their levels, so string conversion is used as.logical(factor(charvec)) as.logical(factor(c(0,1))) # "0" and "1" give NA ```
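One practical caveat, sketched below: since `T` and `F` are ordinary variables rather than reserved words, they can be masked by assignment, so spelling out `TRUE` and `FALSE` is safer in programmatic code:

```
T <- 0          # legal: T is just a global variable
isTRUE(T)       # FALSE -- T no longer means TRUE
## TRUE <- 0    # would be an error: TRUE is a reserved word
rm(T)           # the binding in base (TRUE) becomes visible again
isTRUE(T)       # TRUE
```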
r None `encodeString` Encode Character Vector as for Printing ------------------------------------------------------- ### Description `encodeString` escapes the strings in a character vector in the same way `print.default` does, and optionally fits the encoded strings within a field width. ### Usage ``` encodeString(x, width = 0, quote = "", na.encode = TRUE, justify = c("left", "right", "centre", "none")) ``` ### Arguments | | | | --- | --- | | `x` | A character vector, or an object that can be coerced to one by `[as.character](character)`. | | `width` | integer: the minimum field width. If `NULL` or `NA`, this is taken to be the largest field width needed for any element of `x`. | | `quote` | character: quoting character, if any. | | `na.encode` | logical: should `NA` strings be encoded? | | `justify` | character: partial matches are allowed. If padding to the minimum field width is needed, how should spaces be inserted? `justify == "none"` is equivalent to `width = 0`, for consistency with `format.default`. | ### Details This escapes backslash and the control characters \a (bell), \b (backspace), \f (formfeed), \n (line feed), \r (carriage return), \t (tab) and \v (vertical tab) as well as any non-printable characters in a single-byte locale, which are printed in octal notation (\xyz with leading zeroes). Which characters are non-printable depends on the current locale. Windows' reporting of printable characters is unreliable, so there all other control characters are regarded as non-printable, and all characters with codes 32–255 as printable in a single-byte locale. See `<print.default>` for how non-printable characters are handled in multi-byte locales. If `quote` is a single or double quote any embedded quote of the same type is escaped. Note that justification is of the quoted string, hence spaces are added outside the quotes. ### Value A character vector of the same length as `x`, with the same attributes (including names and dimensions) but with no class set. Marked UTF-8 encodings are preserved. ### Note The default for `width` is different from `format.default`, which does similar things for character vectors but without encoding using escapes. ### See Also `<print.default>` ### Examples ``` x <- "ab\bc\ndef" print(x) cat(x) # interprets escapes cat(encodeString(x), "\n", sep = "") # similar to print() factor(x) # makes use of this to print the levels x <- c("a", "ab", "abcde") encodeString(x) # width = 0: use as little as possible encodeString(x, 2) # use two or more (left justified) encodeString(x, width = NA) # left justification encodeString(x, width = NA, justify = "c") encodeString(x, width = NA, justify = "r") encodeString(x, width = NA, quote = "'", justify = "r") ``` r None `get` Return the Value of a Named Object ----------------------------------------- ### Description Search by name for an object (`get`) or zero or more objects (`mget`). ### Usage ``` get(x, pos = -1, envir = as.environment(pos), mode = "any", inherits = TRUE) mget(x, envir = as.environment(-1), mode = "any", ifnotfound, inherits = FALSE) dynGet(x, ifnotfound = , minframe = 1L, inherits = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | For `get`, an object name (given as a character string or a symbol). For `mget`, a character vector of object names. | | `pos, envir` | where to look for the object (see ‘Details’); if omitted search as if the name of the object appeared unquoted in an expression. | | `mode` | the mode or type of object sought: see the ‘Details’ section. 
| | `inherits` | should the enclosing frames of the environment be searched? | | `ifnotfound` | For `mget`, a `<list>` of values to be used if the item is not found: it will be coerced to a list if necessary. For `dynGet` any **R** object, e.g., a call to `<stop>()`. | | `minframe` | integer specifying the minimal frame number to look into. | ### Details The `pos` argument can specify the environment in which to look for the object in any of several ways: as a positive integer (the position in the `<search>` list); as the character string name of an element in the search list; or as an `<environment>` (including using `[sys.frame](sys.parent)` to access the currently active function calls). The default of `-1` indicates the current environment of the call to `get`. The `envir` argument is an alternative way to specify an environment. These functions look to see if each of the name(s) `x` have a value bound to it in the specified environment. If `inherits` is `TRUE` and a value is not found for `x` in the specified environment, the enclosing frames of the environment are searched until the name `x` is encountered. See `<environment>` and the ‘R Language Definition’ manual for details about the structure of environments and their enclosures. If `mode` is specified then only objects of that type are sought. `mode` here is a mixture of the meanings of `<typeof>` and `<mode>`: `"function"` covers primitive functions and operators, `"numeric"`, `"integer"` and `"double"` all refer to any numeric type, `"symbol"` and `"name"` are equivalent *but* `"language"` must be used (and not `"call"` or `"("`). For `mget`, the values of `mode` and `ifnotfound` can be either the same length as `x` or of length 1. The argument `ifnotfound` must be a list containing either the value to use if the requested item is not found or a function of one argument which will be called if the item is not found, with argument the name of the item being requested. `dynGet()` is somewhat experimental and to be used *inside* another function. It looks for an object in the callers, i.e., the `[sys.frame](sys.parent)()`s of the function. Use with caution. ### Value For `get`, the object found. If no object is found an error results. For `mget`, a named list of objects (found or specified *via* `ifnotfound`). ### Note The reverse (or “inverse”) of `a <- get(nam)` is `<assign>(nam, a)`, assigning `a` to name `nam`. `inherits = TRUE` is the default for `get` in **R** but not for S where it had a different meaning. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<exists>` for checking whether an object exists; `[get0](exists)` for an efficient way of both checking existence and getting an object. `<assign>`, the inverse of `get()`, see above. Use `[getAnywhere](../../utils/html/getanywhere)` for searching for an object anywhere, including in other namespaces, and `[getFromNamespace](../../utils/html/getfromnamespace)` to find an object in a specific namespace. ### Examples ``` get("%o%") ## test mget e1 <- new.env() mget(letters, e1, ifnotfound = as.list(LETTERS)) ``` r None `libcurlVersion` Report Version of libcurl ------------------------------------------- ### Description Report version of `libcurl` in use. ### Usage ``` libcurlVersion() ``` ### Value A character string, with value the `libcurl` version in use or `""` if none is. 
If `libcurl` is available, the value has attributes

| | | | --- | --- | | `ssl_version` | A character string naming the SSL/TLS implementation and version, possibly `"none"`. It is intended for the version of OpenSSL used, but not all implementations of `libcurl` use OpenSSL — for example macOS reports `"SecureTransport"`, its wrapper for SSL/TLS. | | `libssh_version` | A character string naming the `libssh` version, which may or may not be available (it is used for e.g. `scp` and `sftp` protocols). Where present, something like `"libssh2/1.5.0"`. | | `protocols` | A character vector of the names of supported protocols, also known as ‘schemes’ when part of a URL. |

### Warning

In late 2017 a `libcurl` installation was seen divided into two libraries, `libcurl` and `libcurl-feature`, and the first had been updated but not the second. As the compiled function recording the version was in the latter, the version reported by `libcurlVersion` was misleading.

### See Also

`[extSoftVersion](extsoftversion)` for versions of other third-party software. `[curlGetHeaders](curlgetheaders)`, `[download.file](../../utils/html/download.file)` and `[url](connections)` for functions which (optionally) use `libcurl`. <https://curl.se/docs/sslcerts.html> and <https://curl.se/docs/ssl-compared.html> for more details on SSL versions (the current standard being known as TLS). Normally `libcurl` used with **R** uses SecureTransport on macOS, OpenSSL on Windows and GnuTLS, NSS or OpenSSL on Unix-alikes. (At the time of writing Debian-based Linuxen use GnuTLS and RedHat-based ones use NSS, but it has been announced that Fedora 27 will switch to OpenSSL.)

### Examples

```
libcurlVersion()
```

r None `La_library` LAPACK Library
----------------------------

### Description

Report the name of the shared object file with `LAPACK` implementation in use.

### Usage

```
La_library()
```

### Value

A character vector of length one (`""` when the name is not known). The value can be used as an indication of which `LAPACK` implementation is in use. Typically, the **R** version of `LAPACK` will appear as `libRlapack.so` (`libRlapack.dylib`), depending on how R was built. Note that `libRlapack.so` (`libRlapack.dylib`) may also be shown for an external `LAPACK` implementation that had been copied, hard-linked or renamed by the system administrator. Otherwise, the shared object file will be given and its path/name may indicate the vendor/version. The detection does not work on Windows.

### See Also

`[extSoftVersion](extsoftversion)` for versions of other third-party software including `BLAS`. `[La\_version](la_version)` for the version of LAPACK in use.

### Examples

```
La_library()
```

r None `complex` Complex Numbers and Basic Functionality
--------------------------------------------------

### Description

Basic functions which support complex arithmetic in **R**, in addition to the arithmetic operators `+`, `-`, `*`, `/`, and `^`.

### Usage

```
complex(length.out = 0, real = numeric(), imaginary = numeric(),
        modulus = 1, argument = 0)
as.complex(x, ...)
is.complex(x)
Re(z)
Im(z)
Mod(z)
Arg(z)
Conj(z)
```

### Arguments

| | | | --- | --- | | `length.out` | numeric. Desired length of the output vector, inputs being recycled as needed. | | `real` | numeric vector. | | `imaginary` | numeric vector. | | `modulus` | numeric vector. | | `argument` | numeric vector. | | `x` | an object, probably of mode `complex`. | | `z` | an object of mode `complex`, or one of a class for which a method has been defined.
| | `...` | further arguments passed to or from other methods. | ### Details Complex vectors can be created with `complex`. The vector can be specified either by giving its length, its real and imaginary parts, or modulus and argument. (Giving just the length generates a vector of complex zeroes.) `as.complex` attempts to coerce its argument to be of complex type: like `[as.vector](vector)` it strips attributes including names. Up to **R** versions 3.2.x, all forms of `NA` and `NaN` were coerced to a complex `NA`, i.e., the `[NA\_complex\_](na)` constant, for which both the real and imaginary parts are `NA`. Since **R** 3.3.0, typically only objects which are `NA` in parts are coerced to complex `NA`, but others with `NaN` parts, are *not*. As a consequence, complex arithmetic where only `NaN`'s (but no `NA`'s) are involved typically will *not* give complex `NA` but complex numbers with real or imaginary parts of `NaN`. Note that `is.complex` and `is.numeric` are never both `TRUE`. The functions `Re`, `Im`, `Mod`, `Arg` and `Conj` have their usual interpretation as returning the real part, imaginary part, modulus, argument and complex conjugate for complex values. The modulus and argument are also called the *polar coordinates*. If *z = x + i y* with real *x* and *y*, for *r = Mod(z) = √(x^2 + y^2)*, and *φ = Arg(z)*, *x = r\*cos(φ)* and *y = r\*sin(φ)*. They are all [internal generic](internalmethods) <primitive> functions: methods can be defined for them individually or *via* the `[Complex](groupgeneric)` group generic. In addition to the arithmetic operators (see [Arithmetic](arithmetic)) `+`, `-`, `*`, `/`, and `^`, the elementary trigonometric, logarithmic, exponential, square root and hyperbolic functions are implemented for complex values. Matrix multiplications (`[%\*%](matmult)`, `<crossprod>`, `[tcrossprod](crossprod)`) are also defined for complex matrices (`<matrix>`), and so are `<solve>`, `<eigen>` or `<svd>`. Internally, complex numbers are stored as a pair of <double> precision numbers, either or both of which can be `[NaN](is.finite)` (including `NA`, see `[NA\_complex\_](na)` and above) or plus or minus infinity. ### S4 methods `as.complex` is primitive and can have S4 methods set. `Re`, `Im`, `Mod`, `Arg` and `Conj` constitute the S4 group generic `[Complex](../../methods/html/s4groupgeneric)` and so S4 methods can be set for them individually or via the group generic. ### Note Operations and functions involving complex `[NaN](is.finite)` mostly rely on the C library's handling of double complex arithmetic, which typically returns `complex(re=NaN, im=NaN)` (but we have not seen a guarantee for that). For `+` and `-`, **R**'s own handling works strictly “coordinate wise”. Operations involving complex `NA`, i.e., `[NA\_complex\_](na)`, return `[NA\_complex\_](na)`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[Arithmetic](arithmetic)`; `<polyroot>` finds all *n* complex roots of a polynomial of degree *n*. ### Examples ``` require(graphics) 0i ^ (-3:3) matrix(1i^ (-6:5), nrow = 4) #- all columns are the same 0 ^ 1i # a complex NaN ## create a complex normal vector z <- complex(real = stats::rnorm(100), imaginary = stats::rnorm(100)) ## or also (less efficiently): z2 <- 1:2 + 1i*(8:9) ## The Arg(.) 
is an angle: zz <- (rep(1:4, length.out = 9) + 1i*(9:1))/10 zz.shift <- complex(modulus = Mod(zz), argument = Arg(zz) + pi) plot(zz, xlim = c(-1,1), ylim = c(-1,1), col = "red", asp = 1, main = expression(paste("Rotation by "," ", pi == 180^o))) abline(h = 0, v = 0, col = "blue", lty = 3) points(zz.shift, col = "orange") showC <- function(z) noquote(sprintf("(R = %g, I = %g)", Re(z), Im(z))) ## The exact result of this *depends* on the platform, compiler, math-library: (NpNA <- NaN + NA_complex_) ; str(NpNA) # *behaves* as 'cplx NA' .. stopifnot(is.na(NpNA), is.na(NA_complex_), is.na(Re(NA_complex_)), is.na(Im(NA_complex_))) showC(NpNA)# but not always is {shows '(R = NaN, I = NA)' on some platforms} ## and this is not TRUE everywhere: identical(NpNA, NA_complex_) showC(NA_complex_) # always == (R = NA, I = NA) ``` r None `args` Argument List of a Function ----------------------------------- ### Description Displays the argument names and corresponding default values of a function or primitive. ### Usage ``` args(name) ``` ### Arguments | | | | --- | --- | | `name` | a function (a closure or a primitive). If `name` is a character string then the function with that name is found and used. | ### Details This function is mainly used interactively to print the argument list of a function. For programming, consider using `<formals>` instead. ### Value For a closure, a closure with identical formal argument list but an empty (`NULL`) body. For a primitive, a closure with the documented usage and `NULL` body. Note that some primitives do not make use of named arguments and match by position rather than name. `NULL` in case of a non-function. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<formals>`, `[help](../../utils/html/help)`; `[str](../../utils/html/str)` also prints the argument list of a function. ### Examples ``` ## "regular" (non-primitive) functions "print their arguments" ## (by returning another function with NULL body which you also see): args(ls) args(graphics::plot.default) utils::str(ls) # (just "prints": does not show a NULL) ## You can also pass a string naming a function. args("scan") ## ...but :: package specification doesn't work in this case. tryCatch(args("graphics::plot.default"), error = print) ## As explained above, args() gives a function with empty body: list(is.f = is.function(args(scan)), body = body(args(scan))) ## Primitive functions mostly behave like non-primitive functions. args(c) args(`+`) ## primitive functions without well-defined argument list return NULL: args(`if`) ``` r None `seek` Functions to Reposition Connections ------------------------------------------- ### Description Functions to re-position connections. ### Usage ``` seek(con, ...) ## S3 method for class 'connection' seek(con, where = NA, origin = "start", rw = "", ...) isSeekable(con) truncate(con, ...) ``` ### Arguments | | | | --- | --- | | `con` | a [connection](connections). | | `where` | numeric. A file position (relative to the origin specified by `origin`), or `NA`. | | `rw` | character string. Empty or `"read"` or `"write"`, partial matches allowed. | | `origin` | character string. One of `"start"`, `"current"`, `"end"`: see ‘Details’. | | `...` | further arguments passed to or from other methods. 
|

### Details

`seek` with `where = NA` returns the current byte offset of a connection (from the beginning), and with a non-missing `where` argument the connection is re-positioned (if possible) to the specified position. `isSeekable` returns whether the connection in principle supports `seek`: currently only (possibly gz-compressed) file connections do. `where` is stored as a real but should represent an integer: non-integer values are likely to be truncated. Note that the possible values can exceed the largest representable number in an **R** `integer` on 64-bit builds, and on some 32-bit builds.

File connections can be open for both reading and writing/appending, in which case **R** keeps separate positions for reading and writing. Which `seek` refers to can be set by its `rw` argument: the default is the last mode (reading or writing) which was used. Most files are only opened for reading or writing and so default to that state. If a file is open for both reading and writing but has not been used, the default is to give the reading position (0). The initial file position for reading is always at the beginning. The initial position for writing is at the beginning of the file for modes `"r+"` and `"r+b"`, otherwise at the end of the file. Some platforms only allow writing at the end of the file in the append modes. (The reported write position for a file opened in an append mode will typically be unreliable until the file has been written to.)

`gzfile` connections support `seek` with a number of limitations, using the file position of the uncompressed file. They do not support `origin = "end"`. When writing, seeking is only possible forwards: when reading seeking backwards is supported by rewinding the file and re-reading from its start. If `seek` is called with a non-`NA` value of `where`, any pushback on a text-mode connection is discarded. `truncate` truncates a file opened for writing at its current position. It works only for `file` connections, and is not implemented on all platforms: on others (including Windows) it will not work for large (> 2Gb) files. None of these should be expected to work on text-mode connections with re-encoding selected.

### Value

`seek` returns the current position (before any move), as a (numeric) byte offset from the origin, if relevant, or `0` if not. Note that the position can exceed the largest representable number in an **R** `integer` on 64-bit builds, and on some 32-bit builds. `truncate` returns `NULL`: it stops with an error if it fails (or is not implemented). `isSeekable` returns a logical value, whether the connection supports `seek`.

### Warning

Use of `seek` on Windows is discouraged. We have found so many errors in the Windows implementation of file positioning that users are advised to use it only at their own risk, and asked not to waste the **R** developers' time with bug reports on Windows' deficiencies.

### See Also

`<connections>`
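### Examples

A minimal sketch of repositioning within a temporary binary file (the file and byte values are arbitrary):

```
tf <- tempfile()
con <- file(tf, "r+b")                # open for both reading and writing
writeBin(as.raw(1:10), con)           # write ten bytes
seek(con, where = NA, rw = "write")   # current write position: 10
seek(con, where = 2, rw = "read")     # returns 0, moves read position to byte 2
readBin(con, "raw", n = 3)            # reads the bytes with values 3, 4 and 5
isSeekable(con)                       # TRUE for file connections
close(con); unlink(tf)
```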
r None `outer` Outer Product of Arrays
--------------------------------

### Description

The outer product of the arrays `X` and `Y` is the array `A` with dimension `c(dim(X), dim(Y))` where element `A[c(arrayindex.x, arrayindex.y)] = FUN(X[arrayindex.x], Y[arrayindex.y], ...)`.

### Usage

```
outer(X, Y, FUN = "*", ...)
X %o% Y
```

### Arguments

| | | | --- | --- | | `X, Y` | First and second arguments for function `FUN`. Typically a vector or array. | | `FUN` | a function to use on the outer products, found *via* `<match.fun>` (except for the special case `"*"`). | | `...` | optional arguments to be passed to `FUN`. |

### Details

`X` and `Y` must be suitable arguments for `FUN`. Each will be extended by `<rep>` to the length given by the product of the lengths of `X` and `Y` before `FUN` is called. `FUN` is called with these two extended vectors as arguments (plus any arguments in `...`). It must be a vectorized function (or the name of one) expecting at least two arguments and returning a value with the same length as the first (and the second). Where they exist, the [dim]names of `X` and `Y` will be copied to the answer, and a dimension assigned which is the concatenation of the dimensions of `X` and `Y` (or lengths if dimensions do not exist). `FUN = "*"` is handled as a special case *via* `as.vector(X) %*% t(as.vector(Y))`, and is intended only for numeric vectors and arrays. `%o%` is a binary operator providing a wrapper for `outer(x, y, "*")`.

### Author(s)

Jonathan Rougier

### References

Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.

### See Also

`[%\*%](matmult)` for usual (*inner*) matrix vector multiplication; `<kronecker>` which is based on `outer`; `[Vectorize](vectorize)` for vectorizing a non-vectorized function.

### Examples

```
x <- 1:9; names(x) <- x # Multiplication & Power Tables
x %o% x
y <- 2:8; names(y) <- paste(y,":", sep = "")
outer(y, x, "^")
outer(month.abb, 1999:2003, FUN = "paste")
## three way multiplication table:
x %o% x %o% y[1:3]
```

r None `trace` Interactive Tracing and Debugging of Calls to a Function or Method
---------------------------------------------------------------------------

### Description

A call to `trace` allows you to insert debugging code (e.g., a call to `<browser>` or `[recover](../../utils/html/recover)`) at chosen places in any function. A call to `untrace` cancels the tracing. Specified methods can be traced the same way, without tracing all calls to the generic function. Trace code (`tracer`) can be any **R** expression. Tracing can be temporarily turned on or off globally by calling `tracingState`.

### Usage

```
trace(what, tracer, exit, at, print, signature,
      where = topenv(parent.frame()), edit = FALSE)
untrace(what, signature = NULL, where = topenv(parent.frame()))
tracingState(on = NULL)
.doTrace(expr, msg)
returnValue(default = NULL)
```

### Arguments

| | | | --- | --- | | `what` | the name, possibly `[quote](substitute)()`d, of a function to be traced or untraced. For `untrace` or for `trace` with more than one argument, more than one name can be given in the quoted form, and the same action will be applied to each one. For “hidden” functions such as S3 methods in a namespace, `where = *` typically needs to be specified as well. | | `tracer` | either a <function> or an unevaluated expression. The function will be called or the expression will be evaluated either at the beginning of the call, or before those steps in the call specified by the argument `at`. See the details section.
| | `exit` | either a `<function>` or an unevaluated expression. The function will be called or the expression will be evaluated on exiting the function. See the details section. | | `at` | optional numeric vector or list. If supplied, `tracer` will be called just before the corresponding step in the body of the function. See the details section. | | `print` | If `TRUE` (as per default), a descriptive line is printed before any trace expression is evaluated. | | `signature` | If this argument is supplied, it should be a signature for a method for function `what`. In this case, the method, and *not* the function itself, is traced. | | `edit` | For complicated tracing, such as tracing within a loop inside the function, you will need to insert the desired calls by editing the body of the function. If so, supply the `edit` argument either as `TRUE`, or as the name of the editor you want to use. Then `trace()` will call `[edit](../../utils/html/edit)` and use the version of the function after you edit it. See the details section for additional information. | | `where` | where to look for the function to be traced; by default, the top-level environment of the call to `trace`. An important use of this argument is to trace functions from a package which are “hidden” or called from another package. The namespace mechanism imports the functions to be called (with the exception of functions in the base package). The functions being called are *not* the same objects seen from the top-level (in general, the imported packages may not even be attached). Therefore, you must ensure that the correct versions are being traced. The way to do this is to set argument `where` to a function in the namespace (or that namespace). The tracing computations will then start looking in the environment of that function (which will be the namespace of the corresponding package). (Yes, it's subtle, but the semantics here are central to how namespaces work in **R**.) | | `on` | logical; a call to the support function `tracingState` returns `TRUE` if tracing is globally turned on, `FALSE` otherwise. An argument of one or the other of those values sets the state. If the tracing state is `FALSE`, none of the trace actions will actually occur (used, for example, by debugging functions to shut off tracing during debugging). | | `expr, msg` | arguments to the support function `.doTrace`, calls to which are inserted into the modified function or method: `expr` is the tracing action (such as a call to `browser()`), and `msg` is a string identifying the place where the trace action occurs. | | `default` | If `returnValue` finds no return value (e.g. a function exited because of an error, restart or as a result of evaluating a return from a caller function), it will return `default` instead. | ### Details The `trace` function operates by constructing a revised version of the function (or of the method, if `signature` is supplied), and assigning the new object back where the original was found. If only the `what` argument is given, a line of trace printing is produced for each call to the function (back compatible with the earlier version of `trace`). The object constructed by `trace` is from a class that extends `"function"` and which contains the original, untraced version. A call to `untrace` re-assigns this version. If the argument `tracer` or `exit` is the name of a function, the tracing expression will be a call to that function, with no arguments. 
This is the easiest and most common case, with the functions `<browser>` and `[recover](../../utils/html/recover)` the likeliest candidates; the former browses in the frame of the function being traced, and the latter allows browsing in any of the currently active calls. The arguments `tracer` and `exit` are evaluated to see whether they are functions, but only their names are used in the tracing expressions. The lookup is done again when the traced function executes, so it may not be `tracer` or `exit` that will be called while tracing. The `tracer` or `exit` argument can also be an unevaluated expression (such as returned by a call to `[quote](substitute)` or `<substitute>`). This expression itself is inserted in the traced function, so it will typically involve arguments or local objects in the traced function. An expression of this form is useful if you only want to interact when certain conditions apply (and in this case you probably want to supply `print = FALSE` in the call to `trace` also). When the `at` argument is supplied, it can be a vector of integers referring to the substeps of the body of the function (this only works if the body of the function is enclosed in `{ ...}`). In this case `tracer` is *not* called on entry, but instead just before evaluating each of the steps listed in `at`. (Hint: you don't want to try to count the steps in the printed version of a function; instead, look at `as.list(body(f))` to get the numbers associated with the steps in function `f`.) The `at` argument can also be a list of integer vectors. In this case, each vector refers to a step nested within another step of the function. For example, `at = list(c(3,4))` will call the tracer just before the fourth step of the third step of the function. See the example below. Using `[setBreakpoint](../../utils/html/findlinenum)` (from package utils) may be an alternative, calling `trace(...., at, ...)`. The `exit` argument is called during `<on.exit>` processing. In an `on.exit` expression, the experimental `returnValue()` function may be called to obtain the value about to be returned by the function. Calling this function in other circumstances will give undefined results. An intrinsic limitation in the `exit` argument is that it won't work if the function itself uses `on.exit` with `add= FALSE` (the default), since the existing calls will override the one supplied by `trace`. Tracing does not nest. Any call to `trace` replaces previously traced versions of that function or method (except for edited versions as discussed below), and `untrace` always restores an untraced version. (Allowing nested tracing has too many potentials for confusion and for accidentally leaving traced versions behind.) When the `edit` argument is used repeatedly with no call to `untrace` on the same function or method in between, the previously edited version is retained. If you want to throw away all the previous tracing and then edit, call `untrace` before the next call to `trace`. Editing may be combined with automatic tracing; just supply the other arguments such as `tracer`, and the `edit` argument as well. The `edit = TRUE` argument uses the default editor (see `[edit](../../utils/html/edit)`). Tracing primitive functions (builtins and specials) from the base package works, but only by a special mechanism and not very informatively. Tracing a primitive causes the primitive to be replaced by a function with argument ... (only). You can get a bit of information out, but not much. 
A warning message is issued when `trace` is used on a primitive. The practice of saving the traced version of the function back where the function came from means that tracing carries over from one session to another, *if* the traced function is saved in the session image. (In the next session, `untrace` will remove the tracing.) On the other hand, functions that were in a package, not in the global environment, are not saved in the image, so tracing expires with the session for such functions. Tracing an S4 method is basically just like tracing a function, with the exception that the traced version is stored by a call to `[setMethod](../../methods/html/setmethod)` rather than by direct assignment, and so is the untraced version after a call to `untrace`. The version of `trace` described here is largely compatible with the version in S-Plus, although the two work by entirely different mechanisms. The S-Plus `trace` uses the session frame, with the result that tracing never carries over from one session to another (**R** does not have a session frame). Another relevant distinction has nothing directly to do with `trace`: The browser in S-Plus allows changes to be made to the frame being browsed, and the changes will persist after exiting the browser. The **R** browser allows changes, but they disappear when the browser exits. This may be relevant in that the S-Plus version allows you to experiment with code changes interactively, but the **R** version does not. (A future revision may include a ‘destructive’ browser for **R**.) ### Value In the simple version (just the first argument), `trace` returns an invisible `NULL`. Otherwise, it returns the name(s) of the traced function(s); the relevant consequence is the assignment that takes place. `untrace` returns the function name invisibly. `tracingState` returns the current global tracing state, and possibly changes it. When called during `on.exit` processing, `returnValue` returns the value about to be returned by the exiting function. Behaviour in other circumstances is undefined. ### Note Using `trace()` is conceptually a generalization of `<debug>`, implemented differently: namely, by calling `<browser>` via its `tracer` or `exit` argument. The version of function tracing that includes any of the arguments except for the function name requires the methods package (because it uses special classes of objects to store and restore versions of the traced functions). If methods dispatch is not currently on, `trace` will load the methods namespace, but will not put the methods package on the `<search>` list. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<browser>` and `[recover](../../utils/html/recover)`, the likeliest tracing functions; also, `[quote](substitute)` and `<substitute>` for constructing general expressions.
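The `returnValue()` mechanism described under ‘Details’ and ‘Value’ is not exercised by the examples below; the following minimal sketch (an added illustration, using a throwaway function `f2`) shows it in action:

```
f2 <- function(x) x^2
## report the value about to be returned, during on.exit processing
trace(f2, exit = quote(cat("returning", returnValue(NA), "\n")), print = FALSE)
f2(3) # prints "returning 9" just before f2() returns
untrace(f2)
```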
### Examples ``` require(stats) ## Very simple use trace(sum) hist(rnorm(100)) # shows about 3-4 calls to sum() untrace(sum) ## Show how pt() is called from inside power.t.test(): if(FALSE) trace(pt) ## would show ~20 calls, but we want to see more: trace(pt, tracer = quote(cat(sprintf("tracing pt(*, ncp = %.15g)\n", ncp))), print = FALSE) # <- not showing typical extra power.t.test(20, 1, power=0.8, sd=NULL) ##--> showing the ncp root finding: untrace(pt) f <- function(x, y) { y <- pmax(y, 0.001) if (x > 0) x ^ y else stop("x must be positive") } ## arrange to call the browser on entering and exiting ## function f trace("f", quote(browser(skipCalls = 4)), exit = quote(browser(skipCalls = 4))) ## instead, conditionally assign some data, and then browse ## on exit, but only then. Don't bother me otherwise trace("f", quote(if(any(y < 0)) yOrig <- y), exit = quote(if(exists("yOrig")) browser(skipCalls = 4)), print = FALSE) ## Enter the browser just before stop() is called. First, find ## the step numbers untrace(f) # (as it has changed f's body !) as.list(body(f)) as.list(body(f)[[3]]) # -> stop(..) is [[4]] ## Now call the browser there trace("f", quote(browser(skipCalls = 4)), at = list(c(3,4))) ## Not run: f(-1,2) # --> enters browser just before stop(..) ## End(Not run) ## trace a utility function, with recover so we ## can browse in the calling functions as well. trace("as.matrix", recover) ## turn off the tracing (that happened above) untrace(c("f", "as.matrix")) ## Not run: ## Useful to find how system2() is called in a higher-up function: trace(base::system2, quote(print(ls.str()))) ## End(Not run) ##-------- Tracing hidden functions : need 'where = *' ## ## 'where' can be a function whose environment is meant: trace(quote(ar.yw.default), where = ar) a <- ar(rnorm(100)) # "Tracing ..." untrace(quote(ar.yw.default), where = ar) ## trace() more than one function simultaneously: ## expression(E1, E2, ...) here is equivalent to ## c(quote(E1), quote(E2), quote(.*), ..) trace(expression(ar.yw, ar.yw.default), where = ar) a <- ar(rnorm(100)) # --> 2 x "Tracing ..." # and turn it off: untrace(expression(ar.yw, ar.yw.default), where = ar) ## Not run: ## trace calls to the function lm() that come from ## the nlme package. ## (The function nlme is in that package, and the package ## has a namespace, so the where= argument must be used ## to get the right version of lm) trace(lm, exit = recover, where = asNamespace("nlme")) ## End(Not run) ``` r None `EnvVar` Environment Variables ------------------------------- ### Description Details of some of the environment variables which affect an **R** session. ### Details It is impossible to list all the environment variables which can affect an **R** session: some affect the OS system functions which **R** uses, and others will affect add-on packages. But here are notes on some of the more important ones. Those that set the defaults for options are consulted only at startup (as are some of the others). HOME: The user's ‘home’ directory. LANGUAGE: Optional. The language(s) to be used for message translations. This is consulted when needed. LC\_ALL: (etc) Optional. Use to set various aspects of the locale – see `[Sys.getlocale](locales)`. Consulted at startup. MAKEINDEX: The path to `makeindex`. If unset, a value determined when **R** was built is used. Used by the emulation mode of `[texi2dvi](../../tools/html/texi2dvi)` and `[texi2pdf](../../tools/html/texi2dvi)`.
R\_BATCH: Optional – set in a batch session, that is one started by `R CMD [BATCH](../../utils/html/batch)`. Most often set to `""`, so test by something like `!is.na([Sys.getenv](sys.getenv)("R_BATCH", NA))`. R\_BROWSER: The path to the default browser. Used to set the default value of `<options>("browser")`. R\_COMPLETION: Optional. If set to `FALSE`, command-line completion is not used. (Not used by the macOS GUI.) R\_DEFAULT\_PACKAGES: A comma-separated list of packages which are to be attached in every session. See `<options>`. R\_DOC\_DIR: The location of the **R** ‘doc’ directory. Set by **R**. R\_ENVIRON: Optional. The path to the site environment file: see [Startup](startup). Consulted at startup. R\_GSCMD: Optional. The path to Ghostscript, used by `[dev2bitmap](../../grdevices/html/dev2bitmap)`, `[bitmap](../../grdevices/html/dev2bitmap)` and `[embedFonts](../../grdevices/html/embedfonts)`. Consulted when those functions are invoked. Since it will be treated as if passed to `<system>`, spaces and shell metacharacters should be escaped. R\_HISTFILE: Optional. The path of the history file: see [Startup](startup). Consulted at startup and when the history is saved. R\_HISTSIZE: Optional. The maximum size of the history file, in lines. Exactly how this is used depends on the interface. On Unix-alikes, for the `readline` command-line interface it takes effect when the history is saved (by `[savehistory](../../utils/html/savehistory)` or at the end of a session). On Windows, for `Rgui` it controls the number of lines saved to the history file: the size of the history used in the session is controlled by the console customization: see `[Rconsole](../../utils/html/rconsole)`. R\_HOME: The top-level directory of the **R** installation: see `[R.home](rhome)`. Set by **R**. R\_INCLUDE\_DIR: The location of the **R** ‘include’ directory. Set by **R**. R\_LIBS: Optional. Used for initial setting of `[.libPaths](libpaths)`. R\_LIBS\_SITE: Optional. Used for initial setting of `[.libPaths](libpaths)`. R\_LIBS\_USER: Optional. Used for initial setting of `[.libPaths](libpaths)`. R\_PAPERSIZE: Optional. Used to set the default for `<options>("papersize")`, e.g. used by `[pdf](../../grdevices/html/pdf)` and `[postscript](../../grdevices/html/postscript)`. R\_PCRE\_JIT\_STACK\_MAXSIZE: Optional. Consulted when PCRE's JIT pattern compiler is first used. See `<grep>`. R\_PDFVIEWER: The path to the default PDF viewer. Used by `R CMD Rd2pdf`. R\_PLATFORM: The platform – a string of the form `cpu-vendor-os`, see `[R.Version](version)`. R\_PROFILE: Optional. The path to the site profile file: see [Startup](startup). Consulted at startup. R\_RD4PDF: Options for `pdflatex` processing of `Rd` files. Used by `R CMD Rd2pdf`. R\_SHARE\_DIR: The location of the **R** ‘share’ directory. Set by **R**. R\_TEXI2DVICMD: The path to `texi2dvi`. Defaults to the value of TEXI2DVI, and if that is unset to a value determined when **R** was built. Only on Unix-alikes: Consulted at startup to set the default for `<options>("texi2dvi")`, used by `[texi2dvi](../../tools/html/texi2dvi)` and `[texi2pdf](../../tools/html/texi2dvi)` in package tools. R\_UNZIPCMD: The path to `unzip`. Sets the initial value for `<options>("unzip")` on a Unix-alike when namespace utils is loaded. R\_ZIPCMD: The path to `zip`. Used by `[zip](../../utils/html/zip)` and by `R CMD INSTALL --build` on Windows. TMPDIR, TMP, TEMP: Consulted (in that order) when setting the temporary directory for the session: see `[tempdir](tempfile)`. 
TMPDIR is also used by some of the utilities: see the help for `[build](../../utils/html/pkgutils)`. TZ: Optional. The current time zone. See `[Sys.timezone](timezones)` for the system-specific formats. Consulted as needed. no\_proxy, http\_proxy, ftp\_proxy: (and more). Optional. Settings for `[download.file](../../utils/html/download.file)`: see its help for further details. ### Unix-specific Some variables set on Unix-alikes, and not (in general) on Windows. DISPLAY: Optional: used by `[X11](../../grdevices/html/x11)`, Tk (in package tcltk), the data editor and various packages. EDITOR: The path to the default editor: sets the default for `<options>("editor")` when namespace utils is loaded. PAGER: The path to the pager with the default setting of `<options>("pager")`. The default value is chosen at configuration, usually as the path to `less`. R\_PRINTCMD: Sets the default for `<options>("printcmd")`, which sets the default print command to be used by `[postscript](../../grdevices/html/postscript)`. R\_SUPPORT\_OLD\_TARS: logical. Sets the default for the `support_old_tars` argument of `[untar](../../utils/html/untar)`. Should be set to `TRUE` if an old system `tar` command is used which does not support either `xz` compression or automagically detecting compression type. ### Windows-specific Some Windows-specific variables are GSC: Optional: the path to Ghostscript, used if R\_GSCMD is not set. R\_USER: The user's ‘home’ directory. Set by **R**. (HOME will be set to the same value if not already set.) TZDIR: Optional. The top-level directory of the time-zone database. See `[Sys.timezone](timezones)`. ### See Also `[Sys.getenv](sys.getenv)` and `[Sys.setenv](sys.setenv)` to read and set environment variables in an **R** session. `<gctorture>` for environment variables controlling garbage collection.
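As an added illustration (not part of the original page), the variables described above can be queried and modified for the current session with `[Sys.getenv](sys.getenv)` and `[Sys.setenv](sys.setenv)`; unset variables are returned as `""` unless another `unset` value is supplied:

```
## query a few of the variables described above
Sys.getenv(c("HOME", "R_HOME", "R_PAPERSIZE", "TZ"))
## the test suggested above for a batch session: R_BATCH may be set but empty
!is.na(Sys.getenv("R_BATCH", NA))
## set and unset a variable for this session only
Sys.setenv(R_PAPERSIZE = "a4")
Sys.unsetenv("R_PAPERSIZE")
```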
r None `function` Function Definition ------------------------------- ### Description These functions provide the base mechanisms for defining new functions in the **R** language. ### Usage ``` function( arglist ) expr \( arglist ) expr return(value) ``` ### Arguments | | | | --- | --- | | `arglist` | Empty or one or more name or name=expression terms. | | `expr` | An expression. | | `value` | An expression. | ### Details The names in an argument list can be back-quoted non-standard names (see ‘[backquote](quotes)’). If `value` is missing, `NULL` is returned. If it is a single expression, the value of the evaluated expression is returned. (The expression is evaluated as soon as `return` is called, in the evaluation frame of the function and before any `<on.exit>` expression is evaluated.) If the end of a function is reached without calling `return`, the value of the last evaluated expression is returned. The shorthand form `\(x) x + 1` is parsed as `function(x) x + 1`. It may be helpful in making code containing simple function expressions more readable. ### Technical details This type of function is not the only type in **R**: they are called *closures* (a name with origins in LISP) to distinguish them from <primitive> functions. A closure has three components, its `<formals>` (its argument list), its `<body>` (`expr` in the ‘Usage’ section) and its `<environment>` which provides the enclosure of the evaluation frame when the closure is used. There is an optional further component if the closure has been byte-compiled. This is not normally user-visible, but is indicated when functions are printed. ### Note **The shorthand function notation is experimental and may change prior to release.** ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<args>`. `<formals>`, `<body>` and `<environment>` for accessing the component parts of a function. `<debug>` for debugging; using `<invisible>` inside `return(.)` for returning *invisibly*. ### Examples ``` norm <- function(x) sqrt(x%*%x) norm(1:4) ## An anonymous function: (function(x, y){ z <- x^2 + y^2; x+y+z })(0:7, 1) ``` r None `zMachine` Numerical Characteristics of the Machine ---------------------------------------------------- ### Description `.Machine` is a variable holding information on the numerical characteristics of the machine **R** is running on, such as the largest double or integer and the machine's precision. ### Usage ``` .Machine ``` ### Details The algorithm is based on Cody's (1988) subroutine MACHAR. As all current implementations of **R** use 32-bit integers and use IEC 60559 floating-point (double precision) arithmetic, the `"integer"` and `"double"` related values are the same for almost all **R** builds. Note that on most platforms smaller positive values than `.Machine$double.xmin` can occur. On a typical **R** platform the smallest positive double is about `5e-324`. ### Value A list with components | | | | --- | --- | | `double.eps` | the smallest positive floating-point number `x` such that `1 + x != 1`. It equals `double.base ^ double.ulp.digits` if either `double.base` is 2 or `double.rounding` is 0; otherwise, it is `(double.base ^ double.ulp.digits) / 2`. Normally `2.220446e-16`. | | `double.neg.eps` | a small positive floating-point number `x` such that `1 - x != 1`. It equals `double.base ^ double.neg.ulp.digits` if `double.base` is 2 or `double.rounding` is 0; otherwise, it is `(double.base ^ double.neg.ulp.digits) / 2`.
Normally `1.110223e-16`. As `double.neg.ulp.digits` is bounded below by `-(double.digits + 3)`, `double.neg.eps` may not be the smallest number that can alter 1 by subtraction. | | `double.xmin` | the smallest non-zero normalized floating-point number, a power of the radix, i.e., `double.base ^ double.min.exp`. Normally `2.225074e-308`. | | `double.xmax` | the largest normalized floating-point number. Typically, it is equal to `(1 - double.neg.eps) * double.base ^ double.max.exp`, but on some machines it is only the second or third largest such number, being too small by 1 or 2 units in the last digit of the significand. Normally `1.797693e+308`. Note that larger unnormalized numbers can occur. | | `double.base` | the radix for the floating-point representation: normally `2`. | | `double.digits` | the number of base digits in the floating-point significand: normally `53`. | | `double.rounding` | the rounding action, one of 0 if floating-point addition chops; 1 if floating-point addition rounds, but not in the IEEE style; 2 if floating-point addition rounds in the IEEE style; 3 if floating-point addition chops, and there is partial underflow; 4 if floating-point addition rounds, but not in the IEEE style, and there is partial underflow; 5 if floating-point addition rounds in the IEEE style, and there is partial underflow. Normally `5`. | | `double.guard` | the number of guard digits for multiplication with truncating arithmetic. It is 1 if floating-point arithmetic truncates and more than `double digits` base-`double.base` digits participate in the post-normalization shift of the floating-point significand in multiplication, and 0 otherwise. Normally `0`. | | `double.ulp.digits` | the largest negative integer `i` such that `1 + double.base ^ i != 1`, except that it is bounded below by `-(double.digits + 3)`. Normally `-52`. | | `double.neg.ulp.digits` | the largest negative integer `i` such that `1 - double.base ^ i != 1`, except that it is bounded below by `-(double.digits + 3)`. Normally `-53`. | | `double.exponent` | the number of bits (decimal places if `double.base` is 10) reserved for the representation of the exponent (including the bias or sign) of a floating-point number. Normally `11`. | | `double.min.exp` | the largest in magnitude negative integer `i` such that `double.base ^ i` is positive and normalized. Normally `-1022`. | | `double.max.exp` | the smallest positive power of `double.base` that overflows. Normally `1024`. | | `integer.max` | the largest integer which can be represented. Always *2^31 - 1 = 2147483647*. | | `sizeof.long` | the number of bytes in a C `long` type: `4` or `8` (most 64-bit systems, but not Windows). | | `sizeof.longlong` | the number of bytes in a C `long long` type. Will be zero if there is no such type, otherwise usually `8`. | | `sizeof.longdouble` | the number of bytes in a C `long double` type. Will be zero if there is no such type (or its use was disabled when **R** was built), otherwise possibly `12` (most 32-bit builds) or `16` (most 64-bit builds). | | `sizeof.pointer` | the number of bytes in a C `SEXP` type. Will be `4` on 32-bit builds and `8` on 64-bit builds of **R**. | | `longdouble.eps, longdouble.neg.eps, longdouble.digits, ...` | when `<capabilities>("long.double")` is true, there are 10 such `"longdouble.<kind>"` values, specifying the `long double` property corresponding to its `"double.*"` counterpart, above, see also ‘Note’. 
| ### Note In the (typical) case where `<capabilities>("long.double")` is true, **R** uses the `long double` C type in quite a few places internally for accumulators in e.g. `<sum>`, reading non-integer numeric constants into (binary) double precision numbers, or arithmetic such as `x %% y`; also, `long double` can be read by `[readBin](readbin)`. For this reason, in that case, `.Machine` contains ten further components, `longdouble.eps`, `*.neg.eps`, `*.digits`, `*.rounding`, `*.guard`, `*.ulp.digits`, `*.neg.ulp.digits`, `*.exponent`, `*.min.exp`, and `*.max.exp`, computed entirely analogously to their `double.*` counterparts, see there. `sizeof.longdouble` only tells you the amount of storage allocated for a long double. Often what is stored is the 80-bit extended double type of IEC 60559, padded to the double alignment used on the platform — this seems to be the case for the common **R** platforms using ix86 and x86\_64 chips. Note that it is legal for a platform to have a `long double` C type which is identical to the `double` type — this happens on ARM cpus. In that case `<capabilities>("long.double")` will be false but `.Machine` may contain `"longdouble.<kind>"` elements. ### Source Uses a C translation of Fortran code in the reference, modified by the R Core Team to defeat over-optimization in modern compilers. ### References Cody, W. J. (1988). MACHAR: A subroutine to dynamically determine machine parameters. *Transactions on Mathematical Software*, **14**(4), 303–311. doi: [10.1145/50063.51907](https://doi.org/10.1145/50063.51907). ### See Also `[.Platform](platform)` for details of the platform. ### Examples ``` .Machine ## or for a neat printout noquote(unlist(format(.Machine))) ``` r None `diff` Lagged Differences -------------------------- ### Description Returns suitably lagged and iterated differences. ### Usage ``` diff(x, ...) ## Default S3 method: diff(x, lag = 1, differences = 1, ...) ## S3 method for class 'POSIXt' diff(x, lag = 1, differences = 1, ...) ## S3 method for class 'Date' diff(x, lag = 1, differences = 1, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector or matrix containing the values to be differenced. | | `lag` | an integer indicating which lag to use. | | `differences` | an integer indicating the order of the difference. | | `...` | further arguments to be passed to or from methods. | ### Details `diff` is a generic function with a default method and ones for classes `"[ts](../../stats/html/ts)"`, `"[POSIXt](datetimeclasses)"` and `"[Date](dates)"`. `[NA](na)`'s propagate. ### Value If `x` is a vector of length `n` and `differences = 1`, then the computed result is equal to the successive differences `x[(1+lag):n] - x[1:(n-lag)]`. If `differences` is larger than one this algorithm is applied recursively to `x`. Note that the returned value is a vector which is shorter than `x`. If `x` is a matrix then the difference operations are carried out on each column separately. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[diff.ts](../../stats/html/ts-methods)`, `[diffinv](../../stats/html/diffinv)`. ### Examples ``` diff(1:10, 2) diff(1:10, 2, 2) x <- cumsum(cumsum(1:10)) diff(x, lag = 2) diff(x, differences = 2) diff(.leap.seconds) ``` r None `sQuote` Quote Text -------------------- ### Description Single or double quote text by combining with appropriate single or double left and right quotation marks.
### Usage ``` sQuote(x, q = getOption("useFancyQuotes")) dQuote(x, q = getOption("useFancyQuotes")) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object, to be coerced to a character vector. | | `q` | the kind of quotes to be used, see ‘Details’. | ### Details The purpose of the functions is to provide a simple means of markup for quoting text to be used in the R output, e.g., in warnings or error messages. The choice of the appropriate quotation marks depends on both the locale and the available character sets. Older Unix/X11 fonts displayed the grave accent (ASCII code 0x60) and the apostrophe (0x27) in a way that they could also be used as matching open and close single quotation marks. Using modern fonts, or non-Unix systems, these characters no longer produce matching glyphs. Unicode provides left and right single quotation mark characters (U+2018 and U+2019); if Unicode markup cannot be assumed to be available, it seems good practice to use the apostrophe as a non-directional single quotation mark. Similarly, Unicode has left and right double quotation mark characters (U+201C and U+201D); if only ASCII's typewriter characteristics can be employed, then the ASCII quotation mark (0x22) should be used as both the left and right double quotation mark. Some other locales also have the directional quotation marks, notably on Windows. TeX uses grave and apostrophe for the directional single quotation marks, and doubled grave and doubled apostrophe for the directional double quotation marks. What rendering is used depends on `q` which by default depends on the `<options>` setting for `useFancyQuotes`. If this is `FALSE` then the undirectional ASCII quotation style is used. If this is `TRUE` (the default), Unicode directional quotes are used where available (currently, UTF-8 locales on Unix-alikes and all Windows locales except `C`): if set to `"UTF-8"` UTF-8 markup is used (whatever the current locale). If set to `"TeX"`, TeX-style markup is used. Finally, if this is set to a character vector of length four, the first two entries are used for beginning and ending single quotes and the second two for beginning and ending double quotes: this can be used to implement non-English quoting conventions such as the use of guillemets. Where fancy quotes are used, you should be aware that they may not be rendered correctly as not all fonts include the requisite glyphs: for example some have directional single quotes but not directional double quotes. ### Value A character vector of the same length as `x` (after any coercion) in the current locale's encoding. ### References Markus Kuhn, “ASCII and Unicode quotation marks”. <https://www.cl.cam.ac.uk/~mgk25/ucs/quotes.html> ### See Also [Quotes](quotes) for quoting **R** code. `[shQuote](shquote)` for quoting OS commands.
### Examples ``` op <- options("useFancyQuotes") paste("argument", sQuote("x"), "must be non-zero") options(useFancyQuotes = FALSE) cat("\ndistinguish plain", sQuote("single"), "and", dQuote("double"), "quotes\n") options(useFancyQuotes = TRUE) cat("\ndistinguish fancy", sQuote("single"), "and", dQuote("double"), "quotes\n") options(useFancyQuotes = "TeX") cat("\ndistinguish TeX", sQuote("single"), "and", dQuote("double"), "quotes\n") if(l10n_info()$`Latin-1`) { options(useFancyQuotes = c("\xab", "\xbb", "\xbf", "?")) cat("\n", sQuote("guillemet"), "and", dQuote("Spanish question"), "styles\n") } else if(l10n_info()$`UTF-8`) { options(useFancyQuotes = c("\xc2\xab", "\xc2\xbb", "\xc2\xbf", "?")) cat("\n", sQuote("guillemet"), "and", dQuote("Spanish question"), "styles\n") } options(op) ``` r None `labels` Find Labels from Object --------------------------------- ### Description Find a suitable set of labels from an object for use in printing or plotting, for example. A generic function. ### Usage ``` labels(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | Any **R** object: the function is generic. | | `...` | further arguments passed to or from other methods. | ### Value A character vector or list of such vectors. For a vector the result is the names or `seq_along(x)` and for a data frame or array it is the dimnames (with `NULL` expanded to `seq_len(d[i])`). ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. r None `summary` Object Summaries --------------------------- ### Description `summary` is a generic function used to produce summaries of the results of various model fitting functions. The function invokes particular `[methods](../../utils/html/methods)` which depend on the `<class>` of the first argument. ### Usage ``` summary(object, ...) ## Default S3 method: summary(object, ..., digits, quantile.type = 7) ## S3 method for class 'data.frame' summary(object, maxsum = 7, digits = max(3, getOption("digits")-3), ...) ## S3 method for class 'factor' summary(object, maxsum = 100, ...) ## S3 method for class 'matrix' summary(object, ...) ## S3 method for class 'summaryDefault' format(x, digits = max(3L, getOption("digits") - 3L), ...) ## S3 method for class 'summaryDefault' print(x, digits = max(3L, getOption("digits") - 3L), ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which a summary is desired. | | `x` | a result of the *default* method of `summary()`. | | `maxsum` | integer, indicating how many levels should be shown for `<factor>`s. | | `digits` | integer, used for number formatting with `[signif](round)()` (for `summary.default`) or `<format>()` (for `summary.data.frame`). In `summary.default`, if not specified (i.e., `<missing>(.)`), `signif()` will *not* be called anymore (since **R** >= 3.4.0, where the default has been changed to only round in the `print` and `format` methods). | | | | | --- | --- | | `quantile.type` | integer code used in `quantile(*, type=quantile.type)` for the default method. | | `...` | additional arguments affecting the summary produced. | ### Details For `<factor>`s, the frequency of the first `maxsum - 1` most frequent levels is shown, and the less frequent levels are summarized in `"(Others)"` (resulting in at most `maxsum` frequencies). The functions `summary.lm` and `summary.glm` are examples of particular methods which summarize the results produced by `[lm](../../stats/html/lm)` and `[glm](../../stats/html/glm)`.
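As an added sketch of the `maxsum` truncation just described (not from the original page), with `maxsum = 3` only the two most frequent levels are shown individually:

```
f <- factor(c(rep("a", 5), rep("b", 3), "c", "d", "e"))
summary(f, maxsum = 3) # the maxsum - 1 = 2 most frequent levels, then "(Others)"
##        a        b (Others)
##        5        3        3
```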
### Value The form of the value returned by `summary` depends on the class of its argument. See the documentation of the particular methods for details of what is produced by that method. The default method returns an object of class `c("summaryDefault", "<table>")` which has specialized `<format>` and `<print>` methods. The `<factor>` method returns an integer vector. The matrix and data frame methods return a matrix of class `"<table>"`, obtained by applying `summary` to each column and collating the results. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `[anova](../../stats/html/anova)`, `[summary.glm](../../stats/html/summary.glm)`, `[summary.lm](../../stats/html/summary.lm)`. ### Examples ``` summary(attenu, digits = 4) #-> summary.data.frame(...), default precision summary(attenu $ station, maxsum = 20) #-> summary.factor(...) lst <- unclass(attenu$station) > 20 # logical with NAs ## summary.default() for logicals -- different from *.factor: summary(lst) summary(as.factor(lst)) ``` r None `connections` Functions to Manipulate Connections (Files, URLs, ...) --------------------------------------------------------------------- ### Description Functions to create, open and close connections, i.e., “generalized files”, such as possibly compressed files, URLs, pipes, etc. ### Usage ``` file(description = "", open = "", blocking = TRUE, encoding = getOption("encoding"), raw = FALSE, method = getOption("url.method", "default")) url(description, open = "", blocking = TRUE, encoding = getOption("encoding"), method = getOption("url.method", "default"), headers = NULL) gzfile(description, open = "", encoding = getOption("encoding"), compression = 6) bzfile(description, open = "", encoding = getOption("encoding"), compression = 9) xzfile(description, open = "", encoding = getOption("encoding"), compression = 6) unz(description, filename, open = "", encoding = getOption("encoding")) pipe(description, open = "", encoding = getOption("encoding")) fifo(description, open = "", blocking = FALSE, encoding = getOption("encoding")) socketConnection(host = "localhost", port, server = FALSE, blocking = FALSE, open = "a+", encoding = getOption("encoding"), timeout = getOption("timeout"), options = getOption("socketOptions")) serverSocket(port) socketAccept(socket, blocking = FALSE, open = "a+", encoding = getOption("encoding"), timeout = getOption("timeout"), options = getOption("socketOptions")) open(con, ...) ## S3 method for class 'connection' open(con, open = "r", blocking = TRUE, ...) close(con, ...) ## S3 method for class 'connection' close(con, type = "rw", ...) flush(con) isOpen(con, rw = "") isIncomplete(con) socketTimeout(socket, timeout = -1) ``` ### Arguments | | | | --- | --- | | `description` | character string. A description of the connection: see ‘Details’. | | `open` | character string. A description of how to open the connection (if it should be opened initially). See section ‘Modes’ for possible values. | | `blocking` | logical. See the ‘Blocking’ section. | | `encoding` | The name of the encoding to be assumed. See the ‘Encoding’ section. | | `raw` | logical. If true, a ‘raw’ interface is used which will be more suitable for arguments which are not regular files, e.g. character devices. This suppresses the check for a compressed file when opening for text-mode reading, and asserts that the ‘file’ may not be seekable. 
| | `method` | character string, partially matched to `c("default", "internal", "wininet", "libcurl")`: see ‘Details’. | | `headers` | named character vector of HTTP headers to use in HTTP requests. It is ignored for non-HTTP URLs. The `User-Agent` header, coming from the `HTTPUserAgent` option (see `<options>`) is used as the first header, automatically. | | `compression` | integer in 0–9. The amount of compression to be applied when writing, from none to maximal available. For `xzfile` can also be negative: see the ‘Compression’ section. | | `timeout` | numeric: the timeout (in seconds) to be used for this connection. Beware that some OSes may treat very large values as zero: however the POSIX standard requires values up to 31 days to be supported. | | `options` | optional character vector with options. Currently only `"no-delay"` is supported on TCP sockets. | | `filename` | a filename within a zip file. | | `host` | character string. Host name for the port. | | `port` | integer. The TCP port number. | | `server` | logical. Should the socket be a client or a server? | | `socket` | a server socket listening for connections. | | `con` | a connection. | | `type` | character string. Currently ignored. | | `rw` | character string. Empty or `"read"` or `"write"`, partial matches allowed. | | `...` | arguments passed to or from other methods. | ### Details The first eleven functions create connections. By default the connection is not opened (except for a socket connection created by `socketConnection` or `socketAccept` and for server socket connection created by `serverSocket`), but may be opened by setting a non-empty value of argument `open`. For `file` the description is a path to the file to be opened or a complete URL (when it is the same as calling `url`), or `""` (the default) or `"clipboard"` (see the ‘Clipboard’ section). Use `"stdin"` to refer to the C-level ‘standard input’ of the process (which need not be connected to anything in a console or embedded version of **R**, and is not in `RGui` on Windows). See also `[stdin](showconnections)()` for the subtly different R-level concept of `stdin`. See `[nullfile](showconnections)()` for a platform-independent way to get filename of the null device. For `url` the description is a complete URL including scheme (such as http://, https://, ftp:// or file://). Method `"internal"` is that available since connections were introduced, method `"wininet"` is only available on Windows (it uses the WinINet functions of that OS) and method `"libcurl"` (using the library of that name: <https://curl.se/libcurl/>) is required on a Unix-alike but optional on Windows. Method `"default"` uses method `"internal"` for file: URLs and `"libcurl"` for `ftps:` URLs. On a Unix-alike it uses `"libcurl"` for http:, https: and ftp: URLs; on Windows `"wininet"` for http:, ftp: and https: URLs. Proxies can be specified: see `[download.file](../../utils/html/download.file)`. For `gzfile` the description is the path to a file compressed by `gzip`: it can also open for reading uncompressed files and those compressed by `bzip2`, `xz` or `lzma`. For `bzfile` the description is the path to a file compressed by `bzip2`. For `xzfile` the description is the path to a file compressed by `xz` (<https://en.wikipedia.org/wiki/Xz>) or (for reading only) `lzma` (<https://en.wikipedia.org/wiki/LZMA>). `unz` reads (only) single files within zip files, in binary mode. The description is the full path to the zip file, with ‘.zip’ extension if required. 
For `pipe` the description is the command line to be piped to or from. This is run in a shell, on Windows that specified by the COMSPEC environment variable. For `fifo` the description is the path of the fifo. (Support for `fifo` connections is optional but they are available on most Unix platforms and on Windows.) The intention is that `file` and `gzfile` can be used generally for text input (from files, http:// and https:// URLs) and binary input respectively. `open`, `close` and `seek` are generic functions: the following applies to the methods relevant to connections. `open` opens a connection. In general functions using connections will open them if they are not open, but then close them again, so to leave a connection open call `open` explicitly. `close` closes and destroys a connection. This will happen automatically in due course (with a warning) if there is no longer an **R** object referring to the connection. A maximum of 128 connections can be allocated (not necessarily open) at any one time. Three of these are pre-allocated (see `[stdout](showconnections)`). The OS will impose limits on the numbers of connections of various types, but these are usually larger than 125. `flush` flushes the output stream of a connection open for write/append (where implemented, currently for file and clipboard connections, `[stdout](showconnections)` and `[stderr](showconnections)`). If for a `file` or (on most platforms) a `fifo` connection the description is `""`, the file/fifo is immediately opened (in `"w+"` mode unless `open = "w+b"` is specified) and unlinked from the file system. This provides a temporary file/fifo to write to and then read from. `socketConnection(server=TRUE)` creates a new temporary server socket listening on the given port. As soon as a new socket connection is accepted on that port, the server socket is automatically closed. `serverSocket` creates a listening server socket which can be used for accepting multiple socket connections by `socketAccept`. To stop listening for new connections, a server socket needs to be closed explicitly by `close`. `socketConnection` and `socketAccept` support setting of socket-specific options. Currently only `"no-delay"` is implemented which enables the `TCP_NODELAY` socket option, causing the socket to flush send buffers immediately (instead of waiting to collect all output before sending). This option is useful for protocols that need fast request/response turn-around times. `socketTimeout` sets connection timeout of a socket connection. A negative `timeout` can be given to query the old value. ### Value `file`, `pipe`, `fifo`, `url`, `gzfile`, `bzfile`, `xzfile`, `unz`, `socketConnection`, `socketAccept` and `serverSocket` return a connection object which inherits from class `"connection"` and has a first more specific class. `open` and `flush` return `NULL`, invisibly. `close` returns either `NULL` or an integer status, invisibly. The status is from when the connection was last closed and is available only for some types of connections (e.g., pipes, files and fifos): typically zero values indicate success. Negative values will result in a warning; if writing, these may indicate write failures and should not be ignored. `isOpen` returns a logical value, whether the connection is currently open. `isIncomplete` returns a logical value, whether the last read attempt was blocked, or for an output text connection whether there is unflushed output. `socketTimeout` returns the old timeout value of a socket connection. 
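The `serverSocket`/`socketAccept` pattern described above does not appear in the ‘Examples’ section below; here is a minimal added sketch (the port number is arbitrary; run the two parts in separate **R** processes):

```
## Not run: 
## process 1: accept two clients on one listening socket
srv <- serverSocket(6013)
con1 <- socketAccept(srv, blocking = TRUE)
con2 <- socketAccept(srv, blocking = TRUE)
close(srv) # stop listening; the accepted connections remain open
writeLines("hello", con1)
writeLines("world", con2)
close(con1); close(con2)
## process 2 (run twice, as the two clients)
con <- socketConnection("localhost", port = 6013, blocking = TRUE)
readLines(con, n = 1)
close(con)
## End(Not run)
```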
### URLs `url` and `file` support URL schemes file://, http://, https:// and ftp://. `method = "libcurl"` allows more schemes: exactly which schemes is platform-dependent (see `[libcurlVersion](libcurlversion)`), but all Unix-alike platforms will support https:// and most platforms will support ftps://. Most methods do not percent-encode special characters such as spaces in http:// URLs (see `[URLencode](../../utils/html/urlencode)`), but it seems the `"wininet"` method does. A note on file:// URLs. The most general form (from RFC1738) is file://host/path/to/file, but **R** only accepts the form with an empty `host` field referring to the local machine. On a Unix-alike, this is then file:///path/to/file, where path/to/file is relative to ‘/’. So although the third slash is strictly part of the specification not part of the path, this can be regarded as a way to specify the file ‘/path/to/file’. It is not possible to specify a relative path using a file URL. In this form the path is relative to the root of the filesystem, not a Windows concept. The standard form on Windows is file:///d:/R/repos: for compatibility with earlier versions of **R** and Unix versions, any other form is parsed by **R** as file:// plus `path_to_file`. Also, backslashes are accepted within the path even though RFC1738 does not allow them. No attempt is made to decode a percent-encoded file: URL: call `[URLdecode](../../utils/html/urlencode)` if necessary. All the methods attempt to follow redirected HTTP URLs, but the `"internal"` method is unable to follow redirections to HTTPS URLs. Server-side cached data is always accepted. Function `[download.file](../../utils/html/download.file)` and several contributed packages provide more comprehensive facilities to download from URLs. ### Modes Possible values for the argument `open` are `"r"` or `"rt"` Open for reading in text mode. `"w"` or `"wt"` Open for writing in text mode. `"a"` or `"at"` Open for appending in text mode. `"rb"` Open for reading in binary mode. `"wb"` Open for writing in binary mode. `"ab"` Open for appending in binary mode. `"r+"`, `"r+b"` Open for reading and writing. `"w+"`, `"w+b"` Open for reading and writing, truncating file initially. `"a+"`, `"a+b"` Open for reading and appending. Not all modes are applicable to all connections: for example URLs can only be opened for reading. Only file and socket connections can be opened for both reading and writing. An unsupported mode is usually silently substituted. If a file or fifo is created on a Unix-alike, its permissions will be the maximal allowed by the current setting of `umask` (see `[Sys.umask](files2)`). For many connections there is little or no difference between text and binary modes. For file-like connections on Windows, translation of line endings (between LF and CRLF) is done in text mode only (but text read operations on connections such as `[readLines](readlines)`, `<scan>` and `<source>` work for any form of line ending). Various **R** operations are possible in only one of the modes: for example `[pushBack](pushback)` is text-oriented and is only allowed on connections open for reading in text mode, and binary operations such as `[readBin](readbin)`, `<load>` and `<save>` can only be done on binary-mode connections. The mode of a connection is determined when actually opened, which is deferred if `open = ""` is given (the default for all but socket connections). An explicit call to `open` can specify the mode, but otherwise the mode will be `"r"`.
(`gzfile`, `bzfile` and `xzfile` connections are exceptions, as the compressed file always has to be opened in binary mode and no conversion of line-endings is done even on Windows, so the default mode is interpreted as `"rb"`.) Most operations that need write access or text-only or binary-only mode will override the default mode of a not-yet-open connection. Append modes need to be considered carefully for compressed-file connections. They do **not** produce a single compressed stream on the file, but rather append a new compressed stream to the file. Readers may or may not read beyond end of the first stream: currently **R** does so for `gzfile`, `bzfile` and `xzfile` connections. ### Compression **R** supports `gzip`, `bzip2` and `xz` compression (also read-only support for its precursor, `lzma` compression). For reading, the type of compression (if any) can be determined from the first few bytes of the file. Thus for `file(raw = FALSE)` connections, if `open` is `""`, `"r"` or `"rt"` the connection can read any of the compressed file types as well as uncompressed files. (Using `"rb"` will allow compressed files to be read byte-by-byte.) Similarly, `gzfile` connections can read any of the forms of compression and uncompressed files in any read mode. (The type of compression is determined when the connection is created if `open` is unspecified and a file of that name exists. If the intention is to open the connection to write a file with a *different* form of compression under that name, specify `open = "w"` when the connection is created or `<unlink>` the file before creating the connection.) For write-mode connections, `compression` specifies how hard the compressor works to minimize the file size, and higher values need more CPU time and more working memory (up to ca 800Mb for `xzfile(compression = 9)`). For `xzfile` negative values of `compression` correspond to adding the `xz` argument -e: this takes more time (double?) to compress but may achieve (slightly) better compression. The default (`6`) has good compression and modest (100Mb memory) usage: but if you are using `xz` compression you are probably looking for high compression. Choosing the type of compression involves tradeoffs: `gzip`, `bzip2` and `xz` are successively less widely supported, need more resources for both compression and decompression, and achieve more compression (although individual files may buck the general trend). Typical experience is that `bzip2` compression is 15% better on text files than `gzip` compression, and `xz` with maximal compression 30% better. The experience with **R** `<save>` files is similar, but on some large ‘.rda’ files `xz` compression is much better than the other two. With current computers decompression times even with `compression = 9` are typically modest and reading compressed files is usually faster than uncompressed ones because of the reduction in disc activity. ### Encoding The encoding of the input/output stream of a connection can be specified by name in the same way as it would be given to `<iconv>`: see that help page for how to find out what encoding names are recognized on your platform. Additionally, `""` and `"native.enc"` both mean the ‘native’ encoding, that is the internal encoding of the current locale and hence no translation is done. When writing to a text connection, the connections code always assumes its input is in native encoding, so e.g. `[writeLines](writelines)` has to convert text to native encoding.
`[writeLines](writelines)` does not do the conversion when `useBytes=TRUE` (for expert use only), but the connections code still behaves as if the text was in native encoding, so any attempt to convert encoding (`encoding` argument other than `""` and `"native.enc"`) in connections will produce incorrect results. When reading from a text connection, the connections code, after re-encoding based on the `encoding` argument, returns text that is assumed to be in native encoding; an encoding mark is only added by functions that read from the connection, so e.g. `[readLines](readlines)` can be instructed to mark the text as `"UTF-8"` or `"latin1"`, but `[readLines](readlines)` does no further conversion. To allow reading text in `"UTF-8"` on a system that cannot represent all such characters in native encoding (currently only Windows), a connection can be internally configured to return the read text in UTF-8 even though it is not the native encoding; currently `[readLines](readlines)` and `<scan>` use this feature when given a connection that is not yet open and, when using the feature, they unconditionally mark the text as `"UTF-8"`. Re-encoding only works for connections in text mode: reading from a connection with re-encoding specified in binary mode will read the stream of bytes, but mixing text and binary mode reads (e.g., mixing calls to `[readLines](readlines)` and `[readChar](readchar)`) is likely to lead to incorrect results. The encodings `"UCS-2LE"` and `"UTF-16LE"` are treated specially, as they are appropriate values for Windows ‘Unicode’ text files. If the first two bytes are the Byte Order Mark `0xFEFF` then these are removed as some implementations of `<iconv>` do not accept BOMs. Note that whereas most implementations will handle BOMs using encoding `"UCS-2"` and choose the appropriate byte order, some (including earlier versions of `glibc`) will not. There is a subtle distinction between `"UTF-16"` and `"UCS-2"` (see <https://en.wikipedia.org/wiki/UTF-16>): the use of characters in the ‘Supplementary Planes’ which need surrogate pairs is very rare so `"UCS-2LE"` is an appropriate first choice (as it is more widely implemented). As from **R** 3.0.0 the encoding `"UTF-8-BOM"` is accepted for reading and will remove a Byte Order Mark if present (which it often is for files and webpages generated by Microsoft applications). If a BOM is required (it is not recommended) when writing it should be written explicitly, e.g. by `writeChar("\ufeff", con, eos = NULL)` or `writeBin(as.raw(c(0xef, 0xbb, 0xbf)), binary_con)` Encoding names `"utf8"`, `"mac"` and `"macroman"` are not portable, and not supported on all current **R** platforms. `"UTF-8"` is portable and `"macintosh"` is the official (and most widely supported) name for ‘Mac Roman’. (As from **R** 3.4.0, **R** maps `"utf8"` to `"UTF-8"` internally.) Requesting a conversion that is not supported is an error, reported when the connection is opened. Exactly what happens when the requested translation cannot be done for invalid input is in general undocumented. On output the result is likely to be that up to the error, with a warning. On input, it will most likely be all or some of the input up to the error. It may be possible to deduce the current native encoding from `[Sys.getlocale](locales)("LC_CTYPE")`, but not all OSes record it. ### Blocking Whether or not the connection blocks can be specified for file, url (default yes), fifo and socket connections (default not). 
In blocking mode, functions using the connection do not return to the **R** evaluator until the read/write is complete. In non-blocking mode, operations return as soon as possible, so on input they will return with whatever input is available (possibly none) and for output they will return whether or not the write succeeded. The function `[readLines](readlines)` behaves differently in respect of incomplete last lines in the two modes: see its help page. Even when a connection is in blocking mode, attempts are made to ensure that it does not block the event loop and hence the operation of GUI parts of **R**. These do not always succeed, and the whole **R** process will be blocked during a DNS lookup on Unix, for example. Most blocking operations on HTTP/FTP URLs and on sockets are subject to the timeout set by `options("timeout")`. Note that this is a timeout for no response, not for the whole operation. The timeout is set at the time the connection is opened (more precisely, when the last connection of that type – http:, ftp: or socket – was opened). ### Fifos Fifos default to non-blocking. That follows S version 4 and is probably most natural, but it does have some implications. In particular, opening a non-blocking fifo connection for writing (only) will fail unless some other process is reading on the fifo. Opening a fifo for both reading and writing (in any mode: one can only append to fifos) connects both sides of the fifo to the **R** process, and provides a similar facility to `file()`. ### Clipboard `file` can be used with `description = "clipboard"` in mode `"r"` only. This reads the X11 primary selection (see <https://specifications.freedesktop.org/clipboards-spec/clipboards-latest.txt>), which can also be specified as `"X11_primary"` and the secondary selection as `"X11_secondary"`. On most systems the clipboard selection (that used by ‘Copy’ from an ‘Edit’ menu) can be specified as `"X11_clipboard"`. When a clipboard is opened for reading, the contents are immediately copied to internal storage in the connection. Unix users wishing to *write* to one of the X11 selections may be able to do so via `xclip` (<https://github.com/astrand/xclip>) or `xsel` (<http://www.vergenet.net/~conrad/software/xsel/>), for example by `pipe("xclip -i", "w")` for the primary selection. macOS users can use `pipe("pbpaste")` and `pipe("pbcopy", "w")` to read from and write to that system's clipboard. ### File paths In most cases these are translated to the native encoding. The exceptions are `file` and `pipe` on Windows, where a `description` which is marked as being in UTF-8 is passed to Windows as a ‘wide’ character string. This allows files with names not in the native encoding to be opened on file systems which use Unicode file names (such as NTFS but not FAT32). ### Note **R**'s connections are modelled on those in S version 4 (see Chambers, 1998). However **R** goes well beyond the S model, for example in output text connections and URL, compressed and socket connections. The default open mode in **R** is `"r"` except for socket connections. This differs from S, where it is the equivalent of `"r+"`, known as `"*"`. On (rare) platforms where `vsnprintf` does not return the needed length of output there is a 100,000 byte output limit on the length of a line for text output on `fifo`, `gzfile`, `bzfile` and `xzfile` connections: longer lines will be truncated with a warning. ### References Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language.* Springer. Ripley, B. D. (2001).
“Connections.” *R News*, **1**(1), 16–7. <https://www.r-project.org/doc/Rnews/Rnews_2001-1.pdf>. ### See Also `[textConnection](textconnections)`, `<seek>`, `[showConnections](showconnections)`, `[pushBack](pushback)`. Functions making direct use of connections are (text-mode) `[readLines](readlines)`, `[writeLines](writelines)`, `<cat>`, `<sink>`, `<scan>`, `<parse>`, `[read.dcf](dcf)`, `<dput>`, `<dump>` and (binary-mode) `[readBin](readbin)`, `[readChar](readchar)`, `[writeBin](readbin)`, `[writeChar](readchar)`, `<load>` and `<save>`. `<capabilities>` to see if `fifo` connections are supported by this build of **R**. `<gzcon>` to wrap `gzip` (de)compression around a connection. `<options>` `HTTPUserAgent`, `internet.info` and `timeout` are used by some of the methods for URL connections. `[memCompress](memcompress)` for more ways to (de)compress and references on data compression. `[extSoftVersion](extsoftversion)` for the versions of the `zlib` (for `gzfile`), `bzip2` and `xz` libraries in use. To flush output to the Windows and macOS consoles, see `[flush.console](../../utils/html/flush.console)`. ### Examples ``` zzfil <- tempfile(fileext=".data") zz <- file(zzfil, "w") # open an output file connection cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = zz, sep = "\n") cat("One more line\n", file = zz) close(zz) readLines(zzfil) unlink(zzfil) zzfil <- tempfile(fileext=".gz") zz <- gzfile(zzfil, "w") # compressed file cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = zz, sep = "\n") close(zz) readLines(zz <- gzfile(zzfil)) close(zz) unlink(zzfil) zz # an invalid connection zzfil <- tempfile(fileext=".bz2") zz <- bzfile(zzfil, "w") # bzip2-ed file cat("TITLE extra line", "2 3 5 7", "", "11 13 17", file = zz, sep = "\n") close(zz) zz # print() method: invalid connection print(readLines(zz <- bzfile(zzfil))) close(zz) unlink(zzfil) ## An example of a file open for reading and writing Tpath <- tempfile("test") Tfile <- file(Tpath, "w+") c(isOpen(Tfile, "r"), isOpen(Tfile, "w")) # both TRUE cat("abc\ndef\n", file = Tfile) readLines(Tfile) seek(Tfile, 0, rw = "r") # reset to beginning readLines(Tfile) cat("ghi\n", file = Tfile) readLines(Tfile) Tfile # -> print() : "valid" connection close(Tfile) Tfile # -> print() : "invalid" connection unlink(Tpath) ## We can do the same thing with an anonymous file. Tfile <- file() cat("abc\ndef\n", file = Tfile) readLines(Tfile) close(Tfile) ## Not run: ## fifo example -- may hang even with OS support for fifos if(capabilities("fifo")) { zzfil <- tempfile(fileext="-fifo") zz <- fifo(zzfil, "w+") writeLines("abc", zz) print(readLines(zz)) close(zz) unlink(zzfil) } ## End(Not run) ## Unix examples of use of pipes # read listing of current directory readLines(pipe("ls -1")) # remove trailing commas. 
Suppose ## Not run: % cat data2_ 450, 390, 467, 654, 30, 542, 334, 432, 421, 357, 497, 493, 550, 549, 467, 575, 578, 342, 446, 547, 534, 495, 979, 479 ## End(Not run) # Then read this by scan(pipe("sed -e s/,$// data2_"), sep = ",") # convert decimal point to comma in output: see also write.table # both R strings and (probably) the shell need \ doubled zzfil <- tempfile("outfile") zz <- pipe(paste("sed s/\\\\./,/ >", zzfil), "w") cat(format(round(stats::rnorm(48), 4)), fill = 70, file = zz) close(zz) file.show(zzfil, delete.file = TRUE) ## Not run: ## example for a machine running a finger daemon con <- socketConnection(port = 79, blocking = TRUE) writeLines(paste0(system("whoami", intern = TRUE), "\r"), con) gsub(" *$", "", readLines(con)) close(con) ## End(Not run) ## Not run: ## Two R processes communicating via non-blocking sockets # R process 1 con1 <- socketConnection(port = 6011, server = TRUE) writeLines(LETTERS, con1) close(con1) # R process 2 con2 <- socketConnection(Sys.info()["nodename"], port = 6011) # as non-blocking, may need to loop for input readLines(con2) while(isIncomplete(con2)) { Sys.sleep(1) z <- readLines(con2) if(length(z)) print(z) } close(con2) ## examples of use of encodings # write a file in UTF-8 cat(x, file = (con <- file("foo", "w", encoding = "UTF-8"))); close(con) # read a 'Windows Unicode' file A <- read.table(con <- file("students", encoding = "UCS-2LE")); close(con) ## End(Not run) ```
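One further added sketch, illustrating the compression auto-detection described in the ‘Compression’ section (not part of the original examples):

```
zzfil <- tempfile(fileext = ".gz")
zz <- gzfile(zzfil, "w") # write a gzip-compressed text file
writeLines(c("line 1", "line 2"), zz)
close(zz)
## file() in a text read mode detects the gzip compression automatically
readLines(zz <- file(zzfil))
close(zz)
unlink(zzfil)
```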
r None `try` Try an Expression Allowing Error Recovery ------------------------------------------------ ### Description `try` is a wrapper to run an expression that might fail and allow the user's code to handle error-recovery. ### Usage ``` try(expr, silent = FALSE, outFile = getOption("try.outFile", default = stderr())) ``` ### Arguments | | | | --- | --- | | `expr` | an **R** expression to try. | | `silent` | logical: should the report of error messages be suppressed? | | `outFile` | a [connection](connections), or a character string naming the file to print to (via `<cat>(*, file = outFile)`); used only if `silent` is false, as by default. | ### Details `try` evaluates an expression and traps any errors that occur during the evaluation. If an error occurs then the error message is printed to the `[stderr](showconnections)` connection unless `options("show.error.messages")` is false or the call includes `silent = TRUE`. The error message is also stored in a buffer where it can be retrieved by `geterrmessage`. (This should not be needed as the value returned in case of an error contains the error message.) `try` is implemented using `[tryCatch](conditions)`; for programming, instead of `try(expr, silent = TRUE)`, something like `tryCatch(expr, error = function(e) e)` (or other simple error handler functions) may be more efficient and flexible. It may be useful to set the default for `outFile` to `[stdout](showconnections)()`, i.e., ``` options(try.outFile = stdout()) ``` instead of the default `[stderr](showconnections)()`, notably when `try()` is used inside a `[Sweave](../../utils/html/sweave)` code chunk and the error message should appear in the resulting document. ### Value The value of the expression if `expr` is evaluated without error, but an invisible object of class `"try-error"` containing the error message, and the error condition as the `"condition"` attribute, if it fails. ### See Also `<options>` for setting error handlers and suppressing the printing of error messages; `[geterrmessage](stop)` for retrieving the last error message. The underlying `[tryCatch](conditions)` provides more flexible means of catching and handling errors. `[assertCondition](../../tools/html/assertcondition)` in package tools is related and useful for testing. ### Examples ``` ## this example will not work correctly in example(try), but ## it does work correctly if pasted in options(show.error.messages = FALSE) try(log("a")) print(.Last.value) options(show.error.messages = TRUE) ## alternatively, print(try(log("a"), TRUE)) ## run a simulation, keep only the results that worked. set.seed(123) x <- stats::rnorm(50) doit <- function(x) { x <- sample(x, replace = TRUE) if(length(unique(x)) > 30) mean(x) else stop("too few unique points") } ## alternative 1 res <- lapply(1:100, function(i) try(doit(x), TRUE)) ## alternative 2 ## Not run: res <- vector("list", 100) for(i in 1:100) res[[i]] <- try(doit(x), TRUE) ## End(Not run) unlist(res[sapply(res, function(x) !inherits(x, "try-error"))]) ``` r None `unique` Extract Unique Elements --------------------------------- ### Description `unique` returns a vector, data frame or array like `x` but with duplicate elements/rows removed. ### Usage ``` unique(x, incomparables = FALSE, ...) ## Default S3 method: unique(x, incomparables = FALSE, fromLast = FALSE, nmax = NA, ...) ## S3 method for class 'matrix' unique(x, incomparables = FALSE, MARGIN = 1, fromLast = FALSE, ...) 
## S3 method for class 'array' unique(x, incomparables = FALSE, MARGIN = 1, fromLast = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a vector or a data frame or an array or `NULL`. | | `incomparables` | a vector of values that cannot be compared. `FALSE` is a special value, meaning that all values can be compared, and may be the only value accepted for methods other than the default. It will be coerced internally to the same type as `x`. | | `fromLast` | logical indicating if duplication should be considered from the last, i.e., the last (or rightmost) of identical elements will be kept. This only matters for `<names>` or `<dimnames>`. | | `nmax` | the maximum number of unique items expected (greater than one). See `<duplicated>`. | | `...` | arguments for particular methods. | | `MARGIN` | the array margin to be held fixed: a single integer. | ### Details This is a generic function with methods for vectors, data frames and arrays (including matrices). The array method calculates for each element of the dimension specified by `MARGIN` if the remaining dimensions are identical to those for an earlier element (in row-major order). This would most commonly be used for matrices to find unique rows (the default) or columns (with `MARGIN = 2`). Note that unlike the Unix command `uniq` this omits *duplicated* and not just *repeated* elements/rows. That is, an element is omitted if it is equal to any previous element and not just if it is equal to the immediately previous one. (For the latter, see `<rle>`). Missing values (`"[NA](na)"`) are regarded as equal, numeric and complex ones differing from `NaN`; character strings will be compared in a “common encoding”; for details, see `<match>` (and `<duplicated>`) which use the same concept. Values in `incomparables` will never be marked as duplicated. This is intended to be used for a fairly small set of values and will not be efficient for a very large set. When used on a data frame with more than one column, or an array or matrix when comparing dimensions of length greater than one, this tests for identity of character representations. This will catch people who unwisely rely on exact equality of floating-point numbers! ### Value For a vector, an object of the same type as `x`, but with only one copy of each duplicated element. No attributes are copied (so the result has no names). For a data frame, a data frame is returned with the same columns but possibly fewer rows (and with row names from the first occurrences of the unique rows). A matrix or array is subsetted by `[, drop = FALSE]`, so dimensions and dimnames are copied appropriately, and the result always has the same number of dimensions as `x`. ### Warning Using this for lists is potentially slow, especially if the elements are not atomic vectors (see `<vector>`) or differ only in their attributes. In the worst case it is *O(n^2)*. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<duplicated>` which gives the indices of duplicated elements. `<rle>` which is the equivalent of the Unix `uniq -c` command. ### Examples ``` x <- c(3:5, 11:8, 8 + 0:5) (ux <- unique(x)) (u2 <- unique(x, fromLast = TRUE)) # different order stopifnot(identical(sort(ux), sort(u2))) length(unique(sample(100, 100, replace = TRUE))) ## approximately 100(1 - 1/e) = 63.21 unique(iris) ``` r None `Dates` Date Class ------------------- ### Description Description of the class `"Date"` representing calendar dates.
### Usage ``` ## S3 method for class 'Date' summary(object, digits = 12, ...) ## S3 method for class 'Date' print(x, max = NULL, ...) ``` ### Arguments | | | | --- | --- | | `object, x` | a `Date` object to be summarized or printed. | | `digits` | number of significant digits for the computations. | | `max` | numeric or `NULL`, specifying the maximal number of entries to be printed. By default, when `NULL`, `[getOption](options)("max.print")` is used. | | `...` | further arguments to be passed from or to other methods. | ### Details Dates are represented as the number of days since 1970-01-01, with negative values for earlier dates. They are always printed following the rules of the current Gregorian calendar, even though that calendar was not in use long ago (it was adopted in 1752 in Great Britain and its colonies). It is intended that the date should be an integer, but this is not enforced in the internal representation. Fractional days will be ignored when printing. It is possible to produce fractional days via the `mean` method or by adding or subtracting (see `[Ops.Date](ops.date)`). Of the many methods (see `methods(class = "Date")`), a few are documented separately; see below. ### See Also `[Sys.Date](sys.time)` for the current date. `[weekdays](weekday.posixt)` for convenience extraction functions. Methods with extra arguments and documentation: `[Ops.Date](ops.date)` for operators on `"Date"` objects. `[format.Date](as.date)` for conversion to and from character strings. `[axis.Date](../../graphics/html/axis.posixct)` and `[hist.Date](../../graphics/html/hist.posixt)` for plotting. `[seq.Date](seq.date)`, `[cut.Date](cut.posixt)`, and `[round.Date](round.posixt)` for utility operations. `[DateTimeClasses](datetimeclasses)` for date-time classes. ### Examples ``` (today <- Sys.Date()) format(today, "%d %b %Y") # with month as a word (tenweeks <- seq(today, length.out=10, by="1 week")) # next ten weeks weekdays(today) months(tenweeks) (Dls <- as.Date(.leap.seconds)) ## length(<Date>) <- n now works ls <- Dls; length(ls) <- 12 l2 <- Dls; length(l2) <- 5 + length(Dls) stopifnot(exprs = { ## length(.) <- * is compatible to subsetting/indexing: identical(ls, Dls[seq_along(ls)]) identical(l2, Dls[seq_along(l2)]) ## has filled with NA's is.na(l2[(length(Dls)+1):length(l2)]) }) ``` r None `switch` Select One of a List of Alternatives ---------------------------------------------- ### Description `switch` evaluates `EXPR` and accordingly chooses one of the further arguments (in `...`). ### Usage ``` switch(EXPR, ...) ``` ### Arguments | | | | --- | --- | | `EXPR` | an expression evaluating to a number or a character string. | | `...` | the list of alternatives. If it is intended that `EXPR` has a character-string value these will be named, perhaps except for one alternative to be used as a ‘default’ value. | ### Details `switch` works in two distinct ways depending on whether the first argument evaluates to a character string or a number. If the value of `EXPR` is not a character string it is coerced to integer. Note that this also happens for `<factor>`s, with a warning, as typically the character level is meant. If the integer is between 1 and `nargs()-1` then the corresponding element of `...` is evaluated and the result returned: thus if the first argument is `3` then the fourth argument is evaluated and returned. If `EXPR` evaluates to a character string then that string is matched (exactly) to the names of the elements in `...`.
If there is a match then that element is evaluated unless it is missing, in which case the next non-missing element is evaluated, so for example `switch("cc", a = 1, cc =, cd =, d = 2)` evaluates to `2`. If there is more than one match, the first matching element is used. In the case of no match, if there is an unnamed element of `...` its value is returned. (If there is more than one such argument an error is signaled.) The first argument is always taken to be `EXPR`: if it is named its name must (partially) match. A warning is signaled if no alternatives are provided, as this is usually a coding error. This is implemented as a <primitive> function that only evaluates its first argument and one other if one is selected. ### Value The value of one of the elements of `...`, or `NULL`, invisibly (whenever no element is selected). The result has the visibility (see `<invisible>`) of the element evaluated. ### Warning It is possible to write calls to `switch` that can be confusing and may not work in the same way in earlier versions of **R**. For compatibility (and clarity), always have `EXPR` as the first argument, naming it if partial matching is a possibility. For the character-string form, have a single unnamed argument as the default after the named values. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### Examples ``` require(stats) centre <- function(x, type) { switch(type, mean = mean(x), median = median(x), trimmed = mean(x, trim = .1)) } x <- rcauchy(10) centre(x, "mean") centre(x, "median") centre(x, "trimmed") ccc <- c("b","QQ","a","A","bb") # note: cat() produces no output for NULL for(ch in ccc) cat(ch,":", switch(EXPR = ch, a = 1, b = 2:3), "\n") for(ch in ccc) cat(ch,":", switch(EXPR = ch, a =, A = 1, b = 2:3, "Otherwise: last"),"\n") ## switch(f, *) with a factor f ff <- gl(3,1, labels=LETTERS[3:1]) ff[1] # C ## so one might expect " is C" here, but switch(ff[1], A = "I am A", B="Bb..", C=" is C")# -> "I am A" ## so we give a warning ## Numeric EXPR does not allow a default value to be specified ## -- it is always NULL for(i in c(-1:3, 9)) print(switch(i, 1, 2, 3, 4)) ## visibility switch(1, invisible(pi), pi) switch(2, invisible(pi), pi) ``` r None `pmatch` Partial String Matching --------------------------------- ### Description `pmatch` seeks matches for the elements of its first argument among those of its second. ### Usage ``` pmatch(x, table, nomatch = NA_integer_, duplicates.ok = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | the values to be matched: converted to a character vector by `[as.character](character)`. [Long vectors](longvectors) are supported. | | `table` | the values to be matched against: converted to a character vector. [Long vectors](longvectors) are not supported. | | `nomatch` | the value to be returned at non-matching or multiply partially matching positions. Note that it is coerced to `integer`. | | `duplicates.ok` | should elements in `table` be used more than once? | ### Details The behaviour differs by the value of `duplicates.ok`. Consider first the case where this is true. First exact matches are considered, and the positions of the first exact matches are recorded. Then unique partial matches are considered, and if found recorded. (A partial match occurs if the whole of the element of `x` matches the beginning of the element of `table`.) Finally, all remaining elements of `x` are regarded as unmatched.
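To make those matching rules concrete, a small sketch (the results follow from the rules just described):

```
pmatch("mean", c("mean", "means")) # 1 : exact matches are found first
pmatch("mea", c("mean", "median")) # 1 : a unique partial match
pmatch("me", c("mean", "median")) # NA : ambiguous partial match
```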
In addition, an empty string can match nothing, not even an exact match to an empty string. This is the appropriate behaviour for partial matching of character indices, for example. If `duplicates.ok` is `FALSE`, values of `table` once matched are excluded from the search for subsequent matches. This behaviour is equivalent to the **R** algorithm for argument matching, except for the consideration of empty strings (which in argument matching are matched after exact and partial matching to any remaining arguments). `<charmatch>` is similar to `pmatch` with `duplicates.ok` true, the differences being that it differentiates between no match and an ambiguous partial match, it does match empty strings, and it does not allow multiple exact matches. `NA` values are treated as if they were the string constant `"NA"`. ### Value An integer vector (possibly including `NA` if `nomatch = NA`) of the same length as `x`, giving the indices of the elements in `table` which matched, or `nomatch`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. ### See Also `<match>`, `<charmatch>` and `<match.arg>`, `<match.fun>`, `<match.call>`, for function argument matching etc., `[startsWith](startswith)` for particular checking of initial matches; `<grep>` etc for more general (regexp) matching of strings. ### Examples ``` pmatch("", "") # returns NA pmatch("m", c("mean", "median", "mode")) # returns NA pmatch("med", c("mean", "median", "mode")) # returns 2 pmatch(c("", "ab", "ab"), c("abc", "ab"), duplicates.ok = FALSE) pmatch(c("", "ab", "ab"), c("abc", "ab"), duplicates.ok = TRUE) ## compare charmatch(c("", "ab", "ab"), c("abc", "ab")) ``` r None `shQuote` Quote Strings for Use in OS Shells --------------------------------------------- ### Description Quote a string to be passed to an operating system shell. ### Usage ``` shQuote(string, type = c("sh", "csh", "cmd", "cmd2")) ``` ### Arguments | | | | --- | --- | | `string` | a character vector, usually of length one. | | `type` | character: the type of shell quoting. Partial matching is supported. `"cmd"` and `"cmd2"` refer to the Windows shell. `"cmd"` is the default under Windows. | ### Details The default type of quoting supported under Unix-alikes is that for the Bourne shell `sh`. If the string does not contain single quotes, we can just surround it with single quotes. Otherwise, the string is surrounded in double quotes, which suppresses all special meanings of metacharacters except dollar, backquote and backslash, so these (and of course double quote) are preceded by backslash. This type of quoting is also appropriate for `bash`, `ksh` and `zsh`. The other type of quoting is for the C-shell (`csh` and `tcsh`). Once again, if the string does not contain single quotes, we can just surround it with single quotes. If it does contain single quotes, we can use double quotes provided it does not contain dollar or backquote (and we need to escape backslash, exclamation mark and double quote). As a last resort, we need to split the string into pieces not containing single quotes and surround each with single quotes, and the single quotes with double quotes. In Windows, command line interpretation is done by the application as well as the shell. It may depend on the compiler used: Microsoft's rules for the C run-time are given at <https://docs.microsoft.com/en-us/previous-versions/ms880421(v=msdn.10)>. 
It may depend on the whim of the programmer of the application: check its documentation. The `type = "cmd"` quoting surrounds the string by double quotes and escapes internal double quotes by a backslash. As Windows path names cannot contain double quotes, this makes `shQuote` safe for use with many applications when used with `<system>` or `<system2>`. The Windows `cmd.exe` shell (used by default with `[shell](system)`) uses `type = "cmd2"` quoting: special characters are prefixed with `"^"`. In some cases, two types of quoting should be used: first for the application, and then `type = "cmd2"` for `cmd.exe`. See the examples below. ### References Loukides, M. *et al* (2002) *Unix Power Tools* Third Edition. O'Reilly. Section 27.12. Discussion in [PR#16636](https://bugs.R-project.org/bugzilla3/show_bug.cgi?id=16636). ### See Also [Quotes](quotes) for quoting **R** code. `[sQuote](squote)` for quoting English text. ### Examples ``` test <- "abc$def`gh`i\\j" cat(shQuote(test), "\n") ## Not run: system(paste("echo", shQuote(test))) test <- "don't do it!" cat(shQuote(test), "\n") tryit <- paste("use the", sQuote("-c"), "switch\nlike this") cat(shQuote(tryit), "\n") ## Not run: system(paste("echo", shQuote(tryit))) cat(shQuote(tryit, type = "csh"), "\n") ## Windows-only example, assuming cmd.exe: perlcmd <- 'print "Hello World\\n";' ## Not run: shell(shQuote(paste("perl -e", shQuote(perlcmd, type = "cmd")), type = "cmd2")) ## End(Not run) ``` r None `is.recursive` Is an Object Atomic or Recursive? ------------------------------------------------- ### Description `is.atomic` returns `TRUE` if `x` is of an atomic type (or `NULL`) and `FALSE` otherwise. `is.recursive` returns `TRUE` if `x` has a recursive (list-like) structure and `FALSE` otherwise. ### Usage ``` is.atomic(x) is.recursive(x) ``` ### Arguments | | | | --- | --- | | `x` | object to be tested. | ### Details `is.atomic` is true for the [atomic](vector) types (`"logical"`, `"integer"`, `"numeric"`, `"complex"`, `"character"` and `"raw"`) and `NULL`. Most types of objects are regarded as recursive. Exceptions are the atomic types, `NULL`, symbols (as given by `[as.name](name)`), `S4` objects with slots, external pointers, and—rarely visible from **R**—weak references and byte code, see `<typeof>`. It is common to call the atomic types ‘atomic vectors’, but note that `[is.vector](vector)` imposes further restrictions: an object can be atomic but not a vector (in that sense). These are <primitive> functions. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[is.list](list)`, `<is.language>`, etc, and the `demo("is.things")`. ### Examples ``` require(stats) is.a.r <- function(x) c(is.atomic(x), is.recursive(x)) is.a.r(c(a = 1, b = 3)) # TRUE FALSE is.a.r(list()) # FALSE TRUE - a list is a list is.a.r(list(2)) # FALSE TRUE is.a.r(lm) # FALSE TRUE is.a.r(y ~ x) # FALSE TRUE is.a.r(expression(x+1)) # FALSE TRUE is.a.r(quote(exp)) # FALSE FALSE ```
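To illustrate the remark above that an object can be atomic without being a vector in the `[is.vector](vector)` sense, a small sketch (not part of the original examples):

```
x <- 1:3
attr(x, "unit") <- "cm" # any attribute other than names
is.atomic(x) # TRUE : still of an atomic type
is.vector(x) # FALSE: is.vector() rejects extra attributes
```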
r None `scale` Scaling and Centering of Matrix-like Objects ----------------------------------------------------- ### Description `scale` is a generic function whose default method centers and/or scales the columns of a numeric matrix. ### Usage ``` scale(x, center = TRUE, scale = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | a numeric matrix(-like object). | | `center` | either a logical value or numeric-alike vector of length equal to the number of columns of `x`, where ‘numeric-alike’ means that `[as.numeric](numeric)(.)` will be applied successfully if `[is.numeric](numeric)(.)` is not true. | | `scale` | either a logical value or a numeric-alike vector of length equal to the number of columns of `x`. | ### Details The value of `center` determines how column centering is performed. If `center` is a numeric-alike vector with length equal to the number of columns of `x`, then each column of `x` has the corresponding value from `center` subtracted from it. If `center` is `TRUE` then centering is done by subtracting the column means (omitting `NA`s) of `x` from their corresponding columns, and if `center` is `FALSE`, no centering is done. The value of `scale` determines how column scaling is performed (after centering). If `scale` is a numeric-alike vector with length equal to the number of columns of `x`, then each column of `x` is divided by the corresponding value from `scale`. If `scale` is `TRUE` then scaling is done by dividing the (centered) columns of `x` by their standard deviations if `center` is `TRUE`, and the root mean square otherwise. If `scale` is `FALSE`, no scaling is done. The root-mean-square for a (possibly centered) column is defined as *sqrt(sum(x^2)/(n-1))*, where *x* is a vector of the non-missing values and *n* is the number of non-missing values. In the case `center = TRUE`, this is the same as the standard deviation, but in general it is not. (To scale by the standard deviations without centering, use `scale(x, center = FALSE, scale = apply(x, 2, sd, na.rm = TRUE))`.) ### Value For `scale.default`, the centered, scaled matrix. The numeric centering and scalings used (if any) are returned as attributes `"scaled:center"` and `"scaled:scale"`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<sweep>` which allows centering (and scaling) with arbitrary statistics. For working with the scale of a plot, see `[par](../../graphics/html/par)`. ### Examples ``` require(stats) x <- matrix(1:10, ncol = 2) (centered.x <- scale(x, scale = FALSE)) cov(centered.scaled.x <- scale(x)) # all 1 ``` r None `Foreign` Foreign Function Interface ------------------------------------- ### Description Functions to make calls to compiled code that has been loaded into **R**. ### Usage ``` .C(.NAME, ..., NAOK = FALSE, DUP = TRUE, PACKAGE, ENCODING) .Fortran(.NAME, ..., NAOK = FALSE, DUP = TRUE, PACKAGE, ENCODING) ``` ### Arguments | | | | --- | --- | | `.NAME` | a character string giving the name of a C function or Fortran subroutine, or an object of class `"[NativeSymbolInfo](getnativesymbolinfo)"`, `"[RegisteredNativeSymbol](getnativesymbolinfo)"` or `"[NativeSymbol](getnativesymbolinfo)"` referring to such a name. | | `...` | arguments to be passed to the foreign function. Up to 65. | | `NAOK` | if `TRUE` then any `[NA](na)` or `[NaN](is.finite)` or `[Inf](is.finite)` values in the arguments are passed on to the foreign function.
If `FALSE`, the presence of `NA` or `NaN` or `Inf` values is regarded as an error. | | `PACKAGE` | if supplied, confine the search for a character string `.NAME` to the DLL given by this argument (plus the conventional extension, ‘.so’, ‘.dll’, ...). This is intended to add safety for packages, which can ensure by using this argument that no other package can override their external symbols, and also speeds up the search (see ‘Note’). | | `DUP, ENCODING` | For back-compatibility, accepted but ignored. | ### Details These functions can be used to make calls to compiled C and Fortran code. Later interfaces are `[.Call](callexternal)` and `[.External](callexternal)` which are more flexible and have better performance. These functions are <primitive>, and `.NAME` is always matched to the first argument supplied (which should not be named). The other named arguments follow `...` and so cannot be abbreviated. For clarity, avoid using names in the arguments passed to `...` that match or partially match `.NAME`. ### Value A list similar to the `...` list of arguments passed in (including any names given to the arguments), but reflecting any changes made by the C or Fortran code. ### Argument types The mapping of the types of **R** arguments to C or Fortran arguments is | | | | | --- | --- | --- | | **R** | C | Fortran | | integer | int \* | integer | | numeric | double \* | double precision | | -- or -- | float \* | real | | complex | Rcomplex \* | double complex | | logical | int \* | integer | | character | char \*\* | [see below] | | raw | unsigned char \* | not allowed | | list | SEXP \* | not allowed | | other | SEXP | not allowed | | | *Note:* The C types corresponding to `integer` and `logical` are `int`, not `long` as in S. This difference matters on most 64-bit platforms, where `int` is 32-bit and `long` is 64-bit (but not on 64-bit Windows). *Note:* The Fortran type corresponding to `logical` is `integer`, not `logical`: the difference matters on some Fortran compilers. Numeric vectors in **R** will be passed as type `double *` to C (and as `double precision` to Fortran) unless the argument has attribute `Csingle` set to `TRUE` (use `[as.single](double)` or `[single](double)`). This mechanism is only intended to be used to facilitate the interfacing of existing C and Fortran code. The C type `Rcomplex` is defined in ‘Complex.h’ as a `typedef struct {double r; double i;}`. It may or may not be equivalent to the C99 `double complex` type, depending on the compiler used. Logical values are sent as `0` (`FALSE`), `1` (`TRUE`) or `INT_MIN = -2147483648` (`NA`, but only if `NAOK = TRUE`), and the compiled code should return one of these three values: however non-zero values other than `INT_MIN` are mapped to `TRUE`. Missing (`NA`) string values are passed to `.C` as the string "NA". As the C `char` type can represent all possible bit patterns there appears to be no way to distinguish missing strings from the string `"NA"`. If this distinction is important use `[.Call](callexternal)`. Using a character string with `.Fortran` is deprecated and will give a warning. It passes the first (only) character string of a character vector as a C character array to Fortran: that may be usable as `character*255` if its true length is passed separately. Only up to 255 characters of the string are passed back. (How well this works, and even if it works at all, depends on the C and Fortran compilers and the platform.)
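As a hypothetical sketch of how the mapping above is used from the **R** side (the routine name `dscale`, its behaviour and the package name are placeholders, not part of this page), assume a C routine `void dscale(int *n, double *x, double *s)`, compiled and loaded, that multiplies each element of `x` by `s`:

```
## Not run: 
## the returned list reflects changes the compiled code made to its arguments
res <- .C("dscale", n = 5L, x = as.double(1:5), s = as.double(2),
          PACKAGE = "mypkg") # placeholder package name
res$x # the scaled values written back by the C code
## End(Not run)
```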
Lists, functions or other **R** objects can (for historical reasons) be passed to `.C`, but the `[.Call](callexternal)` interface is much preferred. All inputs apart from atomic vectors should be regarded as read-only, and all apart from vectors (including lists), functions and environments are now deprecated. ### Fortran symbol names All Fortran compilers known to be usable to compile **R** map symbol names to lower case, and so does `.Fortran`. Symbol names containing underscores are not valid Fortran 77 (although they are valid in Fortran 9x). Many Fortran 77 compilers will allow them but may translate them in a different way to names not containing underscores. Such names will often work with `.Fortran` (since how they are translated is detected when **R** is built and the information used by `.Fortran`), but portable code should not use Fortran names containing underscores. Use `.Fortran` with care for compiled Fortran 9x code: it may not work if the Fortran 9x compiler used differs from the Fortran compiler used when configuring **R**, especially if the subroutine name is not lower-case or includes an underscore. It is possible to use `.C` and do any necessary symbol-name translation yourself. ### Copying of arguments Character vectors are copied before calling the compiled code and to collect the results. For other atomic vectors the argument is copied before calling the compiled code if it is otherwise used in the calling code. Non-atomic-vector objects are read-only to the C code and are never copied. This behaviour can be changed by setting `<options>(CBoundsCheck = TRUE)`. In that case raw, logical, integer, double and complex vector arguments are copied both before and after calling the compiled code. The first copy made is extended at each end by guard bytes, and on return it is checked that these are unaltered. For `.C`, each element of a character vector uses guard bytes. ### Note If one of these functions is to be used frequently, do specify `PACKAGE` (to confine the search to a single DLL) or pass `.NAME` as one of the native symbol objects. Searching for symbols can take a long time, especially when many namespaces are loaded. You may see `PACKAGE = "base"` for symbols linked into **R**. Do not use this in your own code: such symbols are not part of the API and may be changed without warning. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[dyn.load](dynload)`, `[.Call](callexternal)`. The ‘Writing R Extensions’ manual. r None `Sys.time` Get Current Date and Time ------------------------------------- ### Description `Sys.time` and `Sys.Date` return the system's idea of the current date with and without time. ### Usage ``` Sys.time() Sys.Date() ``` ### Details `Sys.time` returns an absolute date-time value which can be converted to various time zones and may return different days. `Sys.Date` returns the current day in the current [time zone](timezones). ### Value `Sys.time` returns an object of class `"POSIXct"` (see [DateTimeClasses](datetimeclasses)). On almost all systems it will have sub-second accuracy, possibly microseconds or better. On Windows it increments in clock ticks (usually 1/60 of a second) reported to millisecond accuracy. `Sys.Date` returns an object of class `"Date"` (see [Date](dates)). ### Note `Sys.time` may return fractional seconds, but they are ignored by the default conversions (e.g., printing) for class `"POSIXct"`.
See the examples and `[format.POSIXct](strptime)` for ways to reveal them. ### See Also `<date>` for the system time in a fixed-format character string. `[Sys.timezone](timezones)`. `<system.time>` for measuring elapsed/CPU time of expressions. ### Examples ``` Sys.time() ## print with possibly greater accuracy: op <- options(digits.secs = 6) Sys.time() options(op) ## locale-specific version of date() format(Sys.time(), "%a %b %d %X %Y") Sys.Date() ``` r None `proportions` Express Table Entries as Fraction of Marginal Table ------------------------------------------------------------------ ### Description Returns conditional proportions given `margins`, i.e. entries of `x`, divided by the appropriate marginal sums. ### Usage ``` proportions(x, margin = NULL) prop.table(x, margin = NULL) ``` ### Arguments | | | | --- | --- | | `x` | table | | `margin` | a vector giving the margins to split by. E.g., for a matrix `1` indicates rows, `2` indicates columns, `c(1, 2)` indicates rows and columns. When `x` has named dimnames, it can be a character vector selecting dimension names. | ### Value Table like `x` expressed relative to `margin`. ### Note `prop.table` is an earlier name, retained for back-compatibility. ### Author(s) Peter Dalgaard ### See Also `[marginSums](marginsums)`. `<apply>`, `<sweep>` are a more general mechanism for sweeping out marginal statistics. ### Examples ``` m <- matrix(1:4, 2) m proportions(m, 1) DF <- as.data.frame(UCBAdmissions) tbl <- xtabs(Freq ~ Gender + Admit, DF) proportions(tbl, "Gender") ``` r None `Sys.setenv` Set or Unset Environment Variables ------------------------------------------------ ### Description `Sys.setenv` sets environment variables (for other processes called from within **R** or future calls to `[Sys.getenv](sys.getenv)` from this **R** process). `Sys.unsetenv` removes environment variables. ### Usage ``` Sys.setenv(...) Sys.unsetenv(x) ``` ### Arguments | | | | --- | --- | | `...` | named arguments with values coercible to a character string. | | `x` | a character vector, or an object coercible to character. | ### Details Non-standard **R** names must be quoted in `Sys.setenv`: see the examples. Most platforms (and POSIX) do not allow names containing `"="`. Windows does, but the facilities provided by **R** may not handle these correctly so they should be avoided. Most platforms allow setting an environment variable to `""`, but Windows does not and there `Sys.setenv(FOO = "")` unsets FOO. There may be system-specific limits on the maximum length of the values of individual environment variables or of names+values of all environment variables. Recent versions of Windows have a maximum length of 32,767 characters for an environment variable; however `cmd.exe` has a limit of 8192 characters for a command line, hence `set` can only set 8188. ### Value A logical vector, with elements being true if (un)setting the corresponding variable succeeded. (For `Sys.unsetenv` this includes attempting to remove a non-existent variable.) ### Note On Unix-alikes, if `Sys.unsetenv` is not supported, it will at least try to set the value of the environment variable to `""`, with a warning. ### See Also `[Sys.getenv](sys.getenv)`, [Startup](startup) for ways to set environment variables for the **R** session. `[setwd](getwd)` for the working directory. The help for ‘[environment variables](envvar)’ lists many of the environment variables used by **R**.
### Examples ``` print(Sys.setenv(R_TEST = "testit", "A+C" = 123)) # `A+C` could also be used Sys.getenv("R_TEST") Sys.unsetenv("R_TEST") # on Unix-alike may warn and not succeed Sys.getenv("R_TEST", unset = NA) ``` r None `Constants` Built-in Constants ------------------------------- ### Description Constants built into **R**. ### Usage ``` LETTERS letters month.abb month.name pi ``` ### Details **R** has a small number of built-in constants. The following constants are available: * `LETTERS`: the 26 upper-case letters of the Roman alphabet; * `letters`: the 26 lower-case letters of the Roman alphabet; * `month.abb`: the three-letter abbreviations for the English month names; * `month.name`: the English names for the months of the year; * `pi`: the ratio of the circumference of a circle to its diameter. These are implemented as variables in the base namespace taking appropriate values. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[data](../../utils/html/data)`, `[DateTimeClasses](datetimeclasses)`. `[Quotes](quotes)` for the parsing of character constants, `[NumericConstants](numericconstants)` for numeric constants. ### Examples ``` ## John Machin (ca 1706) computed pi to over 100 decimal places ## using the Taylor series expansion of the second term of pi - 4*(4*atan(1/5) - atan(1/239)) ## months in English month.name ## months in your current locale format(ISOdate(2000, 1:12, 1), "%B") format(ISOdate(2000, 1:12, 1), "%b") ``` r None `Sys.setFileTime` Set File Time -------------------------------- ### Description Uses system calls to set the times on a file or directory. ### Usage ``` Sys.setFileTime(path, time) ``` ### Arguments | | | | --- | --- | | `path` | A character vector containing file or directory paths. | | `time` | A date-time of class `"[POSIXct](datetimeclasses)"` or an object which can be coerced to one. Fractions of a second may be ignored. Recycled along `path`. | ### Details This attempts to set the file time to the value specified. On a Unix-alike it uses the system call `utimensat` if that is available, otherwise `utimes` or `utime`. On a POSIX file system it sets both the last-access and modification times. Fractional seconds will be set as from **R** 3.4.0 on OSes with the requisite system calls and suitable filesystems. On Windows it uses the system call `SetFileTime` to set the ‘last write time’. Some Windows file systems only record the time at a resolution of two seconds. `Sys.setFileTime` has been vectorized in **R** 3.6.0. Earlier versions of **R** required `path` and `time` to be vectors of length one. ### Value A logical vector indicating if the operation succeeded for each of the files and directories attempted, returned invisibly. r None `t` Matrix Transpose --------------------- ### Description Given a matrix or `<data.frame>` `x`, `t` returns the transpose of `x`. ### Usage ``` t(x) ``` ### Arguments | | | | --- | --- | | `x` | a matrix or data frame, typically. | ### Details This is a generic function for which methods can be written. The description here applies to the default and `"data.frame"` methods. A data frame is first coerced to a matrix: see `[as.matrix](matrix)`. When `x` is a vector, it is treated as a column, i.e., the result is a 1-row matrix. ### Value A matrix, with `dim` and `dimnames` constructed appropriately from those of `x`, and other attributes except names copied across.
### Note The *conjugate* transpose of a complex matrix *A*, denoted *A^H* or *A^\**, is computed as `[Conj](complex)(t(A))`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<aperm>` for permuting the dimensions of arrays. ### Examples ``` a <- matrix(1:30, 5, 6) ta <- t(a) ##-- i.e., a[i, j] == ta[j, i] for all i,j : for(j in seq(ncol(a))) if(! all(a[, j] == ta[j, ])) stop("wrong transpose") ``` r None `is.finite` Finite, Infinite and NaN Numbers --------------------------------------------- ### Description `is.finite` and `is.infinite` return a vector of the same length as `x`, indicating which elements are finite (not infinite and not missing) or infinite. `Inf` and `-Inf` are positive and negative infinity whereas `NaN` means ‘Not a Number’. (These apply to numeric values and real and imaginary parts of complex values but not to values of integer vectors.) `Inf` and `NaN` are <reserved> words in the **R** language. ### Usage ``` is.finite(x) is.infinite(x) is.nan(x) Inf NaN ``` ### Arguments | | | | --- | --- | | `x` | **R** object to be tested: the default methods handle atomic vectors. | ### Details `is.finite` returns a vector of the same length as `x` the jth element of which is `TRUE` if `x[j]` is finite (i.e., it is not one of the values `NA`, `NaN`, `Inf` or `-Inf`) and `FALSE` otherwise. Complex numbers are finite if both the real and imaginary parts are. `is.infinite` returns a vector of the same length as `x` the jth element of which is `TRUE` if `x[j]` is infinite (i.e., equal to one of `Inf` or `-Inf`) and `FALSE` otherwise. This will be false unless `x` is numeric or complex. Complex numbers are infinite if either the real or the imaginary part is. `is.nan` tests if a numeric value is `NaN`. Do not test equality to `NaN`, or even use `<identical>`, since systems typically have many different NaN values. One of these is used for the numeric missing value `NA`, and `is.nan` is false for that value. A complex number is regarded as `NaN` if either the real or imaginary part is `NaN` but not `NA`. All elements of logical, integer and raw vectors are considered not to be NaN. All three functions accept `NULL` as input and return a length zero result. The default methods accept character and raw vectors, and return `FALSE` for all entries. Prior to **R** version 2.14.0 they accepted all input, returning `FALSE` for most non-numeric values; cases which are not atomic vectors are now signalled as errors. All three functions are generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). ### Value A logical vector of the same length as `x`: `dim`, `dimnames` and `names` attributes are preserved. ### Note In **R**, basically all mathematical functions (including basic `[Arithmetic](arithmetic)`), are supposed to work properly with `+/- Inf` and `NaN` as input or output. The basic rule should be that calls and relations with `Inf`s really are statements with a proper mathematical *limit*. Computations involving `NaN` will return `NaN` or perhaps `[NA](na)`: which of those two is not guaranteed and may depend on the **R** platform (since compilers may re-order computations). ### References The IEC 60559 standard, also known as the ANSI/IEEE 754 Floating-Point Standard. <https://en.wikipedia.org/wiki/NaN>. D. Goldberg (1991). What Every Computer Scientist Should Know about Floating-Point Arithmetic. *ACM Computing Surveys*, **23**(1), 5–48. 
doi: [10.1145/103162.103163](https://doi.org/10.1145/103162.103163). Also available at <https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html>. The C99 function `isfinite` is used for `is.finite`. ### See Also `[NA](na)`, ‘*Not Available*’ which is not a number as well, however usually used for missing values and applies to many modes, not just numeric and complex. `[Arithmetic](arithmetic)`, `<double>`. ### Examples ``` pi / 0 ## = Inf a non-zero number divided by zero creates infinity 0 / 0 ## = NaN 1/0 + 1/0 # Inf 1/0 - 1/0 # NaN stopifnot( 1/0 == Inf, 1/Inf == 0 ) sin(Inf) cos(Inf) tan(Inf) ```
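A short supplementary sketch of the `NA`/`NaN` distinction drawn above:

```
is.nan(NA) # FALSE: NA is a missing value, not ‘not a number’
is.na(NaN) # TRUE : NaN is also treated as missing by is.na()
is.finite(c(1, NA, NaN, Inf, -Inf)) # TRUE FALSE FALSE FALSE FALSE
is.infinite(c(1, NA, NaN, -Inf)) # FALSE FALSE FALSE TRUE
```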
r None `readRDS` Serialization Interface for Single Objects ----------------------------------------------------- ### Description Functions to write a single **R** object to a file, and to restore it. ### Usage ``` saveRDS(object, file = "", ascii = FALSE, version = NULL, compress = TRUE, refhook = NULL) readRDS(file, refhook = NULL) infoRDS(file) ``` ### Arguments | | | | --- | --- | | `object` | **R** object to serialize. | | `file` | a [connection](connections) or the name of the file where the **R** object is saved to or read from. | | `ascii` | a logical. If `TRUE` or `NA`, an ASCII representation is written; otherwise (default), a binary one is used. See the comments in the help for `<save>`. | | `version` | the workspace format version to use. `NULL` specifies the current default version (3). The only other supported value is 2, the default from **R** 1.4.0 to **R** 3.5.0. | | `compress` | a logical specifying whether saving to a named file is to use `"gzip"` compression, or one of `"gzip"`, `"bzip2"` or `"xz"` to indicate the type of compression to be used. Ignored if `file` is a connection. | | `refhook` | a hook function for handling reference objects. | ### Details `saveRDS` and `readRDS` provide the means to save a single **R** object to a connection (typically a file) and to restore the object, quite possibly under a different name. This differs from `<save>` and `<load>`, which save and restore one or more named objects into an environment. They are widely used by **R** itself, for example to store metadata for a package and to store the `[help.search](../../utils/html/help.search)` databases: the `".rds"` file extension is most often used. Functions `<serialize>` and `[unserialize](serialize)` provide a slightly lower-level interface to serialization: objects serialized to a connection by `serialize` can be read back by `readRDS` and conversely. Function `infoRDS` retrieves meta-data about serialization produced by `saveRDS` or `serialize`. `infoRDS` cannot be used to detect whether a file is a serialization or whether it is valid. All of these interfaces use the same serialization format, but `save` writes a single line header (typically `"RDX3\n"`) before the serialization of a single object (a pairlist of all the objects to be saved). If `file` is a file name, it is opened by `[gzfile](connections)` except for `save(compress = FALSE)` which uses `[file](connections)`. Only in that exceptional case are marked encodings of `file` which cannot be translated to the native encoding handled on Windows. Compression is handled by the connection opened when `file` is a file name, so is only possible when `file` is a connection if handled by the connection. So e.g. `[url](connections)` connections will need to be wrapped in a call to `<gzcon>`. If a connection is supplied it will be opened (in binary mode) for the duration of the function if not already open: if it is already open it must be in binary mode for `saveRDS(ascii = FALSE)` or to read non-ASCII saves. ### Value For `readRDS`, an **R** object. For `saveRDS`, `NULL` invisibly. For `infoRDS`, an **R** list with elements `version` (version number, currently 2 or 3), `writer_version` (version of **R** that produced the serialization), `min_reader_version` (minimum version of **R** that can read the serialization), `format` (data representation) and `native_encoding` (native encoding of the session that produced the serialization, available since version 3).
The data representation is given as `"xdr"` for big-endian binary representation, `"ascii"` for ASCII representation (produced via `ascii = TRUE` or `ascii = NA`) or `"binary"` (binary representation with native ‘endianness’ which can be produced by `<serialize>`). ### Warning Files produced by `saveRDS` (or `serialize` to a file connection) are not suitable as an interchange format between machines, for example to download from a website. The files produced by `<save>` have a header identifying the file type and so are better protected against erroneous use. ### See Also `<serialize>`, `<save>` and `<load>`. The ‘R Internals’ manual for details of the format used. ### Examples ``` fil <- tempfile("women", fileext = ".rds") ## save a single object to file saveRDS(women, fil) ## restore it under a different name women2 <- readRDS(fil) identical(women, women2) ## or examine the object via a connection, which will be opened as needed. con <- gzfile(fil) readRDS(con) close(con) ## Less convenient ways to restore the object ## which demonstrate compatibility with unserialize() con <- gzfile(fil, "rb") identical(unserialize(con), women) close(con) con <- gzfile(fil, "rb") wm <- readBin(con, "raw", n = 1e4) # size is a guess close(con) identical(unserialize(wm), women) ## Format compatibility with serialize(): fil2 <- tempfile("women") con <- file(fil2, "w") serialize(women, con) # ASCII, uncompressed close(con) identical(women, readRDS(fil2)) fil3 <- tempfile("women") con <- bzfile(fil3, "w") serialize(women, con) # binary, bzip2-compressed close(con) identical(women, readRDS(fil3)) unlink(c(fil, fil2, fil3)) ``` r None `ns-reflect` Namespace Reflection Support ------------------------------------------ ### Description Internal functions to support reflection on namespace objects. ### Usage ``` getExportedValue(ns, name) getNamespace(name) getNamespaceExports(ns) getNamespaceImports(ns) getNamespaceName(ns) getNamespaceUsers(ns) getNamespaceVersion(ns) ``` ### Arguments | | | | --- | --- | | `ns` | string or namespace object. | | `name` | string or name. | ### Details `getExportedValue` returns the value of the exported variable `name` in namespace `ns`. `getNamespace` returns the environment representing the name space `name`. The namespace is loaded if necessary. `getNamespaceExports` returns a character vector of the names exported by `ns`. `getNamespaceImports` returns a representation of the imports used by namespace `ns`. This representation is experimental and subject to change. `getNamespaceName` and `getNamespaceVersion` return the name and version of the namespace `ns`. `getNamespaceUsers` returns a character vector of the names of the namespaces that import namespace `ns`. ### Author(s) Luke Tierney ### See Also `[loadNamespace](ns-load)` for more about namespaces. r None `setTimeLimit` Set CPU and/or Elapsed Time Limits -------------------------------------------------- ### Description Functions to set CPU and/or elapsed time limits for top-level computations or the current session. ### Usage ``` setTimeLimit(cpu = Inf, elapsed = Inf, transient = FALSE) setSessionTimeLimit(cpu = Inf, elapsed = Inf) ``` ### Arguments | | | | --- | --- | | `cpu, elapsed` | double (of length one). Set a limit on the total or elapsed cpu time in seconds, respectively. | | `transient` | logical. If `TRUE`, the limits apply only to the rest of the current computation. 
| ### Details `setTimeLimit` sets limits which apply to each top-level computation, that is a command line (including any continuation lines) entered at the console or from a file. If it is called from within a computation the limits apply to the rest of the computation and (unless `transient = TRUE`) to subsequent top-level computations. `setSessionTimeLimit` sets limits for the rest of the session. Once a session limit is reached it is reset to `Inf`. Setting any limit has a small overhead – well under 1% on the systems measured. Time limits are checked whenever a user interrupt could occur. This will happen frequently in **R** code and during `[Sys.sleep](sys.sleep)`, but only at points in compiled C and Fortran code identified by the code author. ‘Total cpu time’ includes that used by child processes where the latter is reported. r None `mapply` Apply a Function to Multiple List or Vector Arguments --------------------------------------------------------------- ### Description `mapply` is a multivariate version of `[sapply](lapply)`. `mapply` applies `FUN` to the first elements of each ... argument, the second elements, the third elements, and so on. Arguments are recycled if necessary. ### Usage ``` mapply(FUN, ..., MoreArgs = NULL, SIMPLIFY = TRUE, USE.NAMES = TRUE) ``` ### Arguments | | | | --- | --- | | `FUN` | function to apply, found via `<match.fun>`. | | `...` | arguments to vectorize over (vectors or lists of strictly positive length, or all of zero length). See also ‘Details’. | | `MoreArgs` | a list of other arguments to `FUN`. | | `SIMPLIFY` | logical or character string; attempt to reduce the result to a vector, matrix or higher dimensional array; see the `simplify` argument of `[sapply](lapply)`. | | `USE.NAMES` | logical; use the names of the first ... argument, or if that is an unnamed character vector, use that vector as the names. | ### Details `mapply` calls `FUN` for the values of `...` (re-cycled to the length of the longest, unless any have length zero), followed by the arguments given in `MoreArgs`. The arguments in the call will be named if `...` or `MoreArgs` are named. Arguments with classes in `...` will be accepted, and their subsetting and `length` methods will be used. ### Value A list, or for `SIMPLIFY = TRUE`, a vector, array or list. ### See Also `[sapply](lapply)`, after which `mapply()` is modelled. `<outer>`, which applies a vectorized function to all combinations of two arguments. ### Examples ``` mapply(rep, 1:4, 4:1) mapply(rep, times = 1:4, x = 4:1) mapply(rep, times = 1:4, MoreArgs = list(x = 42)) mapply(function(x, y) seq_len(x) + y, c(a = 1, b = 2, c = 3), # names from first c(A = 10, B = 0, C = -10)) word <- function(C, k) paste(rep.int(C, k), collapse = "") utils::str(mapply(word, LETTERS[1:6], 6:1, SIMPLIFY = FALSE)) ``` r None `is.object` Is an Object ‘internally classed’? ----------------------------------------------- ### Description A function rather for internal use. It returns `TRUE` if the object `x` has the **R** internal `OBJECT` bit set, and `FALSE` otherwise. The `OBJECT` bit is set when a `"class"` attribute is added and removed when that attribute is removed, so this is a very efficient way to check if an object has a class attribute. (S4 objects always should.) ### Usage ``` is.object(x) ``` ### Arguments | | | | --- | --- | | `x` | object to be tested. | ### Note This is a <primitive> function. ### See Also `<class>`, and `[methods](../../utils/html/methods)`. `[isS4](iss4)`. 
### Examples ``` is.object(1) # FALSE is.object(as.factor(1:3)) # TRUE ``` r None `row.names` Get and Set Row Names for Data Frames -------------------------------------------------- ### Description All data frames have row names, a character vector of length the number of rows with no duplicates nor missing values. There are generic functions for getting and setting row names, with default methods for arrays. The description here is for the `data.frame` method. ``.rowNamesDF<-`` is a (non-generic replacement) function to set row names for data frames, with extra argument `make.names`. This function only exists as a workaround as we cannot easily change the `row.names<-` generic without breaking legacy code in existing packages. ### Usage ``` row.names(x) row.names(x) <- value .rowNamesDF(x, make.names=FALSE) <- value ``` ### Arguments | | | | --- | --- | | `x` | object of class `"data.frame"`, or any other class for which a method has been defined. | | `make.names` | `<logical>`, i.e., one of `FALSE, NA, TRUE`, indicating what should happen if the specified row names, i.e., `value`, are invalid, e.g., duplicated or `NA`. The default (is back compatible), `FALSE`, will signal an error, where `NA` will use “automatic” row names and `TRUE` will call `<make.names>(value, unique=TRUE)` for constructing valid names. | | `value` | an object to be coerced to character unless an integer vector. It should have (after coercion) the same length as the number of rows of `x` with no duplicated nor missing values. `NULL` is also allowed: see ‘Details’. | ### Details A data frame has (by definition) a vector of *row names* which has length the number of rows in the data frame, and contains neither missing nor duplicated values. Where a row names sequence has been added by the software to meet this requirement, they are regarded as ‘automatic’. Row names are currently allowed to be integer or character, but for backwards compatibility (with **R** <= 2.4.0) `row.names` will always return a character vector. (Use `attr(x, "row.names")` if you need to retrieve an integer-valued set of row names.) Using `NULL` for the value resets the row names to `seq_len(nrow(x))`, regarded as ‘automatic’. ### Value `row.names` returns a character vector. `row.names<-` returns a data frame with the row names changed. ### Note `row.names` is similar to `[rownames](colnames)` for arrays, and it has a method that calls `rownames` for an array argument. Row names of the form `1:n` for `n > 2` are stored internally in a compact form, which might be seen from C code or by deparsing but never via `row.names` or `<attr>(x, "row.names")`. Additionally, some names of this sort are marked as ‘automatic’ and handled differently by `[as.matrix](matrix)` and `<data.matrix>` (and potentially other functions). (All zero-row data frames are regarded as having automatic row names.) ### References Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<data.frame>`, `[rownames](colnames)`, `<names>`. `.row_names_info` for the internal representations. ### Examples ``` ## To illustrate the note: df <- data.frame(x = c(TRUE, FALSE, NA, NA), y = c(12, 34, 56, 78)) row.names(df) <- 1 : 4 attr(df, "row.names") #> 1:4 deparse(df) # or dput(df) ##--> c(NA, 4L) : Compact storage, *not* regarded as automatic. row.names(df) <- NULL attr(df, "row.names") #> 1:4 deparse(df) # or dput(df) -- shows ##--> c(NA, -4L) : Compact storage, regarded as automatic.
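## A hypothetical continuation (not part of the original example):
## .rowNamesDF<-() with make.names = TRUE repairs invalid (duplicated
## or NA) row names via make.names(*, unique = TRUE) instead of
## signalling an error:
df2 <- data.frame(x = 1:3)
.rowNamesDF(df2, make.names = TRUE) <- c("a", "a", NA)
row.names(df2) # repaired to valid, unique names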
``` r None `NA` ‘Not Available’ / Missing Values -------------------------------------- ### Description `NA` is a logical constant of length 1 which contains a missing value indicator. `NA` can be coerced to any other vector type except raw. There are also constants `NA_integer_`, `NA_real_`, `NA_complex_` and `NA_character_` of the other atomic vector types which support missing values: all of these are <reserved> words in the **R** language. The generic function `is.na` indicates which elements are missing. The generic function `is.na<-` sets elements to `NA`. The generic function `anyNA` implements `any(is.na(x))` in a possibly faster way (especially for atomic vectors). ### Usage ``` NA is.na(x) anyNA(x, recursive = FALSE) ## S3 method for class 'data.frame' is.na(x) is.na(x) <- value ``` ### Arguments | | | | --- | --- | | `x` | an **R** object to be tested: the default method for `is.na` and `anyNA` handle atomic vectors, lists, pairlists, and `NULL`. | | `recursive` | logical: should `anyNA` be applied recursively to lists and pairlists? | | `value` | a suitable index vector for use with `x`. | ### Details The `NA` of character type is distinct from the string `"NA"`. Programmers who need to specify an explicit missing string should use `NA_character_` (rather than `"NA"`) or set elements to `NA` using `is.na<-`. `is.na` and `anyNA` are generic: you can write methods to handle specific classes of objects, see [InternalMethods](internalmethods). Function `is.na<-` may provide a safer way to set missingness. It behaves differently for factors, for example. Numerical computations using `NA` will normally result in `NA`: a possible exception is where `[NaN](is.finite)` is also involved, in which case either might result (which may depend on the **R** platform). However, this is not guaranteed and future CPUs and/or compilers may behave differently. Dynamic binary translation may also impact this behavior (with valgrind, computations using `NA` may result in `NaN` even when no `NaN` is involved). Logical computations treat `NA` as a missing `TRUE/FALSE` value, and so may return `TRUE` or `FALSE` if the expression does not depend on the `NA` operand. The default method for `anyNA` handles atomic vectors without a class and `NULL`. It calls `any(is.na(x))` on objects with classes and for `recursive = FALSE`, on lists and pairlists. ### Value The default method for `is.na` applied to an atomic vector returns a logical vector of the same length as its argument `x`, containing `TRUE` for those elements marked `NA` or, for numeric or complex vectors, `[NaN](is.finite)`, and `FALSE` otherwise. (A complex value is regarded as `NA` if either its real or imaginary part is `NA` or `[NaN](is.finite)`.) `dim`, `dimnames` and `names` attributes are copied to the result. The default methods also work for lists and pairlists: For `is.na`, elementwise the result is false unless that element is a length-one atomic vector and the single element of that vector is regarded as `NA` or `NaN` (note that any `is.na` method for the class of the element is ignored). `anyNA(recursive = FALSE)` works the same way as `is.na`; `anyNA(recursive = TRUE)` applies `anyNA` (with method dispatch) to each element. The data frame method for `is.na` returns a logical matrix with the same dimensions as the data frame, and with dimnames taken from the row and column names of the data frame. `anyNA(NULL)` is false; `is.na(NULL)` is `logical(0)` (no longer warning since **R** version 3.5.0). ### References Becker, R. 
A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Chambers, J. M. (1998) *Programming with Data. A Guide to the S Language*. Springer. ### See Also `[NaN](is.finite)`, `[is.nan](is.finite)`, etc., and the utility function `[complete.cases](../../stats/html/complete.cases)`. `[na.action](../../stats/html/na.action)`, `[na.omit](../../stats/html/na.fail)`, `[na.fail](../../stats/html/na.fail)` on how methods can be tuned to deal with missing values. ### Examples ``` is.na(c(1, NA)) #> FALSE TRUE is.na(paste(c(1, NA))) #> FALSE FALSE (xx <- c(0:4)) is.na(xx) <- c(2, 4) xx #> 0 NA 2 NA 4 anyNA(xx) # TRUE # Some logical operations do not return NA c(TRUE, FALSE) & NA c(TRUE, FALSE) | NA ## Measure speed difference in a favourable case: ## the difference depends on the platform, on most ca 3x. x <- 1:10000; x[5000] <- NaN # coerces x to be double if(require("microbenchmark")) { # does not work reliably on all platforms print(microbenchmark(any(is.na(x)), anyNA(x))) } else { nSim <- 2^13 print(rbind(is.na = system.time(replicate(nSim, any(is.na(x)))), anyNA = system.time(replicate(nSim, anyNA(x))))) } ## anyNA() can work recursively with list()s: LL <- list(1:5, c(NA, 5:8), c("A","NA"), c("a", NA_character_)) L2 <- LL[c(1,3)] sapply(LL, anyNA); c(anyNA(LL), anyNA(LL, TRUE)) sapply(L2, anyNA); c(anyNA(L2), anyNA(L2, TRUE)) ## ... lists, and hence data frames, too: dN <- dd <- USJudgeRatings; dN[3,6] <- NA anyNA(dd) # FALSE anyNA(dN) # TRUE ``` r None `dump` Text Representations of R Objects ----------------------------------------- ### Description This function takes a vector of names of **R** objects and produces text representations of the objects on a file or connection. A `dump` file can usually be `<source>`d into another **R** session. ### Usage ``` dump(list, file = "dumpdata.R", append = FALSE, control = "all", envir = parent.frame(), evaluate = TRUE) ``` ### Arguments | | | | --- | --- | | `list` | character vector. The names of one or more **R** objects to be dumped. | | `file` | either a character string naming a file or a [connection](connections). `""` indicates output to the console. | | `append` | if `TRUE` and `file` is a character string, output will be appended to `file`; otherwise, it will overwrite the contents of `file`. | | `control` | character vector indicating deparsing options. See `[.deparseOpts](deparseopts)` for their description. | | `envir` | the environment to search for objects. | | `evaluate` | logical. Should promises be evaluated? | ### Details If some of the objects named do not exist (in scope), they are omitted, with a warning. If `file` is a file and no objects exist then no file is created. `<source>`ing may not produce an identical copy of `dump`ed objects. A warning is issued if it is likely that problems will arise, for example when dumping exotic or complex objects (see the Note). `dump` will also warn if fewer characters were written to a file than expected, which may indicate a full or corrupt file system. A `dump` file can be `<source>`d into another **R** (or perhaps S) session, but the functions `<save>` and `[saveRDS](readrds)` are designed to be used for transporting **R** data, and will work with **R** objects that `dump` does not handle. For maximal reproducibility use `control = c("all", "hexNumeric")`. To produce a more readable representation of an object, use `control = NULL`. This will skip attributes, and will make other simplifications that make `source` less likely to produce an identical copy. 
See `<deparse>` for details. To deparse the internal representation of a function rather than displaying the saved source, use `control = c("keepInteger", "warnIncomplete", "keepNA")`. This will lose all formatting and comments, but may be useful in those cases where the saved source is no longer correct. Promises will normally only be encountered by users as a result of lazy-loading (when the default `evaluate = TRUE` is essential) and after the use of `[delayedAssign](delayedassign)`, when `evaluate = FALSE` might be intended. ### Value An invisible character vector containing the names of the objects which were dumped. ### Note As `dump` is defined in the base namespace, the base package will be searched *before* the global environment unless `dump` is called from the top level prompt or the `envir` argument is given explicitly. To avoid the risk of a source attribute becoming out of sync with the actual function definition, the source attribute of a function will never be dumped as an attribute. Currently environments, external pointers, weak references and objects of type `S4` are not deparsed in a way that can be `source`d. In addition, [language objects](is.language) are deparsed in a simple way whatever the value of `control`, and this includes not dumping their attributes (which will result in a warning). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[.deparseOpts](deparseopts)` for available `control` settings; `<dput>()`, `[dget](dput)()` and `<deparse>()` for related functions using identical internal deparsing functionality. `<write>`, `[write.table](../../utils/html/write.table)`, etc for “dumping” data to (text) files. `<save>` and `[saveRDS](readrds)` for a more reliable way to save **R** objects. ### Examples ``` x <- 1; y <- 1:10 fil <- tempfile(fileext=".Rdmped") dump(ls(pattern = '^[xyz]'), fil) print(.Last.value) unlink(fil) ```
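A short sketch of the `control` settings discussed in the Details above (an added illustration, using a made-up attribute to show what `control = NULL` drops):

```
x <- structure(1:3, my.attr = "kept")
dump("x", file = "", control = "all")  # writes structure(1:3, my.attr = "kept")
dump("x", file = "", control = NULL)   # simplified: writes 1:3, attribute lost
dump("pi", file = "", control = c("all", "hexNumeric"))  # bit-exact numerics
```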
r None `GammaDist` The Gamma Distribution ----------------------------------- ### Description Density, distribution function, quantile function and random generation for the Gamma distribution with parameters `shape` and `scale`. ### Usage ``` dgamma(x, shape, rate = 1, scale = 1/rate, log = FALSE) pgamma(q, shape, rate = 1, scale = 1/rate, lower.tail = TRUE, log.p = FALSE) qgamma(p, shape, rate = 1, scale = 1/rate, lower.tail = TRUE, log.p = FALSE) rgamma(n, shape, rate = 1, scale = 1/rate) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `rate` | an alternative way to specify the scale. | | `shape, scale` | shape and scale parameters. Must be positive, `scale` strictly. | | `log, log.p` | logical; if `TRUE`, probabilities/densities *p* are returned as *log(p)*. | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details If `scale` is omitted, it assumes the default value of `1`. The Gamma distribution with parameters `shape` *= a* and `scale` *= s* has density *f(x)= 1/(s^a Gamma(a)) x^(a-1) e^-(x/s)* for *x ≥ 0*, *a > 0* and *s > 0*. (Here *Gamma(a)* is the function implemented by **R**'s `[gamma](../../base/html/special)()` and defined in its help. Note that *a = 0* corresponds to the trivial distribution with all mass at point 0.) The mean and variance are *E(X) = a\*s* and *Var(X) = a\*s^2*. The cumulative hazard *H(t) = - log(1 - F(t))* is ``` -pgamma(t, ..., lower = FALSE, log = TRUE) ``` Note that for smallish values of `shape` (and moderate `scale`) a large part of the mass of the Gamma distribution is on values of *x* so near zero that they will be represented as zero in computer arithmetic. So `rgamma` may well return values which will be represented as zero. (This will also happen for very large values of `scale` since the actual generation is done for `scale = 1`.) ### Value `dgamma` gives the density, `pgamma` gives the distribution function, `qgamma` gives the quantile function, and `rgamma` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rgamma`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note The S (Becker *et al*, 1988) parametrization was via `shape` and `rate`: S had no `scale` parameter. It is an error to supply both `scale` and `rate`. `pgamma` is closely related to the incomplete gamma function. As defined by Abramowitz and Stegun 6.5.1 (and by ‘Numerical Recipes’) this is *P(a,x) = 1/Gamma(a) integral\_0^x t^(a-1) exp(-t) dt* *P(a, x)* is `pgamma(x, a)`. Other authors (for example Karl Pearson in his 1922 tables) omit the normalizing factor, defining the incomplete gamma function *γ(a,x)* as *gamma(a,x) = integral\_0^x t^(a-1) exp(-t) dt,* i.e., `pgamma(x, a) * gamma(a)`. Yet others use the ‘upper’ incomplete gamma function, *Gamma(a,x) = integral\_x^Inf t^(a-1) exp(-t) dt,* which can be computed by `pgamma(x, a, lower = FALSE) * gamma(a)`. Note however that `pgamma(x, a, ..)` currently requires *a > 0*, whereas the incomplete gamma function is also defined for negative *a*.
In that case, you can use `gamma_inc(a,x)` (for *Γ(a,x)*) from package [gsl](https://CRAN.R-project.org/package=gsl). See also <https://en.wikipedia.org/wiki/Incomplete_gamma_function>, or <https://dlmf.nist.gov/8.2#i>. ### Source `dgamma` is computed via the Poisson density, using code contributed by Catherine Loader (see `[dbinom](family)`). `pgamma` uses an unpublished (and not otherwise documented) algorithm ‘mainly by Morten Welinder’. `qgamma` is based on a C translation of Best, D. J. and D. E. Roberts (1975). Algorithm AS91. Percentage points of the chi-squared distribution. *Applied Statistics*, **24**, 385–388, plus a final Newton step to improve the approximation. `rgamma` for `shape >= 1` uses Ahrens, J. H. and Dieter, U. (1982). Generating gamma variates by a modified rejection technique. *Communications of the ACM*, **25**, 47–54, and for `0 < shape < 1` uses Ahrens, J. H. and Dieter, U. (1974). Computer methods for sampling from gamma, beta, Poisson and binomial distributions. *Computing*, **12**, 223–246. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Shea, B. L. (1988). Algorithm AS 239: Chi-squared and incomplete Gamma integral, *Applied Statistics (JRSS C)*, **37**, 466–473. doi: [10.2307/2347328](https://doi.org/10.2307/2347328). Abramowitz, M. and Stegun, I. A. (1972) *Handbook of Mathematical Functions.* New York: Dover. Chapter 6: Gamma and Related Functions. NIST Digital Library of Mathematical Functions. <https://dlmf.nist.gov/>, section 8.2. ### See Also `[gamma](../../base/html/special)` for the gamma function. [Distributions](distributions) for other standard distributions, including `[dbeta](beta)` for the Beta distribution and `[dchisq](chisquare)` for the chi-squared distribution which is a special case of the Gamma distribution. ### Examples ``` -log(dgamma(1:4, shape = 1)) p <- (1:9)/10 pgamma(qgamma(p, shape = 2), shape = 2) 1 - 1/exp(qgamma(p, shape = 1)) # even for shape = 0.001 about half the mass is on numbers # that cannot be represented accurately (and most of those as zero) pgamma(.Machine$double.xmin, 0.001) pgamma(5e-324, 0.001) # on most machines 5e-324 is the smallest # representable non-zero number table(rgamma(1e4, 0.001) == 0)/1e4 ``` r None `plot.ppr` Plot Ridge Functions for Projection Pursuit Regression Fit ---------------------------------------------------------------------- ### Description Plot the ridge functions for a projection pursuit regression (`<ppr>`) fit. ### Usage ``` ## S3 method for class 'ppr' plot(x, ask, type = "o", cex = 1/2, main = quote(bquote( "term"[.(i)]*":" ~~ hat(beta[.(i)]) == .(bet.i))), xlab = quote(bquote(bold(alpha)[.(i)]^T * bold(x))), ylab = "", ...) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object of class `"ppr"` as produced by a call to `ppr`. | | `ask` | the graphics parameter `ask`: see `[par](../../graphics/html/par)` for details. If set to `TRUE`, the user is asked before each cross-section is plotted. | | `type` | the type of line (see `[plot.default](../../graphics/html/plot.default)`) to draw. | | `cex` | plot symbol expansion factor (*relative* to `[par](../../graphics/html/par)("cex")`). | | `main, xlab, ylab` | axis annotations, see also `[title](../../graphics/html/title)`. Can be an expression (depending on `i` and `bet.i`), as by default, which will be `eval()`uated. | | `...` | further graphical parameters, passed to `[plot](../../graphics/html/plot.default)()`.
| ### Value None ### Side Effects A series of plots are drawn on the current graphical device, one for each term in the fit. ### See Also `<ppr>`, `[par](../../graphics/html/par)` ### Examples ``` require(graphics) rock1 <- within(rock, { area1 <- area/10000; peri1 <- peri/10000 }) par(mfrow = c(3,2)) # maybe: , pty = "s" rock.ppr <- ppr(log(perm) ~ area1 + peri1 + shape, data = rock1, nterms = 2, max.terms = 5) plot(rock.ppr, main = "ppr(log(perm)~ ., nterms=2, max.terms=5)") plot(update(rock.ppr, bass = 5), main = "update(..., bass = 5)") plot(update(rock.ppr, sm.method = "gcv", gcvpen = 2), main = "update(..., sm.method=\"gcv\", gcvpen=2)") ``` r None `makepredictcall` Utility Function for Safe Prediction ------------------------------------------------------- ### Description A utility to help `[model.frame.default](model.frame)` create the right matrices when predicting from models with terms like (univariate) `poly` or `ns`. ### Usage ``` makepredictcall(var, call) ``` ### Arguments | | | | --- | --- | | `var` | A variable. | | `call` | The term in the formula, as a call. | ### Details This is a generic function with methods for `poly`, `bs` and `ns`: the default method handles `scale`. If `model.frame.default` encounters such a term when creating a model frame, it modifies the `predvars` attribute of the terms supplied by replacing the term with one which will work for predicting new data. For example `makepredictcall.ns` adds arguments for the knots and intercept. To make use of this, have your model-fitting function return the `terms` attribute of the model frame, or copy the `predvars` attribute of the `terms` attribute of the model frame to your `terms` object. To extend this, make sure the term creates variables with a class, and write a suitable method for that class. ### Value A replacement for `call` for the `predvars` attribute of the terms. ### See Also `<model.frame>`, `<poly>`, `[scale](../../base/html/scale)`; `[bs](../../splines/html/bs)` and `[ns](../../splines/html/ns)` in package splines. `[cars](../../datasets/html/cars)` for an example of prediction from a polynomial fit. ### Examples ``` require(graphics) ## using poly: this did not work in R < 1.5.0 fm <- lm(weight ~ poly(height, 2), data = women) plot(women, xlab = "Height (in)", ylab = "Weight (lb)") ht <- seq(57, 73, length.out = 200) nD <- data.frame(height = ht) pfm <- predict(fm, nD) lines(ht, pfm) pf2 <- predict(update(fm, ~ stats::poly(height, 2)), nD) stopifnot(all.equal(pfm, pf2)) ## was off (rel.diff. 0.0766) in R <= 3.5.0 ## see also example(cars) ## see bs and ns for spline examples. ``` r None `SSasympOrig` Self-Starting Nls Asymptotic Regression Model through the Origin ------------------------------------------------------------------------------- ### Description This `[selfStart](selfstart)` model evaluates the asymptotic regression function through the origin and its gradient. It has an `initial` attribute that will evaluate initial estimates of the parameters `Asym` and `lrc` for a given set of data. ### Usage ``` SSasympOrig(input, Asym, lrc) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `Asym` | a numeric parameter representing the horizontal asymptote. | | `lrc` | a numeric parameter representing the natural logarithm of the rate constant. | ### Value a numeric vector of the same length as `input`. It is the value of the expression `Asym*(1 - exp(-exp(lrc)*input))`. 
If all of the arguments `Asym` and `lrc` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` Lob.329 <- Loblolly[ Loblolly$Seed == "329", ] SSasympOrig(Lob.329$age, 100, -3.2) # response only local({ Asym <- 100; lrc <- -3.2 SSasympOrig(Lob.329$age, Asym, lrc) # response and gradient }) getInitial(height ~ SSasympOrig(age, Asym, lrc), data = Lob.329) ## Initial values are in fact the converged values fm1 <- nls(height ~ SSasympOrig(age, Asym, lrc), data = Lob.329) summary(fm1) ## Visualize the SSasympOrig() model parametrization : xx <- seq(0, 5, length.out = 101) yy <- 5 * (1- exp(-xx * log(2))) stopifnot( all.equal(yy, SSasympOrig(xx, Asym = 5, lrc = log(log(2)))) ) require(graphics) op <- par(mar = c(0, 0, 3.5, 0)) plot(xx, yy, type = "l", axes = FALSE, ylim = c(0,5), xlim = c(-1/4, 5), xlab = "", ylab = "", lwd = 2, main = quote("Parameters in the SSasympOrig model"~~ f[phi](x))) mtext(quote(list(phi[1] == "Asym", phi[2] == "lrc"))) usr <- par("usr") arrows(usr[1], 0, usr[2], 0, length = 0.1, angle = 25) arrows(0, usr[3], 0, usr[4], length = 0.1, angle = 25) text(usr[2] - 0.2, 0.1, "x", adj = c(1, 0)) text( -0.1, usr[4], "y", adj = c(1, 1)) abline(h = 5, lty = 3) axis(2, at = 5*c(1/2,1), labels= expression(frac(phi[1],2), phi[1]), pos=0, las=1) arrows(c(.3,.7), 5/2, c(0, 1 ), 5/2, length = 0.08, angle = 25) text( 0.5, 5/2, quote(t[0.5])) text( 1 +.4, 5/2, quote({f(t[0.5]) == frac(phi[1],2)}~{} %=>% {}~~{t[0.5] == frac(log(2), e^{phi[2]})}), adj = c(0, 0.5)) par(op) ``` r None `biplot.princomp` Biplot for Principal Components -------------------------------------------------- ### Description Produces a biplot (in the strict sense) from the output of `<princomp>` or `<prcomp>` ### Usage ``` ## S3 method for class 'prcomp' biplot(x, choices = 1:2, scale = 1, pc.biplot = FALSE, ...) ## S3 method for class 'princomp' biplot(x, choices = 1:2, scale = 1, pc.biplot = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `"princomp"`. | | `choices` | length 2 vector specifying the components to plot. Only the default is a biplot in the strict sense. | | `scale` | The variables are scaled by `lambda ^ scale` and the observations are scaled by `lambda ^ (1-scale)` where `lambda` are the singular values as computed by `<princomp>`. Normally `0 <= scale <= 1`, and a warning will be issued if the specified `scale` is outside this range. | | `pc.biplot` | If true, use what Gabriel (1971) refers to as a "principal component biplot", with `lambda = 1` and observations scaled up by sqrt(n) and variables scaled down by sqrt(n). Then inner products between variables approximate covariances and distances between observations approximate Mahalanobis distance. | | `...` | optional arguments to be passed to `[biplot.default](biplot)`. | ### Details This is a method for the generic function `biplot`. There is considerable confusion over the precise definitions: those of the original paper, Gabriel (1971), are followed here. Gabriel and Odoroff (1990) use the same definitions, but their plots actually correspond to `pc.biplot = TRUE`. ### Side Effects a plot is produced on the current graphics device. ### References Gabriel, K. R. (1971). The biplot graphical display of matrices with applications to principal component analysis. *Biometrika*, **58**, 453–467. doi: [10.2307/2334381](https://doi.org/10.2307/2334381). 
Gabriel, K. R. and Odoroff, C. L. (1990). Biplots in biomedical research. *Statistics in Medicine*, **9**, 469–485. doi: [10.1002/sim.4780090502](https://doi.org/10.1002/sim.4780090502). ### See Also `<biplot>`, `<princomp>`. ### Examples ``` require(graphics) biplot(princomp(USArrests)) ``` r None `is.empty` Test if a Model's Formula is Empty ---------------------------------------------- ### Description **R**'s formula notation allows models with no intercept and no predictors. These require special handling internally. `is.empty.model()` checks whether an object describes an empty model. ### Usage ``` is.empty.model(x) ``` ### Arguments | | | | --- | --- | | `x` | A `terms` object or an object with a `terms` method. | ### Value `TRUE` if the model is empty ### See Also `<lm>`, `<glm>` ### Examples ``` y <- rnorm(20) is.empty.model(y ~ 0) is.empty.model(y ~ -1) is.empty.model(lm(y ~ 0)) ``` r None `glm.control` Auxiliary for Controlling GLM Fitting ---------------------------------------------------- ### Description Auxiliary function for `<glm>` fitting. Typically only used internally by `[glm.fit](glm)`, but may be used to construct a `control` argument to either function. ### Usage ``` glm.control(epsilon = 1e-8, maxit = 25, trace = FALSE) ``` ### Arguments | | | | --- | --- | | `epsilon` | positive convergence tolerance *ε*; the iterations converge when *|dev - dev\_{old}|/(|dev| + 0.1) < ε*. | | `maxit` | integer giving the maximal number of IWLS iterations. | | `trace` | logical indicating if output should be produced for each iteration. | ### Details The `control` argument of `<glm>` is by default passed to the `control` argument of `[glm.fit](glm)`, which uses its elements as arguments to `glm.control`: the latter provides defaults and sanity checking. If `epsilon` is small (less than *1e-10*) it is also used as the tolerance for the detection of collinearity in the least squares solution. When `trace` is true, calls to `[cat](../../base/html/cat)` produce the output for each IWLS iteration. Hence, `[options](../../base/html/options)(digits = *)` can be used to increase the precision, see the example. ### Value A list with components named as the arguments. ### References Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `[glm.fit](glm)`, the fitting procedure used by `<glm>`. ### Examples ``` ### A variation on example(glm) : ## Annette Dobson's example ... counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3,1,9) treatment <- gl(3,3) oo <- options(digits = 12) # to see more when tracing : glm.D93X <- glm(counts ~ outcome + treatment, family = poisson(), trace = TRUE, epsilon = 1e-14) options(oo) coef(glm.D93X) # the last two are closer to 0 than in ?glm's glm.D93 ``` r None `plot.isoreg` Plot Method for isoreg Objects --------------------------------------------- ### Description The `[plot](../../graphics/html/plot.default)` and `[lines](../../graphics/html/lines)` method for **R** objects of class `<isoreg>`. ### Usage ``` ## S3 method for class 'isoreg' plot(x, plot.type = c("single", "row.wise", "col.wise"), main = paste("Isotonic regression", deparse(x$call)), main2 = "Cumulative Data and Convex Minorant", xlab = "x0", ylab = "x$y", par.fit = list(col = "red", cex = 1.5, pch = 13, lwd = 1.5), mar = if (both) 0.1 + c(3.5, 2.5, 1, 1) else par("mar"), mgp = if (both) c(1.6, 0.7, 0) else par("mgp"), grid = length(x$x) < 12, ...) 
## S3 method for class 'isoreg' lines(x, col = "red", lwd = 1.5, do.points = FALSE, cex = 1.5, pch = 13, ...) ``` ### Arguments | | | | --- | --- | | `x` | an `<isoreg>` object. | | `plot.type` | character indicating which type of plot is desired. The first (default) only draws the data and the fit, whereas the others add a plot of the cumulative data and fit. Can be abbreviated. | | `main` | main title of plot, see `[title](../../graphics/html/title)`. | | `main2` | title for second (cumulative) plot. | | `xlab, ylab` | x- and y- axis annotation. | | `par.fit` | a `[list](../../base/html/list)` of arguments (for `[points](../../graphics/html/points)` and `[lines](../../graphics/html/lines)`) for drawing the fit. | | `mar, mgp` | graphical parameters, see `[par](../../graphics/html/par)`, mainly for the case of two plots. | | `grid` | logical indicating if grid lines should be drawn. If true, `[grid](../../graphics/html/grid)()` is used for the first plot, whereas vertical lines are drawn at ‘touching’ points for the cumulative plot. | | `do.points` | for `lines()`: logical indicating if the step points should be drawn as well (and as they are drawn in `plot()`). | | `col, lwd, cex, pch` | graphical arguments for `lines()`, where `cex` and `pch` are only used when `do.points` is `TRUE`. | | `...` | further arguments passed to and from methods. | ### See Also `<isoreg>` for computation of `isoreg` objects. ### Examples ``` require(graphics) utils::example(isoreg) # for the examples there plot(y3, main = "simple plot(.) + lines(<isoreg>)") lines(ir3) ## 'same' plot as above, "proving" that only ranks of 'x' are important plot(isoreg(2^(1:9), c(1,0,4,3,3,5,4,2,0)), plot.type = "row", log = "x") plot(ir3, plot.type = "row", ylab = "y3") plot(isoreg(y3 - 4), plot.type = "r", ylab = "y3 - 4") plot(ir4, plot.type = "ro", ylab = "y4", xlab = "x = 1:n") ## experiment a bit with these (C-c C-j): plot(isoreg(sample(9), y3), plot.type = "row") plot(isoreg(sample(9), y3), plot.type = "col.wise") plot(ir <- isoreg(sample(10), sample(10, replace = TRUE)), plot.type = "r") ```
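A further small sketch (an addition, reusing `y3` and `ir3` created by `utils::example(isoreg)` above) of the `do.points` argument to the `lines()` method:

```
plot(y3, main = "lines(<isoreg>, do.points = TRUE)")
lines(ir3, do.points = TRUE, pch = 4, cex = 1) # draw the step points as well
```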
r None `kmeans` K-Means Clustering ---------------------------- ### Description Perform k-means clustering on a data matrix. ### Usage ``` kmeans(x, centers, iter.max = 10, nstart = 1, algorithm = c("Hartigan-Wong", "Lloyd", "Forgy", "MacQueen"), trace=FALSE) ## S3 method for class 'kmeans' fitted(object, method = c("centers", "classes"), ...) ``` ### Arguments | | | | --- | --- | | `x` | numeric matrix of data, or an object that can be coerced to such a matrix (such as a numeric vector or a data frame with all numeric columns). | | `centers` | either the number of clusters, say *k*, or a set of initial (distinct) cluster centres. If a number, a random set of (distinct) rows in `x` is chosen as the initial centres. | | `iter.max` | the maximum number of iterations allowed. | | `nstart` | if `centers` is a number, how many random sets should be chosen? | | `algorithm` | character: may be abbreviated. Note that `"Lloyd"` and `"Forgy"` are alternative names for one algorithm. | | `object` | an **R** object of class `"kmeans"`, typically the result `ob` of `ob <- kmeans(..)`. | | `method` | character: may be abbreviated. `"centers"` causes `fitted` to return cluster centers (one for each input point) and `"classes"` causes `fitted` to return a vector of class assignments. | | `trace` | logical or integer number, currently only used in the default method (`"Hartigan-Wong"`): if positive (or true), tracing information on the progress of the algorithm is produced. Higher values may produce more tracing information. | | `...` | not used. | ### Details The data given by `x` are clustered by the *k*-means method, which aims to partition the points into *k* groups such that the sum of squares from points to the assigned cluster centres is minimized. At the minimum, all cluster centres are at the mean of their Voronoi sets (the set of data points which are nearest to the cluster centre). The algorithm of Hartigan and Wong (1979) is used by default. Note that some authors use *k*-means to refer to a specific algorithm rather than the general method: most commonly the algorithm given by MacQueen (1967) but sometimes that given by Lloyd (1957) and Forgy (1965). The Hartigan–Wong algorithm generally does a better job than either of those, but trying several random starts (`nstart`*> 1*) is often recommended. In rare cases, when some of the points (rows of `x`) are extremely close, the algorithm may not converge in the “Quick-Transfer” stage, signalling a warning (and returning `ifault = 4`). Slight rounding of the data may be advisable in that case. For ease of programmatic exploration, *k=1* is allowed, notably returning the center and `withinss`. Except for the Lloyd–Forgy method, *k* clusters will always be returned if a number is specified. If an initial matrix of centres is supplied, it is possible that no point will be closest to one or more centres, which is currently an error for the Hartigan–Wong method. ### Value `kmeans` returns an object of class `"kmeans"` which has a `print` and a `fitted` method. It is a list with at least the following components: | | | | --- | --- | | `cluster` | A vector of integers (from `1:k`) indicating the cluster to which each point is allocated. | | `centers` | A matrix of cluster centres. | | `totss` | The total sum of squares. | | `withinss` | Vector of within-cluster sum of squares, one component per cluster. | | `tot.withinss` | Total within-cluster sum of squares, i.e. `sum(withinss)`. | | `betweenss` | The between-cluster sum of squares, i.e. 
`totss-tot.withinss`. | | `size` | The number of points in each cluster. | | `iter` | The number of (outer) iterations. | | `ifault` | integer: indicator of a possible algorithm problem – for experts. | ### References Forgy, E. W. (1965). Cluster analysis of multivariate data: efficiency vs interpretability of classifications. *Biometrics*, **21**, 768–769. Hartigan, J. A. and Wong, M. A. (1979). Algorithm AS 136: A K-means clustering algorithm. *Applied Statistics*, **28**, 100–108. doi: [10.2307/2346830](https://doi.org/10.2307/2346830). Lloyd, S. P. (1957, 1982). Least squares quantization in PCM. Technical Note, Bell Laboratories. Published in 1982 in *IEEE Transactions on Information Theory*, **28**, 128–137. MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In *Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability*, eds L. M. Le Cam & J. Neyman, **1**, pp. 281–297. Berkeley, CA: University of California Press. ### Examples ``` require(graphics) # a 2-dimensional example x <- rbind(matrix(rnorm(100, sd = 0.3), ncol = 2), matrix(rnorm(100, mean = 1, sd = 0.3), ncol = 2)) colnames(x) <- c("x", "y") (cl <- kmeans(x, 2)) plot(x, col = cl$cluster) points(cl$centers, col = 1:2, pch = 8, cex = 2) # sum of squares ss <- function(x) sum(scale(x, scale = FALSE)^2) ## cluster centers "fitted" to each obs.: fitted.x <- fitted(cl); head(fitted.x) resid.x <- x - fitted(cl) ## Equalities : ---------------------------------- cbind(cl[c("betweenss", "tot.withinss", "totss")], # the same two columns c(ss(fitted.x), ss(resid.x), ss(x))) stopifnot(all.equal(cl$ totss, ss(x)), all.equal(cl$ tot.withinss, ss(resid.x)), ## these three are the same: all.equal(cl$ betweenss, ss(fitted.x)), all.equal(cl$ betweenss, cl$totss - cl$tot.withinss), ## and hence also all.equal(ss(x), ss(fitted.x) + ss(resid.x)) ) kmeans(x,1)$withinss # trivial one-cluster, (its W.SS == ss(x)) ## random starts do help here with too many clusters ## (and are often recommended anyway!): (cl <- kmeans(x, 5, nstart = 25)) plot(x, col = cl$cluster) points(cl$centers, col = 1:5, pch = 8) ``` r None `model.extract` Extract Components from a Model Frame ------------------------------------------------------ ### Description Returns the response, offset, subset, weights or other special components of a model frame passed as optional arguments to `<model.frame>`. ### Usage ``` model.extract(frame, component) model.offset(x) model.response(data, type = "any") model.weights(x) ``` ### Arguments | | | | --- | --- | | `frame, x, data` | A model frame. | | `component` | literal character string or name. The name of a component to extract, such as `"weights"`, `"subset"`. | | `type` | One of `"any"`, `"numeric"`, `"double"`. Using either of the latter two coerces the result to have storage mode `"double"`. | ### Details `model.extract` is provided for compatibility with S, which does not have the more specific functions. It is also useful to extract e.g. the `etastart` and `mustart` components of a `<glm>` fit. `model.offset` and `model.response` are equivalent to `model.extract(, "offset")` and `model.extract(, "response")` respectively. `model.offset` sums any terms specified by `<offset>` terms in the formula or by `offset` arguments in the call producing the model frame: it does check that the offset is numeric. `model.weights` is slightly different from `model.extract(, "weights")` in not naming the vector it returns.
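The summing of several `offset()` terms can be checked directly; a minimal sketch with made-up data (not from the original page):

```
d <- data.frame(y = rnorm(5), x = 1:5, o1 = rep(1, 5), o2 = rep(2, 5))
mf <- model.frame(y ~ x + offset(o1) + offset(o2), data = d)
stopifnot(all.equal(model.offset(mf), d$o1 + d$o2)) # offsets are summed
```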
### Value The specified component of the model frame, usually a vector. ### See Also `<model.frame>`, `<offset>` ### Examples ``` a <- model.frame(cbind(ncases,ncontrols) ~ agegp + tobgp + alcgp, data = esoph) model.extract(a, "response") stopifnot(model.extract(a, "response") == model.response(a)) a <- model.frame(ncases/(ncases+ncontrols) ~ agegp + tobgp + alcgp, data = esoph, weights = ncases+ncontrols) model.response(a) model.extract(a, "weights") a <- model.frame(cbind(ncases,ncontrols) ~ agegp, something = tobgp, data = esoph) names(a) stopifnot(model.extract(a, "something") == esoph$tobgp) ``` r None `lowess` Scatter Plot Smoothing -------------------------------- ### Description This function performs the computations for the *LOWESS* smoother which uses locally-weighted polynomial regression (see the references). ### Usage ``` lowess(x, y = NULL, f = 2/3, iter = 3, delta = 0.01 * diff(range(x))) ``` ### Arguments | | | | --- | --- | | `x, y` | vectors giving the coordinates of the points in the scatter plot. Alternatively a single plotting structure can be specified – see `[xy.coords](../../grdevices/html/xy.coords)`. | | `f` | the smoother span. This gives the proportion of points in the plot which influence the smooth at each value. Larger values give more smoothness. | | `iter` | the number of ‘robustifying’ iterations which should be performed. Using smaller values of `iter` will make `lowess` run faster. | | `delta` | See ‘Details’. Defaults to 1/100th of the range of `x`. | ### Details `lowess` is defined by a complex algorithm, the Ratfor original of which (by W. S. Cleveland) can be found in the **R** sources as file ‘src/library/stats/src/lowess.doc’. Normally a local linear polynomial fit is used, but under some circumstances (see the file) a local constant fit can be used. ‘Local’ is defined by the distance to the `floor(f*n)`th nearest neighbour, and tricubic weighting is used for `x` which fall within the neighbourhood. The initial fit is done using weighted least squares. If `iter > 0`, further weighted fits are done using the product of the weights from the proximity of the `x` values and case weights derived from the residuals at the previous iteration. Specifically, the case weight is Tukey's biweight, with cutoff 6 times the MAD of the residuals. (The current **R** implementation differs from the original in stopping iteration if the MAD is effectively zero since the algorithm is highly unstable in that case.) `delta` is used to speed up computation: instead of computing the local polynomial fit at each data point it is not computed for points within `delta` of the last computed point, and linear interpolation is used to fill in the fitted values for the skipped points. ### Value `lowess` returns a list containing components `x` and `y` which give the coordinates of the smooth. The smooth can be added to a plot of the original points with the function `lines`: see the examples. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Cleveland, W. S. (1979). Robust locally weighted regression and smoothing scatterplots. *Journal of the American Statistical Association*, **74**, 829–836. doi: [10.1080/01621459.1979.10481038](https://doi.org/10.1080/01621459.1979.10481038). Cleveland, W. S. (1981) LOWESS: A program for smoothing scatterplots by robust locally weighted regression. *The American Statistician*, **35**, 54. doi: [10.2307/2683591](https://doi.org/10.2307/2683591). 
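The `delta` shortcut described in the Details can be checked directly; a minimal sketch using the `cars` data, where `delta = 0` forces the local fit to be computed at every point:

```
l0 <- lowess(cars$speed, cars$dist, delta = 0) # exact: fit at every point
l1 <- lowess(cars$speed, cars$dist)            # default delta, with interpolation
max(abs(l0$y - l1$y)) # discrepancy due to interpolation, typically tiny
```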
### See Also `<loess>`, a newer formula based version of `lowess` (with different defaults!). ### Examples ``` require(graphics) plot(cars, main = "lowess(cars)") lines(lowess(cars), col = 2) lines(lowess(cars, f = .2), col = 3) legend(5, 120, c(paste("f = ", c("2/3", ".2"))), lty = 1, col = 2:3) ``` r None `supsmu` Friedman's SuperSmoother ---------------------------------- ### Description Smooth the (x, y) values by Friedman's ‘super smoother’. ### Usage ``` supsmu(x, y, wt =, span = "cv", periodic = FALSE, bass = 0, trace = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | x values for smoothing | | `y` | y values for smoothing | | `wt` | case weights, by default all equal | | `span` | the fraction of the observations in the span of the running lines smoother, or `"cv"` to choose this by leave-one-out cross-validation. | | `periodic` | if `TRUE`, the x values are assumed to be in `[0, 1]` and of period 1. | | `bass` | controls the smoothness of the fitted curve. Values of up to 10 indicate increasing smoothness. | | `trace` | logical, if true, prints one line of info “per spar”, notably useful for `"cv"`. | ### Details `supsmu` is a running lines smoother which chooses between three spans for the lines. The running lines smoothers are symmetric, with `k/2` data points each side of the predicted point, and values of `k` as `0.5 * n`, `0.2 * n` and `0.05 * n`, where `n` is the number of data points. If `span` is specified, a single smoother with span `span * n` is used. The best of the three smoothers is chosen by cross-validation for each prediction. The best spans are then smoothed by a running lines smoother and the final prediction chosen by linear interpolation. The FORTRAN code says: “For small samples (`n < 40`) or if there are substantial serial correlations between observations close in x-value, then a pre-specified fixed span smoother (`span > 0`) should be used. Reasonable span values are 0.2 to 0.4.” Cases with non-finite values of `x`, `y` or `wt` are dropped, with a warning. ### Value A list with components | | | | --- | --- | | `x` | the input values in increasing order with duplicates removed. | | `y` | the corresponding y values on the fitted curve. | ### References Friedman, J. H. (1984) SMART User's Guide. Laboratory for Computational Statistics, Stanford University Technical Report No. 1. Friedman, J. H. (1984) A variable span scatterplot smoother. Laboratory for Computational Statistics, Stanford University Technical Report No. 5. ### See Also `<ppr>` ### Examples ``` require(graphics) with(cars, { plot(speed, dist) lines(supsmu(speed, dist)) lines(supsmu(speed, dist, bass = 7), lty = 2) }) ``` r None `plot.profile.nls` Plot a profile.nls Object --------------------------------------------- ### Description Displays a series of plots of the profile t function and interpolated confidence intervals for the parameters in a nonlinear regression model that has been fit with `nls` and profiled with `profile.nls`. ### Usage ``` ## S3 method for class 'profile.nls' plot(x, levels, conf = c(99, 95, 90, 80, 50)/100, absVal = TRUE, ylab = NULL, lty = 2, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `"profile.nls"` | | `levels` | levels, on the scale of the absolute value of a t statistic, at which to interpolate intervals. Usually `conf` is used instead of giving `levels` explicitly. | | `conf` | a numeric vector of confidence levels for profile-based confidence intervals on the parameters. 
Defaults to `c(0.99, 0.95, 0.90, 0.80, 0.50)`. | | `absVal` | a logical value indicating whether or not the plots should be on the scale of the absolute value of the profile t. Defaults to `TRUE`. | | `lty` | the line type to be used for axis and dropped lines. | | `ylab, ...` | other arguments to the `[plot.default](../../graphics/html/plot.default)` function can be passed here (but not `xlab`, `xlim`, `ylim` nor `type`). | ### Details The plots are produced in a set of hard-coded colours, but as these are coded by number their effect can be changed by setting the `[palette](../../grdevices/html/palette)`. Colour 1 is used for the axes and 4 for the profile itself. Colours 3 and 6 are used for the axis line at zero and the horizontal/vertical lines dropping to the axes. ### Author(s) Douglas M. Bates and Saikat DebRoy ### References Bates, D.M. and Watts, D.G. (1988), *Nonlinear Regression Analysis and Its Applications*, Wiley (chapter 6) ### See Also `<nls>`, `<profile>`, `<profile.nls>` ### Examples ``` require(graphics) # obtain the fitted object fm1 <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD) # get the profile for the fitted model pr1 <- profile(fm1, alphamax = 0.05) opar <- par(mfrow = c(2,2), oma = c(1.1, 0, 1.1, 0), las = 1) plot(pr1, conf = c(95, 90, 80, 50)/100) plot(pr1, conf = c(95, 90, 80, 50)/100, absVal = FALSE) mtext("Confidence intervals based on the profile sum of squares", side = 3, outer = TRUE) mtext("BOD data - confidence levels of 50%, 80%, 90% and 95%", side = 1, outer = TRUE) par(opar) ``` r None `plot.acf` Plot Autocovariance and Autocorrelation Functions ------------------------------------------------------------- ### Description Plot method for objects of class `"acf"`. ### Usage ``` ## S3 method for class 'acf' plot(x, ci = 0.95, type = "h", xlab = "Lag", ylab = NULL, ylim = NULL, main = NULL, ci.col = "blue", ci.type = c("white", "ma"), max.mfrow = 6, ask = Npgs > 1 && dev.interactive(), mar = if(nser > 2) c(3,2,2,0.8) else par("mar"), oma = if(nser > 2) c(1,1.2,1,1) else par("oma"), mgp = if(nser > 2) c(1.5,0.6,0) else par("mgp"), xpd = par("xpd"), cex.main = if(nser > 2) 1 else par("cex.main"), verbose = getOption("verbose"), ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `"acf"`. | | `ci` | coverage probability for confidence interval. Plotting of the confidence interval is suppressed if `ci` is zero or negative. | | `type` | the type of plot to be drawn, defaulting to histogram-like vertical lines. | | `xlab` | the x label of the plot. | | `ylab` | the y label of the plot. | | `ylim` | numeric of length 2 giving the y limits for the plot. | | `main` | overall title for the plot. | | `ci.col` | colour to plot the confidence interval lines. | | `ci.type` | should the confidence limits assume a white noise input or for lag *k* an MA(*k-1*) input? Can be abbreviated. | | `max.mfrow` | positive integer; for multivariate `x` indicating how many rows and columns of plots should be put on one page, using `[par](../../graphics/html/par)(mfrow = c(m,m))`. | | `ask` | logical; if `TRUE`, the user is asked before a new page is started. | | `mar, oma, mgp, xpd, cex.main` | graphics parameters as in `[par](../../graphics/html/par)(*)`, by default adjusted to use smaller than default margins for multivariate `x` only. | | `verbose` | logical. Should **R** report extra information on progress? | | `...` | graphics parameters to be passed to the plotting routines.
| ### Note The confidence interval plotted in `plot.acf` is based on an *uncorrelated* series and should be treated with appropriate caution. Using `ci.type = "ma"` may be less misleading. ### See Also `<acf>` which calls `plot.acf` by default. ### Examples ``` require(graphics) z4 <- ts(matrix(rnorm(400), 100, 4), start = c(1961, 1), frequency = 12) z7 <- ts(matrix(rnorm(700), 100, 7), start = c(1961, 1), frequency = 12) acf(z4) acf(z7, max.mfrow = 7) # squeeze onto 1 page acf(z7) # multi-page ``` r None `prop.test` Test of Equal or Given Proportions ----------------------------------------------- ### Description `prop.test` can be used for testing the null that the proportions (probabilities of success) in several groups are the same, or that they equal certain given values. ### Usage ``` prop.test(x, n, p = NULL, alternative = c("two.sided", "less", "greater"), conf.level = 0.95, correct = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | a vector of counts of successes, a one-dimensional table with two entries, or a two-dimensional table (or matrix) with 2 columns, giving the counts of successes and failures, respectively. | | `n` | a vector of counts of trials; ignored if `x` is a matrix or a table. | | `p` | a vector of probabilities of success. The length of `p` must be the same as the number of groups specified by `x`, and its elements must be greater than 0 and less than 1. | | `alternative` | a character string specifying the alternative hypothesis, must be one of `"two.sided"` (default), `"greater"` or `"less"`. You can specify just the initial letter. Only used for testing the null that a single proportion equals a given value, or that two proportions are equal; ignored otherwise. | | `conf.level` | confidence level of the returned confidence interval. Must be a single number between 0 and 1. Only used when testing the null that a single proportion equals a given value, or that two proportions are equal; ignored otherwise. | | `correct` | a logical indicating whether Yates' continuity correction should be applied where possible. | ### Details Only groups with finite numbers of successes and failures are used. Counts of successes and failures must be nonnegative and hence not greater than the corresponding numbers of trials, which must be positive. All finite counts should be integers. If `p` is `NULL` and there is more than one group, the null tested is that the proportions in each group are the same. If there are two groups, the alternatives are that the probability of success in the first group is less than, not equal to, or greater than the probability of success in the second group, as specified by `alternative`. A confidence interval for the difference of proportions with confidence level as specified by `conf.level` and clipped to *[-1,1]* is returned. Continuity correction is used only if it does not exceed the difference of the sample proportions in absolute value. Otherwise, if there are more than 2 groups, the alternative is always `"two.sided"`, the returned confidence interval is `NULL`, and continuity correction is never used. If there is only one group, then the null tested is that the underlying probability of success is `p`, or .5 if `p` is not given. The alternative is that the probability of success is less than, not equal to, or greater than `p` or 0.5, respectively, as specified by `alternative`. A confidence interval for the underlying proportion with confidence level as specified by `conf.level` and clipped to *[0,1]* is returned.
Continuity correction is used only if it does not exceed the difference between sample and null proportions in absolute value. The confidence interval is computed by inverting the score test. Finally, if `p` is given and there are more than 2 groups, the null tested is that the underlying probabilities of success are those given by `p`. The alternative is always `"two.sided"`, the returned confidence interval is `NULL`, and continuity correction is never used. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of Pearson's chi-squared test statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic. | | `p.value` | the p-value of the test. | | `estimate` | a vector with the sample proportions `x/n`. | | `conf.int` | a confidence interval for the true proportion if there is one group, or for the difference in proportions if there are 2 groups and `p` is not given, or `NULL` otherwise. In the cases where it is not `NULL`, the returned confidence interval has an asymptotic confidence level as specified by `conf.level`, and is appropriate to the specified alternative hypothesis. | | `null.value` | the value of `p` if specified by the null, or `NULL` otherwise. | | `alternative` | a character string describing the alternative. | | `method` | a character string indicating the method used, and whether Yates' continuity correction was applied. | | `data.name` | a character string giving the names of the data. | ### References Wilson, E.B. (1927). Probable inference, the law of succession, and statistical inference. *Journal of the American Statistical Association*, **22**, 209–212. doi: [10.2307/2276774](https://doi.org/10.2307/2276774). Newcombe R.G. (1998). Two-Sided Confidence Intervals for the Single Proportion: Comparison of Seven Methods. *Statistics in Medicine*, **17**, 857–872. doi: [10.1002/(SICI)1097-0258(19980430)17:8<857::AID-SIM777>3.0.CO;2-E](https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8%3C857::AID-SIM777%3E3.0.CO;2-E). Newcombe R.G. (1998). Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods. *Statistics in Medicine*, **17**, 873–890. doi: [10.1002/(SICI)1097-0258(19980430)17:8<873::AID-SIM779>3.0.CO;2-I](https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8%3C873::AID-SIM779%3E3.0.CO;2-I). ### See Also `<binom.test>` for an *exact* test of a binomial hypothesis. ### Examples ``` heads <- rbinom(1, size = 100, prob = .5) prop.test(heads, 100) # continuity correction TRUE by default prop.test(heads, 100, correct = FALSE) ## Data from Fleiss (1981), p. 139. ## H0: The null hypothesis is that the four populations from which ## the patients were drawn have the same true proportion of smokers. ## A: The alternative is that this proportion is different in at ## least one of the populations. smokers <- c( 83, 90, 129, 70 ) patients <- c( 86, 93, 136, 82 ) prop.test(smokers, patients) ```
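As noted in the Details, the one-sample confidence interval is obtained by inverting the score test; a small sketch (the Wilson interval written out by hand, as an added check against `prop.test()` with `correct = FALSE`):

```
x <- 83; n <- 86; p.hat <- x/n
z <- qnorm(0.975) # 95% two-sided
wilson <- (p.hat + z^2/(2*n) + c(-1, 1) * z *
           sqrt(p.hat*(1 - p.hat)/n + z^2/(4*n^2))) / (1 + z^2/n)
stopifnot(all.equal(wilson, as.vector(prop.test(x, n, correct = FALSE)$conf.int)))
```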
r None `profile` Generic Function for Profiling Models ------------------------------------------------ ### Description Investigates the behavior of the objective function near the solution represented by `fitted`. See documentation on method functions for further details. ### Usage ``` profile(fitted, ...) ``` ### Arguments | | | | --- | --- | | `fitted` | the original fitted model object. | | `...` | additional parameters. See documentation on individual methods. | ### Value A list with an element for each parameter being profiled. See the individual methods for further details. ### See Also `<profile.nls>`, `[profile.glm](../../mass/html/confint)` in package [MASS](https://CRAN.R-project.org/package=MASS), ... For profiling R code, see `[Rprof](../../utils/html/rprof)`. r None `SSfpl` Self-Starting Nls Four-Parameter Logistic Model -------------------------------------------------------- ### Description This `selfStart` model evaluates the four-parameter logistic function and its gradient. It has an `initial` attribute computing initial estimates of the parameters `A`, `B`, `xmid`, and `scal` for a given set of data. ### Usage ``` SSfpl(input, A, B, xmid, scal) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `A` | a numeric parameter representing the horizontal asymptote on the left side (very small values of `input`). | | `B` | a numeric parameter representing the horizontal asymptote on the right side (very large values of `input`). | | `xmid` | a numeric parameter representing the `input` value at the inflection point of the curve. The value of `SSfpl` will be midway between `A` and `B` at `xmid`. | | `scal` | a numeric scale parameter on the `input` axis. | ### Value a numeric vector of the same length as `input`. It is the value of the expression `A+(B-A)/(1+exp((xmid-input)/scal))`. If all of the arguments `A`, `B`, `xmid`, and `scal` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`.
### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` Chick.1 <- ChickWeight[ChickWeight$Chick == 1, ] SSfpl(Chick.1$Time, 13, 368, 14, 6) # response only local({ A <- 13; B <- 368; xmid <- 14; scal <- 6 SSfpl(Chick.1$Time, A, B, xmid, scal) # response _and_ gradient }) print(getInitial(weight ~ SSfpl(Time, A, B, xmid, scal), data = Chick.1), digits = 5) ## Initial values are in fact the converged values fm1 <- nls(weight ~ SSfpl(Time, A, B, xmid, scal), data = Chick.1) summary(fm1) ## Visualizing the SSfpl() parametrization xx <- seq(-0.5, 5, length.out = 101) yy <- 1 + 4 / (1 + exp((2-xx))) # == SSfpl(xx, *) : stopifnot( all.equal(yy, SSfpl(xx, A = 1, B = 5, xmid = 2, scal = 1)) ) require(graphics) op <- par(mar = c(0, 0, 3.5, 0)) plot(xx, yy, type = "l", axes = FALSE, ylim = c(0,6), xlim = c(-1, 5), xlab = "", ylab = "", lwd = 2, main = "Parameters in the SSfpl model") mtext(quote(list(phi[1] == "A", phi[2] == "B", phi[3] == "xmid", phi[4] == "scal"))) usr <- par("usr") arrows(usr[1], 0, usr[2], 0, length = 0.1, angle = 25) arrows(0, usr[3], 0, usr[4], length = 0.1, angle = 25) text(usr[2] - 0.2, 0.1, "x", adj = c(1, 0)) text( -0.1, usr[4], "y", adj = c(1, 1)) abline(h = c(1, 5), lty = 3) arrows(-0.8, c(2.1, 2.9), -0.8, c(0, 5 ), length = 0.1, angle = 25) text (-0.8, 2.5, quote(phi[1])) arrows(-0.3, c(1/4, 3/4), -0.3, c(0, 1 ), length = 0.07, angle = 25) text (-0.3, 0.5, quote(phi[2])) text(2, -.1, quote(phi[3])) segments(c(2,3,3), c(0,3,4), # SSfpl(x = xmid = 2) = 3 c(2,3,2), c(3,4,3), lty = 2, lwd = 0.75) arrows(c(2.3, 2.7), 3, c(2.0, 3 ), 3, length = 0.08, angle = 25) text( 2.5, 3, quote(phi[4])); text(3.1, 3.5, "1") par(op) ``` r None `kernapply` Apply Smoothing Kernel ----------------------------------- ### Description `kernapply` computes the convolution between an input sequence and a specific kernel. ### Usage ``` kernapply(x, ...) ## Default S3 method: kernapply(x, k, circular = FALSE, ...) ## S3 method for class 'ts' kernapply(x, k, circular = FALSE, ...) ## S3 method for class 'vector' kernapply(x, k, circular = FALSE, ...) ## S3 method for class 'tskernel' kernapply(x, k, ...) ``` ### Arguments | | | | --- | --- | | `x` | an input vector, matrix, time series or kernel to be smoothed. | | `k` | smoothing `"tskernel"` object. | | `circular` | a logical indicating whether the input sequence to be smoothed is treated as circular, i.e., periodic. | | `...` | arguments passed to or from other methods. | ### Value A smoothed version of the input sequence. ### Note This uses `<fft>` to perform the convolution, so is fastest when `NROW(x)` is a power of 2 or some other highly composite integer. ### Author(s) A. Trapletti ### See Also `<kernel>`, `<convolve>`, `<filter>`, `<spectrum>` ### Examples ``` ## see 'kernel' for examples ``` r None `cor` Correlation, Variance and Covariance (Matrices) ------------------------------------------------------ ### Description `var`, `cov` and `cor` compute the variance of `x` and the covariance or correlation of `x` and `y` if these are vectors. If `x` and `y` are matrices then the covariances (or correlations) between the columns of `x` and the columns of `y` are computed. `cov2cor` scales a covariance matrix into the corresponding correlation matrix *efficiently*. 
### Usage ``` var(x, y = NULL, na.rm = FALSE, use) cov(x, y = NULL, use = "everything", method = c("pearson", "kendall", "spearman")) cor(x, y = NULL, use = "everything", method = c("pearson", "kendall", "spearman")) cov2cor(V) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector, matrix or data frame. | | `y` | `NULL` (default) or a vector, matrix or data frame with compatible dimensions to `x`. The default is equivalent to `y = x` (but more efficient). | | `na.rm` | logical. Should missing values be removed? | | `use` | an optional character string giving a method for computing covariances in the presence of missing values. This must be (an abbreviation of) one of the strings `"everything"`, `"all.obs"`, `"complete.obs"`, `"na.or.complete"`, or `"pairwise.complete.obs"`. | | `method` | a character string indicating which correlation coefficient (or covariance) is to be computed. One of `"pearson"` (default), `"kendall"`, or `"spearman"`: can be abbreviated. | | `V` | symmetric numeric matrix, usually positive definite such as a covariance matrix. | ### Details For `cov` and `cor` one must *either* give a matrix or data frame for `x` *or* give both `x` and `y`. The inputs must be numeric (as determined by `[is.numeric](../../base/html/numeric)`: logical values are also allowed for historical compatibility): the `"kendall"` and `"spearman"` methods make sense for ordered inputs but `[xtfrm](../../base/html/xtfrm)` can be used to find a suitable prior transformation to numbers. `var` is just another interface to `cov`, where `na.rm` is used to determine the default for `use` when that is unspecified. If `na.rm` is `TRUE` then the complete observations (rows) are used (`use = "na.or.complete"`) to compute the variance. Otherwise, by default `use = "everything"`. If `use` is `"everything"`, `[NA](../../base/html/na)`s will propagate conceptually, i.e., a resulting value will be `NA` whenever one of its contributing observations is `NA`. If `use` is `"all.obs"`, then the presence of missing observations will produce an error. If `use` is `"complete.obs"` then missing values are handled by casewise deletion (and if there are no complete cases, that gives an error). `"na.or.complete"` is the same except that it gives `NA` when there are no complete cases. Finally, if `use` has the value `"pairwise.complete.obs"` then the correlation or covariance between each pair of variables is computed using all complete pairs of observations on those variables. This can result in covariance or correlation matrices which are not positive semi-definite, as well as `NA` entries if there are no complete pairs for that pair of variables. For `cov` and `var`, `"pairwise.complete.obs"` only works with the `"pearson"` method. Note that (the equivalent of) `var(double(0), use = *)` gives `NA` for `use = "everything"` and `"na.or.complete"`, and gives an error in the other cases. The denominator *n - 1* is used, which gives an unbiased estimator of the (co)variance for i.i.d. observations. These functions return `[NA](../../base/html/na)` when there is only one observation (whereas S-PLUS has been returning `NaN`). For `cor()`, if `method` is `"kendall"` or `"spearman"`, Kendall's *tau* or Spearman's *rho* statistic is used to estimate a rank-based measure of association. These are more robust and have been recommended if the data do not necessarily come from a bivariate normal distribution. For `cov()`, a non-Pearson method is unusual but available for the sake of completeness.
Note that `"spearman"` basically computes `cor(R(x), R(y))` (or `cov(., .)`) where `R(u) := rank(u, na.last = "keep")`. In the case of missing values, the ranks are calculated depending on the value of `use`, either based on complete observations, or based on pairwise completeness with reranking for each pair. When there are ties, Kendall's *tau\_b* is computed, as proposed by Kendall (1945). Scaling a covariance matrix into a correlation one can be achieved in many ways, mathematically most appealing by multiplication with a diagonal matrix from left and right, or more efficiently by using `[sweep](../../base/html/sweep)(.., FUN = "/")` twice. The `cov2cor` function is even a bit more efficient, and provided mostly for didactical reasons. ### Value For `r <- cor(*, use = "all.obs")`, it is now guaranteed that `all(abs(r) <= 1)`. ### Note Some people have noted that the code for Kendall's tau is slow for very large datasets (many more than 1000 cases). It rarely makes sense to do such a computation, but see function `[cor.fk](../../pcapp/html/cor.fk)` in package [pcaPP](https://CRAN.R-project.org/package=pcaPP). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Kendall, M. G. (1938). A new measure of rank correlation, *Biometrika*, **30**, 81–93. doi: [10.1093/biomet/30.1-2.81](https://doi.org/10.1093/biomet/30.1-2.81). Kendall, M. G. (1945). The treatment of ties in rank problems. *Biometrika*, **33** 239–251. doi: [10.1093/biomet/33.3.239](https://doi.org/10.1093/biomet/33.3.239) ### See Also `<cor.test>` for confidence intervals (and tests). `<cov.wt>` for *weighted* covariance computation. `<sd>` for standard deviation (vectors). ### Examples ``` var(1:10) # 9.166667 var(1:5, 1:5) # 2.5 ## Two simple vectors cor(1:10, 2:11) # == 1 ## Correlation Matrix of Multivariate sample: (Cl <- cor(longley)) ## Graphical Correlation Matrix: symnum(Cl) # highly correlated ## Spearman's rho and Kendall's tau symnum(clS <- cor(longley, method = "spearman")) symnum(clK <- cor(longley, method = "kendall")) ## How much do they differ? i <- lower.tri(Cl) cor(cbind(P = Cl[i], S = clS[i], K = clK[i])) ## cov2cor() scales a covariance matrix by its diagonal ## to become the correlation matrix. cov2cor # see the function definition {and learn ..} stopifnot(all.equal(Cl, cov2cor(cov(longley))), all.equal(cor(longley, method = "kendall"), cov2cor(cov(longley, method = "kendall")))) ##--- Missing value treatment: C1 <- cov(swiss) range(eigen(C1, only.values = TRUE)$values) # 6.19 1921 ## swM := "swiss" with 3 "missing"s : swM <- swiss colnames(swM) <- abbreviate(colnames(swiss), minlength=6) swM[1,2] <- swM[7,3] <- swM[25,5] <- NA # create 3 "missing" ## Consider all 5 "use" cases : (C. <- cov(swM)) # use="everything" quite a few NA's in cov.matrix try(cov(swM, use = "all")) # Error: missing obs... C2 <- cov(swM, use = "complete") stopifnot(identical(C2, cov(swM, use = "na.or.complete"))) range(eigen(C2, only.values = TRUE)$values) # 6.46 1930 C3 <- cov(swM, use = "pairwise") range(eigen(C3, only.values = TRUE)$values) # 6.19 1938 ## Kendall's tau doesn't change much: symnum(Rc <- cor(swM, method = "kendall", use = "complete")) symnum(Rp <- cor(swM, method = "kendall", use = "pairwise")) symnum(R. 
<- cor(swiss, method = "kendall")) ## "pairwise" is closer componentwise, summary(abs(c(1 - Rp/R.))) summary(abs(c(1 - Rc/R.))) ## but "complete" is closer in Eigen space: EV <- function(m) eigen(m, only.values=TRUE)$values summary(abs(1 - EV(Rp)/EV(R.)) / abs(1 - EV(Rc)/EV(R.))) ``` r None `nls.control` Control the Iterations in nls -------------------------------------------- ### Description Allow the user to set some characteristics of the `<nls>` nonlinear least squares algorithm. ### Usage ``` nls.control(maxiter = 50, tol = 1e-05, minFactor = 1/1024, printEval = FALSE, warnOnly = FALSE, scaleOffset = 0, nDcentral = FALSE) ``` ### Arguments | | | | --- | --- | | `maxiter` | A positive integer specifying the maximum number of iterations allowed. | | `tol` | A positive numeric value specifying the tolerance level for the relative offset convergence criterion. | | `minFactor` | A positive numeric value specifying the minimum step-size factor allowed on any step in the iteration. The increment is calculated with a Gauss-Newton algorithm and successively halved until the residual sum of squares has been decreased or until the step-size factor has been reduced below this limit. | | `printEval` | a logical specifying whether the number of evaluations (steps in the gradient direction taken each iteration) is printed. | | `warnOnly` | a logical specifying whether `<nls>()` should return instead of signalling an error in the case of termination before convergence. Termination before convergence happens upon completion of `maxiter` iterations, in the case of a singular gradient, and in the case that the step-size factor is reduced below `minFactor`. | | `scaleOffset` | a constant to be added to the denominator of the relative offset convergence criterion calculation to avoid a zero divide in the case where the fit of a model to data is very close. The default value of `0` keeps the legacy behaviour of `nls()`. A value such as `1` seems to work for problems of reasonable scale with very small residuals. | | `nDcentral` | only when *numerical* derivatives are used: `[logical](../../base/html/logical)` indicating if *central* differences should be employed, i.e., `[numericDeriv](numericderiv)(*, central=TRUE)` be used. | ### Value A `[list](../../base/html/list)` with components | | | | --- | --- | | `maxiter` | | | `tol` | | | `minFactor` | | | `printEval` | | | `warnOnly` | | | `scaleOffset` | | | `nDcentral` | | with meanings as explained under ‘Arguments’. ### Author(s) Douglas Bates and Saikat DebRoy; John C. Nash for part of the `scaleOffset` option. ### References Bates, D. M. and Watts, D. G. (1988), *Nonlinear Regression Analysis and Its Applications*, Wiley. ### See Also `<nls>` ### Examples ``` nls.control(minFactor = 1/2048) ``` r None `summary.princomp` Summary method for Principal Components Analysis -------------------------------------------------------------------- ### Description The `[summary](../../base/html/summary)` method for class `"princomp"`. ### Usage ``` ## S3 method for class 'princomp' summary(object, loadings = FALSE, cutoff = 0.1, ...) ## S3 method for class 'summary.princomp' print(x, digits = 3, loadings = x$print.loadings, cutoff = x$cutoff, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object of class `"princomp"`, as from `princomp()`. | | `loadings` | logical. Should loadings be included? | | `cutoff` | numeric. Loadings below this cutoff in absolute value are shown as blank in the output. | | `x` | an object of class `"summary.princomp"`.
| | `digits` | the number of significant digits to be used in listing loadings. | | `...` | arguments to be passed to or from other methods. | ### Value `object` with additional components `cutoff` and `print.loadings`. ### See Also `<princomp>` ### Examples ``` summary(pc.cr <- princomp(USArrests, cor = TRUE)) ## The signs of the loading columns are arbitrary print(summary(princomp(USArrests, cor = TRUE), loadings = TRUE, cutoff = 0.2), digits = 2) ``` r None `t.test` Student's t-Test -------------------------- ### Description Performs one and two sample t-tests on vectors of data. ### Usage ``` t.test(x, ...) ## Default S3 method: t.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, var.equal = FALSE, conf.level = 0.95, ...) ## S3 method for class 'formula' t.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x` | a (non-empty) numeric vector of data values. | | `y` | an optional (non-empty) numeric vector of data values. | | `alternative` | a character string specifying the alternative hypothesis, must be one of `"two.sided"` (default), `"greater"` or `"less"`. You can specify just the initial letter. | | `mu` | a number indicating the true value of the mean (or difference in means if you are performing a two sample test). | | `paired` | a logical indicating whether you want a paired t-test. | | `var.equal` | a logical variable indicating whether to treat the two variances as being equal. If `TRUE` then the pooled variance is used to estimate the variance otherwise the Welch (or Satterthwaite) approximation to the degrees of freedom is used. | | `conf.level` | confidence level of the interval. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` is a numeric variable giving the data values and `rhs` either `1` for a one-sample or paired test or a factor with two levels giving the corresponding groups. If `lhs` is of class `"Pair"` and `rhs` is `1`, a paired test is done | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details `alternative = "greater"` is the alternative that `x` has a larger mean than `y`. For the one-sample case: that the mean is positive. If `paired` is `TRUE` then both `x` and `y` must be specified and they must be the same length. Missing values are silently removed (in pairs if `paired` is `TRUE`). If `var.equal` is `TRUE` then the pooled estimate of the variance is used. By default, if `var.equal` is `FALSE` then the variance is estimated separately for both groups and the Welch modification to the degrees of freedom is used. If the input data are effectively constant (compared to the larger of the two means) an error is generated. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the t-statistic. | | `parameter` | the degrees of freedom for the t-statistic. | | `p.value` | the p-value for the test. | | `conf.int` | a confidence interval for the mean appropriate to the specified alternative hypothesis. 
| | `estimate` | the estimated mean or difference in means depending on whether it was a one-sample test or a two-sample test. | | `null.value` | the specified hypothesized value of the mean or mean difference depending on whether it was a one-sample test or a two-sample test. | | `stderr` | the standard error of the mean (difference), used as denominator in the t-statistic formula. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | a character string indicating what type of t-test was performed. | | `data.name` | a character string giving the name(s) of the data. | ### See Also `<prop.test>` ### Examples ``` require(graphics) t.test(1:10, y = c(7:20)) # P = .00001855 t.test(1:10, y = c(7:20, 200)) # P = .1245 -- NOT significant anymore ## Classical example: Student's sleep data plot(extra ~ group, data = sleep) ## Traditional interface with(sleep, t.test(extra[group == 1], extra[group == 2])) ## Formula interface t.test(extra ~ group, data = sleep) ## Formula interface to one-sample test t.test(extra ~ 1, data = sleep) ## Formula interface to paired test ## The sleep data are actually paired, so could have been in wide format: sleep2 <- reshape(sleep, direction = "wide", idvar = "ID", timevar = "group") t.test(Pair(extra.1, extra.2) ~ 1, data = sleep2) ```
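A further sketch of the `var.equal` distinction described under ‘Details’, with made-up data:

```
set.seed(1)
a <- rnorm(10)
b <- rnorm(20, sd = 3)
t.test(a, b)$parameter                    # Welch df, typically non-integer
t.test(a, b, var.equal = TRUE)$parameter  # pooled df = 10 + 20 - 2 = 28
```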
r None `lsfit` Find the Least Squares Fit ----------------------------------- ### Description The least squares estimate of ***b*** in the model *y = X b + e* is found. ### Usage ``` lsfit(x, y, wt = NULL, intercept = TRUE, tolerance = 1e-07, yname = NULL) ``` ### Arguments | | | | --- | --- | | `x` | a matrix whose rows correspond to cases and whose columns correspond to variables. | | `y` | the responses, possibly a matrix if you want to fit multiple left hand sides. | | `wt` | an optional vector of weights for performing weighted least squares. | | `intercept` | whether or not an intercept term should be used. | | `tolerance` | the tolerance to be used in the matrix decomposition. | | `yname` | names to be used for the response variables. | ### Details If weights are specified then a weighted least squares is performed with the weight given to the *j*th case specified by the *j*th entry in `wt`. If any observation has a missing value in any field, that observation is removed before the analysis is carried out. This can be quite inefficient if there is a lot of missing data. The implementation is via a modification of the LINPACK subroutines which allow for multiple left-hand sides. ### Value A list with the following named components: | | | | --- | --- | | `coef` | the least squares estimates of the coefficients in the model (***b*** as stated above). | | `residuals` | residuals from the fit. | | `intercept` | indicates whether an intercept was fitted. | | `qr` | the QR decomposition of the design matrix. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<lm>` which usually is preferable; `<ls.print>`, `<ls.diag>`. ### Examples ``` ##-- Using the same data as the lm(.) example: lsD9 <- lsfit(x = unclass(gl(2, 10)), y = weight) ls.print(lsD9) ``` r None `contrasts` Get and Set Contrast Matrices ------------------------------------------ ### Description Set and view the contrasts associated with a factor. ### Usage ``` contrasts(x, contrasts = TRUE, sparse = FALSE) contrasts(x, how.many) <- value ``` ### Arguments | | | | --- | --- | | `x` | a factor or a logical variable. | | `contrasts` | logical. See ‘Details’. | | `sparse` | logical indicating if the result should be sparse (of class `[dgCMatrix](../../matrix/html/dgcmatrix-class)`), using package [Matrix](https://CRAN.R-project.org/package=Matrix). | | `how.many` | How many contrasts should be made. Defaults to one less than the number of levels of `x`. This need not be the same as the number of columns of `value`. | | `value` | either a numeric matrix (or a sparse or dense matrix of a class extending `[dMatrix](../../matrix/html/dmatrix-class)` from package [Matrix](https://CRAN.R-project.org/package=Matrix)) whose columns give coefficients for contrasts in the levels of `x`, or the (quoted) name of a function which computes such matrices. | ### Details If contrasts are not set for a factor the default functions from `[options](../../base/html/options)("contrasts")` are used. A logical vector `x` is converted into a two-level factor with levels `c(FALSE, TRUE)` (regardless of which levels occur in the variable). The argument `contrasts` is ignored if `x` has a matrix `contrasts` attribute set. Otherwise if `contrasts = TRUE` it is passed to a contrasts function such as `[contr.treatment](contrast)` and if `contrasts = FALSE` an identity matrix is returned. 
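A small sketch of the behaviour just described:

```
f <- factor(c("a", "b", "c"))
contrasts(f)                     # default treatment contrasts: 3 x 2 matrix
contrasts(f, contrasts = FALSE)  # the 3 x 3 identity matrix
```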
Suitable functions have a first argument which is the character vector of levels, a named argument `contrasts` (always called with `contrasts = TRUE`) and optionally a logical argument `sparse`. If `value` supplies more than `how.many` contrasts, the first `how.many` are used. If too few are supplied, a suitable contrast matrix is created by extending `value` after ensuring its columns are contrasts (orthogonal to the constant term) and not collinear. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical models.* Chapter 2 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `[C](zc)`, `[contr.helmert](contrast)`, `[contr.poly](contrast)`, `[contr.sum](contrast)`, `[contr.treatment](contrast)`; `<glm>`, `<aov>`, `<lm>`. ### Examples ``` utils::example(factor) fff <- ff[, drop = TRUE] # reduce to 5 levels. contrasts(fff) # treatment contrasts by default contrasts(C(fff, sum)) contrasts(fff, contrasts = FALSE) # the 5x5 identity matrix contrasts(fff) <- contr.sum(5); contrasts(fff) # set sum contrasts contrasts(fff, 2) <- contr.sum(5); contrasts(fff) # set 2 contrasts # supply 2 contrasts, compute 2 more to make full set of 4. contrasts(fff) <- contr.sum(5)[, 1:2]; contrasts(fff) ## using sparse contrasts (useful once model.matrix() works with these): ffs <- fff contrasts(ffs) <- contr.sum(5, sparse = TRUE)[, 1:2]; contrasts(ffs) stopifnot(all.equal(ffs, fff)) contrasts(ffs) <- contr.sum(5, sparse = TRUE); contrasts(ffs) ``` r None `smoothEnds` End Points Smoothing (for Running Medians) -------------------------------------------------------- ### Description Smooth end points of a vector `y` using subsequently smaller medians (of odd span) and Tukey's end point rule at the very end. ### Usage ``` smoothEnds(y, k = 3) ``` ### Arguments | | | | --- | --- | | `y` | dependent variable to be smoothed (vector). | | `k` | width of largest median window; must be odd. | ### Details `smoothEnds` is used to only do the ‘end point smoothing’, i.e., change at most the observations closer to the beginning/end than half the window `k`. The first and last value are computed using *Tukey's end point rule*, i.e., `sm[1] = median(y[1], sm[2], 3*sm[2] - 2*sm[3], na.rm=TRUE)`. In **R** versions 3.6.0 and earlier, missing values (`[NA](../../base/html/na)`) in `y` typically lead to an error, whereas now the equivalent of `<median>(*, na.rm=TRUE)` is used. ### Value vector of smoothed values, the same length as `y`. ### Author(s) Martin Maechler ### References John W. Tukey (1977) *Exploratory Data Analysis*, Addison-Wesley. Velleman, P.F., and Hoaglin, D.C. (1981) *ABC of EDA (Applications, Basics, and Computing of Exploratory Data Analysis)*; Duxbury. ### See Also `<runmed>(*, endrule = "median")` which calls `smoothEnds()`. ### Examples ``` require(graphics) y <- ys <- (-20:20)^2 y[c(1,10,21,41)] <- c(100, 30, 400, 470) s7k <- runmed(y, 7, endrule = "keep") s7.
<- runmed(y, 7, endrule = "const") s7m <- runmed(y, 7) col3 <- c("midnightblue","blue","steelblue") plot(y, main = "Running Medians -- runmed(*, k=7, endrule = X)") lines(ys, col = "light gray") matlines(cbind(s7k, s7.,s7m), lwd = 1.5, lty = 1, col = col3) eRules <- c("keep","constant","median") legend("topleft", paste("endrule", eRules, sep = " = "), col = col3, lwd = 1.5, lty = 1, bty = "n") stopifnot(identical(s7m, smoothEnds(s7k, 7))) ## With missing values (for R >= 3.6.1): yN <- y; yN[c(2,40)] <- NA rN <- sapply(eRules, function(R) runmed(yN, 7, endrule=R)) matlines(rN, type = "b", pch = 4, lwd = 3, lty=2, col = adjustcolor(c("red", "orange4", "orange1"), 0.5)) yN[c(1, 20:21)] <- NA # additionally rN. <- sapply(eRules, function(R) runmed(yN, 7, endrule=R)) head(rN., 4); tail(rN.) # more NA's too, still not *so* many: stopifnot(exprs = { !anyNA(rN[,2:3]) identical(which(is.na(rN[,"keep"])), c(2L, 40L)) identical(which(is.na(rN.), arr.ind=TRUE, useNames=FALSE), cbind(c(1:2,40L), 1L)) identical(rN.[38:41, "median"], c(289,289, 397, 470)) }) ``` r None `smooth.spline` Fit a Smoothing Spline --------------------------------------- ### Description Fits a cubic smoothing spline to the supplied data. ### Usage ``` smooth.spline(x, y = NULL, w = NULL, df, spar = NULL, lambda = NULL, cv = FALSE, all.knots = FALSE, nknots = .nknots.smspl, keep.data = TRUE, df.offset = 0, penalty = 1, control.spar = list(), tol = 1e-6 * IQR(x), keep.stuff = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a vector giving the values of the predictor variable, or a list or a two-column matrix specifying x and y. | | `y` | responses. If `y` is missing or `NULL`, the responses are assumed to be specified by `x`, with `x` the index vector. | | `w` | optional vector of weights of the same length as `x`; defaults to all 1. | | `df` | the desired equivalent number of degrees of freedom (trace of the smoother matrix). Must be in *(1,nx]*, *nx* the number of unique x values, see below. | | `spar` | smoothing parameter, typically (but not necessarily) in *(0,1]*. When `spar` is specified, the coefficient *λ* of the integral of the squared second derivative in the fit (penalized log likelihood) criterion is a monotone function of `spar`, see the details below. Alternatively `lambda` may be specified instead of the *scale free* `spar`=*s*. | | `lambda` | if desired, the internal (design-dependent) smoothing parameter *λ* can be specified instead of `spar`. This may be desirable for resampling algorithms such as cross validation or the bootstrap. | | `cv` | ordinary leave-one-out (`TRUE`) or ‘generalized’ cross-validation (GCV) when `FALSE`; is used for smoothing parameter computation only when both `spar` and `df` are not specified; it is used however to determine `cv.crit` in the result. Setting it to `NA` for speedup skips the evaluation of leverages and any score. | | `all.knots` | if `TRUE`, all distinct points in `x` are used as knots. If `FALSE` (default), a subset of `x[]` is used, specifically `x[j]` where the `nknots` indices are evenly spaced in `1:n`, see also the next argument `nknots`. Alternatively, a strictly increasing `[numeric](../../base/html/numeric)` vector specifying “all the knots” to be used; must be rescaled to *[0, 1]* already such that it corresponds to the `ans $ fit$knots` sequence returned, not repeating the boundary knots. | | `nknots` | integer or `[function](../../base/html/function)` giving the number of knots to use when `all.knots = FALSE`. 
If a function (as by default), the number of knots is `nknots(nx)`. By default for *nx > 49* this is less than *nx*, the number of unique `x` values, see the Note. | | `keep.data` | logical specifying if the input data should be kept in the result. If `TRUE` (as per default), fitted values and residuals are available from the result. | | `df.offset` | allows the degrees of freedom to be increased by `df.offset` in the GCV criterion. | | `penalty` | the coefficient of the penalty for degrees of freedom in the GCV criterion. | | `control.spar` | optional list with named components controlling the root finding when the smoothing parameter `spar` is computed, i.e., missing or `NULL`, see below. **Note** that this is partly *experimental* and may change with general spar computation improvements! low: lower bound for `spar`; defaults to -1.5 (used to implicitly default to 0 in **R** versions earlier than 1.4). high: upper bound for `spar`; defaults to +1.5. tol: the absolute precision (**tol**erance) used; defaults to 1e-4 (formerly 1e-3). eps: the relative precision used; defaults to 2e-8 (formerly 0.00244). trace: logical indicating if iterations should be traced. maxit: integer giving the maximal number of iterations; defaults to 500. Note that `spar` is only searched for in the interval *[low, high]*. | | `tol` | a tolerance for same-ness or uniqueness of the `x` values. The values are binned into bins of size `tol` and values which fall into the same bin are regarded as the same. Must be strictly positive (and finite). | | `keep.stuff` | an experimental `[logical](../../base/html/logical)` indicating if the result should keep extras from the internal computations. Should allow to reconstruct the *X* matrix and more. | ### Details Neither `x` nor `y` are allowed to contain missing or infinite values. The `x` vector should contain at least four distinct values. ‘Distinct’ here is controlled by `tol`: values which are regarded as the same are replaced by the first of their values and the corresponding `y` and `w` are pooled accordingly. Unless `lambda` has been specified instead of `spar`, the computational *λ* used (as a function of `spar`) is *λ = r \* 256^(3\*spar - 1)* where *r = tr(X' W X) / tr(Σ)*, *Σ* is the matrix given by *Σ[i,j] = Integral B''[i](t) B''[j](t) dt*, *X* is given by *X[i,j] = B[j](x[i])*, *W* is the diagonal matrix of weights (scaled such that its trace is *n*, the original number of observations) and *B[k](.)* is the *k*-th B-spline. Note that with these definitions, *f\_i = f(x\_i)*, and the B-spline basis representation *f = X c* (i.e., *c* is the vector of spline coefficients), the penalized log likelihood is *L = (y - f)' W (y - f) + λ c' Σ c*, and hence *c* is the solution of the (ridge regression) *(X' W X + λ Σ) c = X' W y*. If `spar` and `lambda` are missing or `NULL`, the value of `df` is used to determine the degree of smoothing. If `df` is missing as well, leave-one-out cross-validation (ordinary or ‘generalized’ as determined by `cv`) is used to determine *λ*. Note that from the above relation, `spar` is *spar = s0 + 0.0601 \* log(λ)*, which is intentionally *different* from the S-PLUS implementation of `smooth.spline` (where `spar` is proportional to *λ*). In **R**'s (*log λ*) scale, it makes more sense to vary `spar` linearly. Note however that currently the results may become very unreliable for `spar` values smaller than about -1 or -2. The same may happen for values larger than 2 or so.
Don't think of setting `spar` or the controls `low` and `high` outside such a safe range, unless you know what you are doing! Similarly, specifying `lambda` instead of `spar` is delicate, notably as the range of “safe” values for `lambda` is not scale-invariant and hence entirely data dependent. The ‘generalized’ cross-validation method GCV will work correctly when there are duplicated points in `x`. However, it is ambiguous what leave-one-out cross-validation means with duplicated points, and the internal code uses an approximation that involves leaving out groups of duplicated points. `cv = TRUE` is best avoided in that case. ### Value An object of class `"smooth.spline"` with components | | | | --- | --- | | `x` | the *distinct* `x` values in increasing order, see the ‘Details’ above. | | `y` | the fitted values corresponding to `x`. | | `w` | the weights used at the unique values of `x`. | | `yin` | the y values used at the unique `y` values. | | `tol` | the `tol` argument (whose default depends on `x`). | | `data` | only if `keep.data = TRUE`: itself a `[list](../../base/html/list)` with components `x`, `y` and `w` of the same length. These are the original *(x\_i,y\_i,w\_i), i = 1, …, n*, values where `data$x` may have repeated values and hence be longer than the above `x` component; see details. | | `lev` | (when `cv` was not `NA`) leverages, the diagonal values of the smoother matrix. | | `cv.crit` | cross-validation score, ‘generalized’ or true, depending on `cv`. The CV score is often called “PRESS” (and labeled on `[print](../../base/html/print)()`), for ‘**PRE**diction **S**um of **S**quares’. | | `pen.crit` | the penalized criterion, a non-negative number; simply the (weighted) residual sum of squares (RSS), `sum(.$w * residuals(.)^2)`. | | `crit` | the criterion value minimized in the underlying `.Fortran` routine ‘sslvrg’. When `df` has been specified, the criterion is *3 + (tr(S[lambda]) - df)^2*, where the *3 +* is there for numerical (and historical) reasons. | | `df` | equivalent degrees of freedom used. Note that (currently) this value may become quite imprecise when the true `df` is between 1 and 2. | | `spar` | the value of `spar` computed or given, unless it has been given as `c(lambda = *)`, when it is set to `NA` here. | | `ratio` | (when `spar` above is not `NA`), the value *r*, the ratio of two matrix traces. | | `lambda` | the value of *λ* corresponding to `spar`, see the details above. | | `iparms` | named integer(3) vector where `..$ipars["iter"]` gives number of spar computing iterations used. | | `auxMat` | experimental; when `keep.stuff` was true, a “flat” numeric vector containing parts of the internal computations. | | `fit` | list for use by `<predict.smooth.spline>`, with components knot: the knot sequence (including the repeated boundary knots), scaled into *[0, 1]* (via `min` and `range`). nk: number of coefficients or number of ‘proper’ knots plus 2. coef: coefficients for the spline basis used. min, range: numbers giving the corresponding quantities of `x`. | | `call` | the matched call. | `methods(class = "smooth.spline")` shows a `[hatvalues](influence.measures)()` method based on the `lev` vector above. ### Note The number of unique `x` values, *nx*, is determined by the `tol` argument, equivalently to ``` nx <- length(x) - sum(duplicated( round((x - mean(x)) / tol) )) ``` The default `all.knots = FALSE` and `nknots = .nknots.smspl` entails using only *O(nx ^ 0.2)* knots instead of *nx* for *nx > 49*.
This cuts speed and memory requirements, but not drastically anymore since **R** version 1.5.1 where it is only *O(nk) + O(n)* where *nk* is the number of knots. In this case where not all unique `x` values are used as knots, the result is not a smoothing spline in the strict sense, but very close unless a small smoothing parameter (or large `df`) is used. ### Author(s) **R** implementation by B. D. Ripley and Martin Maechler (`spar/lambda`, etc). ### Source This function is based on code in the `GAMFIT` Fortran program by T. Hastie and R. Tibshirani (originally taken from <http://lib.stat.cmu.edu/general/gamfit>) which makes use of spline code by Finbarr O'Sullivan. Its design parallels the `smooth.spline` function of Chambers & Hastie (1992). ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*, Wadsworth & Brooks/Cole. Green, P. J. and Silverman, B. W. (1994) *Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach.* Chapman and Hall. Hastie, T. J. and Tibshirani, R. J. (1990) *Generalized Additive Models.* Chapman and Hall. ### See Also `<predict.smooth.spline>` for evaluating the spline and its derivatives. ### Examples ``` require(graphics) plot(dist ~ speed, data = cars, main = "data(cars) & smoothing splines") cars.spl <- with(cars, smooth.spline(speed, dist)) cars.spl ## This example has duplicate points, so avoid cv = TRUE lines(cars.spl, col = "blue") ss10 <- smooth.spline(cars[,"speed"], cars[,"dist"], df = 10) lines(ss10, lty = 2, col = "red") legend(5,120,c(paste("default [C.V.] => df =",round(cars.spl$df,1)), "s( * , df = 10)"), col = c("blue","red"), lty = 1:2, bg = 'bisque') ## Residual (Tukey Anscombe) plot: plot(residuals(cars.spl) ~ fitted(cars.spl)) abline(h = 0, col = "gray") ## consistency check: stopifnot(all.equal(cars$dist, fitted(cars.spl) + residuals(cars.spl))) ## The chosen inner knots in original x-scale : with(cars.spl$fit, min + range * knot[-c(1:3, nk+1 +1:3)]) # == unique(cars$speed) ## Visualize the behavior of .nknots.smspl() nKnots <- Vectorize(.nknots.smspl) ; c.. <- adjustcolor("gray20",.5) curve(nKnots, 1, 250, n=250) abline(0,1, lty=2, col=c..); text(90,90,"y = x", col=c.., adj=-.25) abline(h=100,lty=2); abline(v=200, lty=2) n <- c(1:799, seq(800, 3490, by=10), seq(3500, 10000, by = 50)) plot(n, nKnots(n), type="l", main = "Vectorize(.nknots.smspl) (n)") abline(0,1, lty=2, col=c..); text(180,180,"y = x", col=c..) n0 <- c(50, 200, 800, 3200); c0 <- adjustcolor("blue3", .5) lines(n0, nKnots(n0), type="h", col=c0) axis(1, at=n0, line=-2, col.ticks=c0, col=NA, col.axis=c0) axis(4, at=.nknots.smspl(10000), line=-.5, col=c..,col.axis=c.., las=1) ##-- artificial example y18 <- c(1:3, 5, 4, 7:3, 2*(2:5), rep(10, 4)) xx <- seq(1, length(y18), length.out = 201) (s2 <- smooth.spline(y18)) # GCV (s02 <- smooth.spline(y18, spar = 0.2)) (s02. <- smooth.spline(y18, spar = 0.2, cv = NA)) plot(y18, main = deparse(s2$call), col.main = 2) lines(s2, col = "gray"); lines(predict(s2, xx), col = 2) lines(predict(s02, xx), col = 3); mtext(deparse(s02$call), col = 3) ## Specifying 'lambda' instead of usual spar : (s2. <- smooth.spline(y18, lambda = s2$lambda, tol = s2$tol)) ## The following shows the problematic behavior of 'spar' searching: (s2 <- smooth.spline(y18, control = list(trace = TRUE, tol = 1e-6, low = -1.5))) (s2m <- smooth.spline(y18, cv = TRUE, control = list(trace = TRUE, tol = 1e-6, low = -1.5))) ## both above do quite similarly (Df = 8.5 +- 0.2) ```
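The `spar`/`lambda` relation stated under ‘Details’ can be checked on a fitted object; a sketch using the `lambda` and `ratio` components documented under ‘Value’:

```
fit <- smooth.spline(cars$speed, cars$dist, spar = 0.5)
## lambda = r * 256^(3*spar - 1), with r = fit$ratio :
all.equal(fit$lambda, fit$ratio * 256^(3 * fit$spar - 1))
```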
r None `profile.nls` Method for Profiling nls Objects ----------------------------------------------- ### Description Investigates the profile log-likelihood function for a fitted model of class `"nls"`. ### Usage ``` ## S3 method for class 'nls' profile(fitted, which = 1:npar, maxpts = 100, alphamax = 0.01, delta.t = cutoff/5, ...) ``` ### Arguments | | | | --- | --- | | `fitted` | the original fitted model object. | | `which` | the original model parameters which should be profiled. This can be a numeric or character vector. By default, all non-linear parameters are profiled. | | `maxpts` | maximum number of points to be used for profiling each parameter. | | `alphamax` | highest significance level allowed for the profile t-statistics. | | `delta.t` | suggested change on the scale of the profile t-statistics. Default value chosen to allow profiling at about 10 parameter values. | | `...` | further arguments passed to or from other methods. | ### Details The profile t-statistic is defined as the square root of the change in sum-of-squares divided by the residual standard error, with an appropriate sign. ### Value A list with an element for each parameter being profiled. The elements are data-frames with two variables | | | | --- | --- | | `par.vals` | a matrix of parameter values for each fitted model. | | `tau` | the profile t-statistics. | ### Author(s) Of the original version, Douglas M. Bates and Saikat DebRoy ### References Bates, D. M. and Watts, D. G. (1988), *Nonlinear Regression Analysis and Its Applications*, Wiley (chapter 6). ### See Also `<nls>`, `<profile>`, `<plot.profile.nls>` ### Examples ``` # obtain the fitted object fm1 <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD) # get the profile for the fitted model: default level is too extreme pr1 <- profile(fm1, alphamax = 0.05) # profiled values for the two parameters ## IGNORE_RDIFF_BEGIN pr1$A pr1$lrc ## IGNORE_RDIFF_END # see also example(plot.profile.nls) ``` r None `StructTS` Fit Structural Time Series -------------------------------------- ### Description Fit a structural model for a time series by maximum likelihood. ### Usage ``` StructTS(x, type = c("level", "trend", "BSM"), init = NULL, fixed = NULL, optim.control = NULL) ``` ### Arguments | | | | --- | --- | | `x` | a univariate numeric time series. Missing values are allowed. | | `type` | the class of structural model. If omitted, a BSM is used for a time series with `frequency(x) > 1`, and a local trend model otherwise. Can be abbreviated. | | `init` | initial values of the variance parameters. | | `fixed` | optional numeric vector of the same length as the total number of parameters. If supplied, only `NA` entries in `fixed` will be varied. Probably most useful for setting variances to zero. | | `optim.control` | List of control parameters for `<optim>`. Method `"L-BFGS-B"` is used. | ### Details *Structural time series* models are (linear Gaussian) state-space models for (univariate) time series based on a decomposition of the series into a number of components. They are specified by a set of error variances, some of which may be zero. The simplest model is the *local level* model specified by `type = "level"`. This has an underlying level *m[t]* which evolves by *m[t+1] = m[t] + ξ[t], ξ[t] ~ N(0, σ^2\_ξ)* The observations are *x[t] = m[t] + ε[t], ε[t] ~ N(0, σ^2\_ε)* There are two parameters, *σ^2\_ξ* and *σ^2\_ε*. It is an ARIMA(0,1,1) model, but with restrictions on the parameter set.
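A minimal simulation sketch of this local level model (the variances 0.25 and 1 are assumed values chosen for illustration; `StructTS` re-estimates them):

```
set.seed(42)
n <- 200
m <- cumsum(rnorm(n, sd = 0.5))  # level m[t]: a random walk, sigma^2_xi = 0.25
x <- ts(m + rnorm(n, sd = 1))    # observations: sigma^2_eps = 1
StructTS(x, type = "level")      # estimates the two variances
```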
The *local linear trend model*, `type = "trend"`, has the same measurement equation, but with a time-varying slope in the dynamics for *m[t]*, given by *m[t+1] = m[t] + n[t] + ξ[t], ξ[t] ~ N(0, σ^2\_ξ)* *n[t+1] = n[t] + ζ[t], ζ[t] ~ N(0, σ^2\_ζ)* with three variance parameters. It is not uncommon to find *σ^2\_ζ = 0* (which reduces to the local level model) or *σ^2\_ξ = 0*, which ensures a smooth trend. This is a restricted ARIMA(0,2,2) model. The *basic structural model*, `type = "BSM"`, is a local trend model with an additional seasonal component. Thus the measurement equation is *x[t] = m[t] + s[t] + ε[t], ε[t] ~ N(0, σ^2\_ε)* where *s[t]* is a seasonal component with dynamics *s[t+1] = -s[t] - … - s[t - s + 2] + w[t], w[t] ~ N(0, σ^2\_w)* The boundary case *σ^2\_w = 0* corresponds to a deterministic (but arbitrary) seasonal pattern. (This is sometimes known as the ‘dummy variable’ version of the BSM.) ### Value A list of class `"StructTS"` with components: | | | | --- | --- | | `coef` | the estimated variances of the components. | | `loglik` | the maximized log-likelihood. Note that as all these models are non-stationary this includes a diffuse prior for some observations and hence is not comparable to `<arima>` nor different types of structural models. | | `loglik0` | the maximized log-likelihood with the constant used prior to **R** 3.0.0, for backwards compatibility. | | `data` | the time series `x`. | | `residuals` | the standardized residuals. | | `fitted` | a multiple time series with one component for the level, slope and seasonal components, estimated contemporaneously (that is at time *t* and not at the end of the series). | | `call` | the matched call. | | `series` | the name of the series `x`. | | `code` | the `convergence` code returned by `<optim>`. | | `model, model0` | Lists representing the Kalman Filter used in the fitting. See `[KalmanLike](kalmanlike)`. `model0` is the initial state of the filter, `model` its final state. | | `xtsp` | the `tsp` attributes of `x`. | ### Note Optimization of structural models is a lot harder than many of the references admit. For example, the `[AirPassengers](../../datasets/html/airpassengers)` data are considered in Brockwell & Davis (1996): their solution appears to be a local maximum, but nowhere near as good a fit as that produced by `StructTS`. It is quite common to find fits with one or more variances zero, and this can include *σ^2\_ε*. ### References Brockwell, P. J. & Davis, R. A. (1996). *Introduction to Time Series and Forecasting*. Springer, New York. Sections 8.2 and 8.5. Durbin, J. and Koopman, S. J. (2001) *Time Series Analysis by State Space Methods.* Oxford University Press. Harvey, A. C. (1989) *Forecasting, Structural Time Series Models and the Kalman Filter*. Cambridge University Press. Harvey, A. C. (1993) *Time Series Models*. 2nd Edition, Harvester Wheatsheaf. ### See Also `[KalmanLike](kalmanlike)`, `[tsSmooth](tssmooth)`; `<stl>` for a different kind of (seasonal) decomposition. ### Examples ``` ## see also JohnsonJohnson, Nile and AirPassengers require(graphics) trees <- window(treering, start = 0) (fit <- StructTS(trees, type = "level")) plot(trees) lines(fitted(fit), col = "green") tsdiag(fit) (fit <- StructTS(log10(UKgas), type = "BSM")) par(mfrow = c(4, 1)) # to give appropriate aspect ratio for next plot.
plot(log10(UKgas)) plot(cbind(fitted(fit), resids=resid(fit)), main = "UK gas consumption") ## keep some parameters fixed; trace optimizer: StructTS(log10(UKgas), type = "BSM", fixed = c(0.1,0.001,NA,NA), optim.control = list(trace = TRUE)) ``` r None `ts-methods` Methods for Time Series Objects --------------------------------------------- ### Description Methods for objects of class `"ts"`, typically the result of `<ts>`. ### Usage ``` ## S3 method for class 'ts' diff(x, lag = 1, differences = 1, ...) ## S3 method for class 'ts' na.omit(object, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `"ts"` containing the values to be differenced. | | `lag` | an integer indicating which lag to use. | | `differences` | an integer indicating the order of the difference. | | `object` | a univariate or multivariate time series. | | `...` | further arguments to be passed to or from methods. | ### Details The `na.omit` method omits initial and final segments with missing values in one or more of the series. ‘Internal’ missing values will lead to failure. ### Value For the `na.omit` method, a time series without missing values. The class of `object` will be preserved. ### See Also `[diff](../../base/html/diff)`; `[na.omit](na.fail)`, `<na.fail>`, `<na.contiguous>`. r None `identify.hclust` Identify Clusters in a Dendrogram ---------------------------------------------------- ### Description `identify.hclust` reads the position of the graphics pointer when the (first) mouse button is pressed. It then cuts the tree at the vertical position of the pointer and highlights the cluster containing the horizontal position of the pointer. Optionally a function is applied to the index of data points contained in the cluster. ### Usage ``` ## S3 method for class 'hclust' identify(x, FUN = NULL, N = 20, MAXCLUSTER = 20, DEV.FUN = NULL, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of the type produced by `hclust`. | | `FUN` | (optional) function to be applied to the index numbers of the data points in a cluster (see ‘Details’ below). | | `N` | the maximum number of clusters to be identified. | | `MAXCLUSTER` | the maximum number of clusters that can be produced by a cut (limits the effective vertical range of the pointer). | | `DEV.FUN` | (optional) integer scalar. If specified, the corresponding graphics device is made active before `FUN` is applied. | | `...` | further arguments to `FUN`. | ### Details By default clusters can be identified using the mouse and an `[invisible](../../base/html/invisible)` list of indices of the respective data points is returned. If `FUN` is not `NULL`, then the index vector of data points is passed to this function as first argument, see the examples below. The active graphics device for `FUN` can be specified using `DEV.FUN`. The identification process is terminated by pressing any mouse button other than the first, see also `[identify](../../graphics/html/identify)`. ### Value Either a list of data point index vectors or a list of return values of `FUN`. ### See Also `<hclust>`, `<rect.hclust>` ### Examples ``` ## Not run: require(graphics) hca <- hclust(dist(USArrests)) plot(hca) (x <- identify(hca)) ## Terminate with 2nd mouse button !! 
hci <- hclust(dist(iris[,1:4])) plot(hci) identify(hci, function(k) print(table(iris[k,5]))) # open a new device (one for dendrogram, one for bars): dev.new() # << make that narrow (& small) # and *beside* 1st one nD <- dev.cur() # to be for the barplot dev.set(dev.prev()) # old one for dendrogram plot(hci) ## select subtrees in dendrogram and "see" the species distribution: identify(hci, function(k) barplot(table(iris[k,5]), col = 2:4), DEV.FUN = nD) ## End(Not run) ``` r None `cpgram` Plot Cumulative Periodogram ------------------------------------- ### Description Plots a cumulative periodogram. ### Usage ``` cpgram(ts, taper = 0.1, main = paste("Series: ", deparse1(substitute(ts))), ci.col = "blue") ``` ### Arguments | | | | --- | --- | | `ts` | a univariate time series | | `taper` | proportion tapered in forming the periodogram | | `main` | main title | | `ci.col` | colour for confidence band. | ### Value None. ### Side Effects Plots the cumulative periodogram in a square plot. ### Note From package [MASS](https://CRAN.R-project.org/package=MASS). ### Author(s) B.D. Ripley ### Examples ``` require(graphics) par(pty = "s", mfrow = c(1,2)) cpgram(lh) lh.ar <- ar(lh, order.max = 9) cpgram(lh.ar$resid, main = "AR(3) fit to lh") cpgram(ldeaths) ``` r None `poly` Compute Orthogonal Polynomials -------------------------------------- ### Description Returns or evaluates orthogonal polynomials of degree 1 to `degree` over the specified set of points `x`: these are all orthogonal to the constant polynomial of degree 0. Alternatively, evaluate raw polynomials. ### Usage ``` poly(x, ..., degree = 1, coefs = NULL, raw = FALSE, simple = FALSE) polym (..., degree = 1, coefs = NULL, raw = FALSE) ## S3 method for class 'poly' predict(object, newdata, ...) ``` ### Arguments | | | | --- | --- | | `x, newdata` | a numeric vector at which to evaluate the polynomial. `x` can also be a matrix. Missing values are not allowed in `x`. | | `degree` | the degree of the polynomial. Must be less than the number of unique points when `raw` is false, as by default. | | `coefs` | for prediction, coefficients from a previous fit. | | `raw` | if true, use raw and not orthogonal polynomials. | | `simple` | logical indicating if a simple matrix (with no further `[attributes](../../base/html/attributes)` but `[dimnames](../../base/html/dimnames)`) should be returned. For speedup only. | | `object` | an object inheriting from class `"poly"`, normally the result of a call to `poly` with a single vector argument. | | `...` | `poly`, `polym`: further vectors. `predict.poly`: arguments to be passed to or from other methods. | ### Details Although formally `degree` should be named (as it follows `...`), an unnamed second argument of length 1 will be interpreted as the degree, such that `poly(x, 3)` can be used in formulas. The orthogonal polynomial is summarized by the coefficients, which can be used to evaluate it via the three-term recursion given in Kennedy & Gentle (1980, pp. 343–4), and used in the `predict` part of the code. `poly` using `...` is just a convenience wrapper for `polym`: `coef` is ignored. Conversely, if `polym` is called with a single argument in `...` it is a wrapper for `poly`. 
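The orthogonality properties described above can be verified directly; a small sketch:

```
P <- poly(1:10, degree = 3)
zapsmall(crossprod(P))  # ~ 3 x 3 identity: columns are orthonormal
zapsmall(colSums(P))    # ~ 0: each column is orthogonal to the constant
```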
### Value For `poly` and `polym()` (when `simple=FALSE` and `coefs=NULL` as per default): A matrix with rows corresponding to points in `x` and columns corresponding to the degree, with attributes `"degree"` specifying the degrees of the columns and (unless `raw = TRUE`) `"coefs"` which contains the centering and normalization constants used in constructing the orthogonal polynomials and class `c("poly", "matrix")`. For `poly(*, simple=TRUE)`, `polym(*, coefs=<non-NULL>)`, and `predict.poly()`: a matrix. ### Note This routine is intended for statistical purposes such as `contr.poly`: it does not attempt to orthogonalize to machine accuracy. ### Author(s) R Core Team. Keith Jewell (Campden BRI Group, UK) contributed improvements for correct prediction on subsets. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. Kennedy, W. J. Jr and Gentle, J. E. (1980) *Statistical Computing* Marcel Dekker. ### See Also `[contr.poly](contrast)`. `[cars](../../datasets/html/cars)` for an example of polynomial regression. ### Examples ``` od <- options(digits = 3) # avoid too much visual clutter (z <- poly(1:10, 3)) predict(z, seq(2, 4, 0.5)) zapsmall(poly(seq(4, 6, 0.5), 3, coefs = attr(z, "coefs"))) zm <- zapsmall(polym ( 1:4, c(1, 4:6), degree = 3)) # or just poly(): (z1 <- zapsmall(poly(cbind(1:4, c(1, 4:6)), degree = 3))) ## they are the same : stopifnot(all.equal(zm, z1, tolerance = 1e-15)) ## poly(<matrix>, df) --- used to fail till July 14 (vive la France!), 2017: m2 <- cbind(1:4, c(1, 4:6)) pm2 <- zapsmall(poly(m2, 3)) # "unnamed degree = 3" stopifnot(all.equal(pm2, zm, tolerance = 1e-15)) options(od) ``` r None `ar` Fit Autoregressive Models to Time Series ---------------------------------------------- ### Description Fit an autoregressive time series model to the data, by default selecting the complexity by AIC. ### Usage ``` ar(x, aic = TRUE, order.max = NULL, method = c("yule-walker", "burg", "ols", "mle", "yw"), na.action, series, ...) ar.burg(x, ...) ## Default S3 method: ar.burg(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, var.method = 1, ...) ## S3 method for class 'mts' ar.burg(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, var.method = 1, ...) ar.yw(x, ...) ## Default S3 method: ar.yw(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, ...) ## S3 method for class 'mts' ar.yw(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, var.method = 1, ...) ar.mle(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, series, ...) ## S3 method for class 'ar' predict(object, newdata, n.ahead = 1, se.fit = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a univariate or multivariate time series. | | `aic` | `[logical](../../base/html/logical)`. If `TRUE` then the Akaike Information Criterion is used to choose the order of the autoregressive model. If `FALSE`, the model of order `order.max` is fitted. | | `order.max` | maximum order (or order) of model to fit. Defaults to the smaller of *N-1* and *10\*log10(N)* where *N* is the number of non-missing observations except for `method = "mle"` where it is the minimum of this quantity and 12. | | `method` | character string specifying the method to fit the model. Must be one of the strings in the default argument (the first few characters are sufficient). Defaults to `"yule-walker"`. | | `na.action` | function to be called to handle missing values. 
Currently, via `na.action = na.pass`, only the Yule-Walker method can handle missing values, which must be consistent within a time point: either all variables must be missing or none. | | `demean` | should a mean be estimated during fitting? | | `series` | names for the series. Defaults to `deparse1(substitute(x))`. | | `var.method` | the method to estimate the innovations variance (see ‘Details’). | | `...` | additional arguments for specific methods. | | `object` | a fit from `ar()`. | | `newdata` | data to which to apply the prediction. | | `n.ahead` | number of steps ahead at which to predict. | | `se.fit` | logical: return estimated standard errors of the prediction error? | ### Details For definiteness, note that the AR coefficients have the sign in *x[t] - m = a[1]\*(x[t-1] - m) + … + a[p]\*(x[t-p] - m) + e[t]* `ar` is just a wrapper for the functions `ar.yw`, `ar.burg`, `<ar.ols>` and `ar.mle`. Order selection is done by AIC if `aic` is true. This is problematic, as of the methods here only `ar.mle` performs true maximum likelihood estimation. The AIC is computed as if the variance estimate were the MLE, omitting the determinant term from the likelihood. Note that this is not the same as the Gaussian likelihood evaluated at the estimated parameter values. In `ar.yw` the variance matrix of the innovations is computed from the fitted coefficients and the autocovariance of `x`. `ar.burg` allows two methods to estimate the innovations variance and hence AIC. Method 1 is to use the update given by the Levinson-Durbin recursion (Brockwell and Davis, 1991, (8.2.6) on page 242), and follows S-PLUS. Method 2 is the mean of the sum of squares of the forward and backward prediction errors (as in Brockwell and Davis, 1996, page 145). Percival and Walden (1998) discuss both. In the multivariate case the estimated coefficients will depend (slightly) on the variance estimation method. Remember that `ar` includes by default a constant in the model, by removing the overall mean of `x` before fitting the AR model, or (`ar.mle`) estimating a constant to subtract. ### Value For `ar` and its methods a list of class `"ar"` with the following elements: | | | | --- | --- | | `order` | The order of the fitted model. This is chosen by minimizing the AIC if `aic = TRUE`, otherwise it is `order.max`. | | `ar` | Estimated autoregression coefficients for the fitted model. | | `var.pred` | The prediction variance: an estimate of the portion of the variance of the time series that is not explained by the autoregressive model. | | `x.mean` | The estimated mean of the series used in fitting and for use in prediction. | | `x.intercept` | (`ar.ols` only.) The intercept in the model for `x - x.mean`. | | `aic` | The differences in AIC between each model and the best-fitting model. Note that the latter can have an AIC of `-Inf`. | | `n.used` | The number of observations in the time series, including missing. | | `n.obs` | The number of non-missing observations in the time series. | | `order.max` | The value of the `order.max` argument. | | `partialacf` | The estimate of the partial autocorrelation function up to lag `order.max`. | | `resid` | residuals from the fitted model, conditioning on the first `order` observations. The first `order` residuals are set to `NA`. If `x` is a time series, so is `resid`. | | `method` | The value of the `method` argument. | | `series` | The name(s) of the time series. | | `frequency` | The frequency of the time series. | | `call` | The matched call.
| | `asy.var.coef` | (univariate case, `order > 0`.) The asymptotic-theory variance matrix of the coefficient estimates. | For `predict.ar`, a time series of predictions, or if `se.fit = TRUE`, a list with components `pred`, the predictions, and `se`, the estimated standard errors. Both components are time series. ### Note Only the univariate case of `ar.mle` is implemented. Fitting by `method="mle"` to long series can be very slow. If `x` contains missing values, see `[NA](../../base/html/na)`, also consider using `<arima>()`, possibly with `method = "ML"`. ### Author(s) Martyn Plummer. Univariate case of `ar.yw`, `ar.mle` and C code for univariate case of `ar.burg` by B. D. Ripley. ### References Brockwell, P. J. and Davis, R. A. (1991). *Time Series and Forecasting Methods*, second edition. Springer, New York. Section 11.4. Brockwell, P. J. and Davis, R. A. (1996). *Introduction to Time Series and Forecasting*. Springer, New York. Sections 5.1 and 7.6. Percival, D. P. and Walden, A. T. (1998). *Spectral Analysis for Physical Applications*. Cambridge University Press. Whittle, P. (1963). On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral density matrix. *Biometrika*, **40**, 129–134. doi: [10.2307/2333753](https://doi.org/10.2307/2333753). ### See Also `<ar.ols>`, `<arima>` for ARMA models; `[acf2AR](acf2ar)`, for AR construction from the ACF. `<arima.sim>` for simulation of AR processes. ### Examples ``` ar(lh) ar(lh, method = "burg") ar(lh, method = "ols") ar(lh, FALSE, 4) # fit ar(4) (sunspot.ar <- ar(sunspot.year)) predict(sunspot.ar, n.ahead = 25) ## try the other methods too ar(ts.union(BJsales, BJsales.lead)) ## Burg is quite different here, as is OLS (see ar.ols) ar(ts.union(BJsales, BJsales.lead), method = "burg") ```
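As a sketch of the estimation described under ‘Details’, a known AR(1) coefficient can be recovered from data simulated with `<arima.sim>`:

```
set.seed(1)
x <- arima.sim(model = list(ar = 0.7), n = 500)
ar(x, aic = FALSE, order.max = 1)$ar  # close to 0.7
```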
r None `tsp` Tsp Attribute of Time-Series-like Objects ------------------------------------------------ ### Description `tsp` returns the `tsp` attribute (or `NULL`). It is included for compatibility with S version 2. `tsp<-` sets the `tsp` attribute. `hasTsp` ensures `x` has a `tsp` attribute, by adding one if needed. ### Usage ``` tsp(x) tsp(x) <- value hasTsp(x) ``` ### Arguments | | | | --- | --- | | `x` | a vector or matrix or univariate or multivariate time-series. | | `value` | a numeric vector of length 3 or `NULL`. | ### Details The `tsp` attribute gives the start time *in time units*, the end time and the frequency (the number of observations per unit of time, e.g. 12 for a monthly series). Assignments are checked for consistency. Assigning `NULL` removes the `tsp` attribute *and* any `"ts"` (or `"mts"`) class of `x`. ### Value An object which differs from `x` only in the `tsp` attribute (unless `NULL` is assigned). `hasTsp` adds, if needed, an attribute with a start time and frequency of 1 and end time `[NROW](../../base/html/nrow)(x)`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<ts>`, `<time>`, `<start>`. r None `vcov` Calculate Variance-Covariance Matrix for a Fitted Model Object ---------------------------------------------------------------------- ### Description Returns the variance-covariance matrix of the main parameters of a fitted model object. The “main” parameters of a model correspond to those returned by `<coef>`, and typically do not contain a nuisance scale parameter (`<sigma>`). ### Usage ``` vcov(object, ...) ## S3 method for class 'lm' vcov(object, complete = TRUE, ...) ## and also for '[summary.]glm' and 'mlm' ## S3 method for class 'aov' vcov(object, complete = FALSE, ...) .vcov.aliased(aliased, vc, complete = TRUE) ``` ### Arguments | | | | --- | --- | | `object` | a fitted model object, typically. Sometimes also a `[summary](../../base/html/summary)()` object of such a fitted model. | | `complete` | for the `aov`, `lm`, `glm`, `mlm`, and where applicable `summary.lm` etc methods: logical indicating if the full variance-covariance matrix should be returned also in case of an over-determined system where some coefficients are undefined and `<coef>(.)` contains `NA`s correspondingly. When `complete = TRUE`, `vcov()` is compatible with `coef()` also in this singular case. | | `...` | additional arguments for method functions. For the `<glm>` method this can be used to pass a `dispersion` parameter. | | | | | --- | --- | | `aliased` | a `[logical](../../base/html/logical)` vector typically identical to `is.na(coef(.))` indicating which coefficients are ‘aliased’. | | `vc` | a variance-covariance matrix, typically “incomplete”, i.e., with no rows and columns for aliased coefficients. | ### Details `vcov()` is a generic function and functions with names beginning in `vcov.` will be methods for this function. Classes with methods for this function include: `lm`, `mlm`, `glm`, `nls`, `summary.lm`, `summary.glm`, `negbin`, `polr`, `rlm` (in package [MASS](https://CRAN.R-project.org/package=MASS)), `multinom` (in package [nnet](https://CRAN.R-project.org/package=nnet)), `gls`, `lme` (in package [nlme](https://CRAN.R-project.org/package=nlme)), `coxph` and `survreg` (in package [survival](https://CRAN.R-project.org/package=survival)). (`vcov()` methods for summary objects allow more efficient and still encapsulated access when both `summary(mod)` and `vcov(mod)` are needed.)
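A small sketch of the relation between `vcov()` and the standard errors reported by `summary()` for an `lm` fit (the standard errors are the square roots of the diagonal):

```
fit <- lm(dist ~ speed, data = cars)
se <- sqrt(diag(vcov(fit)))
all.equal(se, coef(summary(fit))[, "Std. Error"])  # TRUE
```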
`.vcov.aliased()` is an auxiliary function useful for `vcov` method implementations which have to deal with singular model fits encoded via NA coefficients: It augments a vcov matrix `vc` by `[NA](../../base/html/na)` rows and columns where needed, i.e., when some entries of `aliased` are true and `vc` is of smaller dimension than `length(aliased)`. ### Value A matrix of the estimated covariances between the parameter estimates in the linear or non-linear predictor of the model. This should have row and column names corresponding to the parameter names given by the `<coef>` method. When some coefficients of the (linear) model are undetermined and hence `NA` because of linearly dependent terms (an “over-specified” model), they are called “aliased”; see `<alias>`. Since **R** version 3.5.0, `vcov()` (iff `complete = TRUE`, i.e., by default for `lm` etc., but not for `aov`) then contains corresponding rows and columns of `NA`s, wherever `<coef>()` has always contained such `NA`s. r None `Exponential` The Exponential Distribution ------------------------------------------- ### Description Density, distribution function, quantile function and random generation for the exponential distribution with rate `rate` (i.e., mean `1/rate`). ### Usage ``` dexp(x, rate = 1, log = FALSE) pexp(q, rate = 1, lower.tail = TRUE, log.p = FALSE) qexp(p, rate = 1, lower.tail = TRUE, log.p = FALSE) rexp(n, rate = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `rate` | vector of rates. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details If `rate` is not specified, it assumes the default value of `1`. The exponential distribution with rate *λ* has density *f(x) = λ e^{-λ x}* for *x ≥ 0*. ### Value `dexp` gives the density, `pexp` gives the distribution function, `qexp` gives the quantile function, and `rexp` generates random deviates. The length of the result is determined by `n` for `rexp`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note The cumulative hazard *H(t) = - log(1 - F(t))* is `-pexp(t, r, lower = FALSE, log = TRUE)`. ### Source `dexp`, `pexp` and `qexp` are all calculated from numerically stable versions of the definitions. `rexp` uses Ahrens, J. H. and Dieter, U. (1972). Computer methods for sampling from the exponential and normal distributions. *Communications of the ACM*, **15**, 873–882. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 1, chapter 19. Wiley, New York. ### See Also `[exp](../../base/html/log)` for the exponential function. [Distributions](distributions) for other standard distributions, including `[dgamma](gammadist)` for the gamma distribution and `[dweibull](weibull)` for the Weibull distribution, both of which generalize the exponential.
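The identity in the Note is easy to verify numerically; a one-line sketch with arbitrarily chosen `t` and `r` (neither value is from this page):

```
t <- 2; r <- 0.5
all.equal(-log(1 - pexp(t, r)),
          -pexp(t, r, lower.tail = FALSE, log.p = TRUE)) # TRUE: both equal r*t = 1
```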
### Examples ``` dexp(1) - exp(-1) #-> 0 ## a fast way to generate *sorted* U[0,1] random numbers: rsunif <- function(n) { n1 <- n+1 cE <- cumsum(rexp(n1)); cE[seq_len(n)]/cE[n1] } plot(rsunif(1000), ylim=0:1, pch=".") abline(0,1/(1000+1), col=adjustcolor(1, 0.5)) ``` r None `anova.mlm` Comparisons between Multivariate Linear Models ----------------------------------------------------------- ### Description Compute a (generalized) analysis of variance table for one or more multivariate linear models. ### Usage ``` ## S3 method for class 'mlm' anova(object, ..., test = c("Pillai", "Wilks", "Hotelling-Lawley", "Roy", "Spherical"), Sigma = diag(nrow = p), T = Thin.row(proj(M) - proj(X)), M = diag(nrow = p), X = ~0, idata = data.frame(index = seq_len(p)), tol = 1e-7) ``` ### Arguments | | | | --- | --- | | `object` | an object of class `"mlm"`. | | `...` | further objects of class `"mlm"`. | | `test` | choice of test statistic (see below). Can be abbreviated. | | `Sigma` | (only relevant if `test == "Spherical"`). Covariance matrix assumed proportional to `Sigma`. | | `T` | transformation matrix. By default computed from `M` and `X`. | | `M` | formula or matrix describing the outer projection (see below). | | `X` | formula or matrix describing the inner projection (see below). | | `idata` | data frame describing the intra-block design. | | `tol` | tolerance to be used in deciding if the residuals are rank-deficient: see `[qr](../../matrix/html/qr-methods)`. | ### Details The `anova.mlm` method uses either a multivariate test statistic for the summary table, or a test based on sphericity assumptions (i.e. that the covariance is proportional to a given matrix). For the multivariate test, Wilks' statistic is most popular in the literature, but the default Pillai–Bartlett statistic is recommended by Hand and Taylor (1987). See `<summary.manova>` for further details. For the `"Spherical"` test, proportionality is usually with the identity matrix but a different matrix can be specified using `Sigma`. Corrections for asphericity, known as the Greenhouse–Geisser and Huynh–Feldt epsilons, are given, and adjusted *F* tests are performed. It is common to transform the observations prior to testing. This typically involves transformation to intra-block differences, but more complicated within-block designs can be encountered, making more elaborate transformations necessary. A transformation matrix `T` can be given directly or specified as the difference between two projections onto the spaces spanned by `M` and `X`, which in turn can be given as matrices or as model formulas with respect to `idata` (the tests will be invariant to parametrization of the quotient space `M/X`). As with `anova.lm`, all test statistics use the SSD matrix from the largest model considered as the (generalized) denominator. Contrary to other `anova` methods, the intercept is not excluded from the display in the single-model case. When contrast transformations are involved, it often makes good sense to test for a zero intercept. ### Value An object of class `"anova"` inheriting from class `"data.frame"`. ### Note The Huynh–Feldt epsilon differs from that calculated by SAS (as of v. 8.2) except when the DF is equal to the number of observations minus one. This is believed to be a bug in SAS, not in **R**. ### References Hand, D. J. and Taylor, C. C. (1987) *Multivariate Analysis of Variance and Repeated Measures.* Chapman and Hall.
### See Also `<summary.manova>` ### Examples ``` require(graphics) utils::example(SSD) # Brings in the mlmfit and reacttime objects mlmfit0 <- update(mlmfit, ~0) ### Traditional tests of intrasubj. contrasts ## Using MANOVA techniques on contrasts: anova(mlmfit, mlmfit0, X = ~1) ## Assuming sphericity anova(mlmfit, mlmfit0, X = ~1, test = "Spherical") ### tests using intra-subject 3x2 design idata <- data.frame(deg = gl(3, 1, 6, labels = c(0, 4, 8)), noise = gl(2, 3, 6, labels = c("A", "P"))) anova(mlmfit, mlmfit0, X = ~ deg + noise, idata = idata, test = "Spherical") anova(mlmfit, mlmfit0, M = ~ deg + noise, X = ~ noise, idata = idata, test = "Spherical" ) anova(mlmfit, mlmfit0, M = ~ deg + noise, X = ~ deg, idata = idata, test = "Spherical" ) f <- factor(rep(1:2, 5)) # bogus, just for illustration mlmfit2 <- update(mlmfit, ~f) anova(mlmfit2, mlmfit, mlmfit0, X = ~1, test = "Spherical") anova(mlmfit2, X = ~1, test = "Spherical") # one-model form, equiv. to previous ### There seems to be a strong interaction in these data plot(colMeans(reacttime)) ``` r None `NLSstAsymptotic` Fit the Asymptotic Regression Model ------------------------------------------------------ ### Description Fits the asymptotic regression model, in the form `b0 + b1*(1-exp(-exp(lrc) * x))`, to the `xy` data. This can be used as a building block in determining starting estimates for more complicated models. ### Usage ``` NLSstAsymptotic(xy) ``` ### Arguments | | | | --- | --- | | `xy` | a `sortedXyData` object | ### Value A numeric value of length 3 with components labelled `b0`, `b1`, and `lrc`. `b0` is the estimated intercept on the `y`-axis, `b1` is the estimated difference between the asymptote and the `y`-intercept, and `lrc` is the estimated logarithm of the rate constant. ### Author(s) José Pinheiro and Douglas Bates ### See Also `[SSasymp](ssasymp)` ### Examples ``` Lob.329 <- Loblolly[ Loblolly$Seed == "329", ] print(NLSstAsymptotic(sortedXyData(expression(age), expression(height), Lob.329)), digits = 3) ``` r None `print.power.htest` Print Methods for Hypothesis Tests and Power Calculation Objects ------------------------------------------------------------------------------------- ### Description Printing objects of class `"htest"` or `"power.htest"`, respectively, by simple `[print](../../base/html/print)` methods. ### Usage ``` ## S3 method for class 'htest' print(x, digits = getOption("digits"), prefix = "\t", ...) ## S3 method for class 'power.htest' print(x, digits = getOption("digits"), ...) ``` ### Arguments | | | | --- | --- | | `x` | object of class `"htest"` or `"power.htest"`. | | `digits` | number of significant digits to be used. | | `prefix` | string, passed to `[strwrap](../../base/html/strwrap)` for displaying the `method` component of the `htest` object. | | `...` | further arguments to be passed to or from methods. | ### Details Both `[print](../../base/html/print)` methods traditionally have not obeyed the `digits` argument properly. They now do; the `htest` method mostly in expressions like `max(1, digits - 2)`. A `power.htest` object is just a named list of numbers and character strings, supplemented with `method` and `note` elements. The `method` is displayed as a title, the `note` as a footnote, and the remaining elements are given in an aligned ‘name = value’ format. ### Value the argument `x`, invisibly, as for all `[print](../../base/html/print)` methods.
### Author(s) Peter Dalgaard ### See Also `<power.t.test>`, `<power.prop.test>` ### Examples ``` (ptt <- power.t.test(n = 20, delta = 1)) print(ptt, digits = 4) # using less digits than default print(ptt, digits = 12) # using more " " " ``` r None `shapiro.test` Shapiro-Wilk Normality Test ------------------------------------------- ### Description Performs the Shapiro-Wilk test of normality. ### Usage ``` shapiro.test(x) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of data values. Missing values are allowed, but the number of non-missing values must be between 3 and 5000. | ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the Shapiro-Wilk statistic. | | `p.value` | an approximate p-value for the test. This is said in Royston (1995) to be adequate for `p.value < 0.1`. | | `method` | the character string `"Shapiro-Wilk normality test"`. | | `data.name` | a character string giving the name(s) of the data. | ### Source The algorithm used is a C translation of the Fortran code described in Royston (1995). The calculation of the p value is exact for *n = 3*, otherwise approximations are used, separately for *4 ≤ n ≤ 11* and *n ≥ 12*. ### References Patrick Royston (1982). An extension of Shapiro and Wilk's *W* test for normality to large samples. *Applied Statistics*, **31**, 115–124. doi: [10.2307/2347973](https://doi.org/10.2307/2347973). Patrick Royston (1982). Algorithm AS 181: The *W* test for Normality. *Applied Statistics*, **31**, 176–180. doi: [10.2307/2347986](https://doi.org/10.2307/2347986). Patrick Royston (1995). Remark AS R94: A remark on Algorithm AS 181: The *W* test for normality. *Applied Statistics*, **44**, 547–551. doi: [10.2307/2986146](https://doi.org/10.2307/2986146). ### See Also `<qqnorm>` for producing a normal quantile-quantile plot. ### Examples ``` shapiro.test(rnorm(100, mean = 5, sd = 3)) shapiro.test(runif(100, min = 2, max = 4)) ``` r None `stl` Seasonal Decomposition of Time Series by Loess ----------------------------------------------------- ### Description Decompose a time series into seasonal, trend and irregular components using `loess`, acronym STL. ### Usage ``` stl(x, s.window, s.degree = 0, t.window = NULL, t.degree = 1, l.window = nextodd(period), l.degree = t.degree, s.jump = ceiling(s.window/10), t.jump = ceiling(t.window/10), l.jump = ceiling(l.window/10), robust = FALSE, inner = if(robust) 1 else 2, outer = if(robust) 15 else 0, na.action = na.fail) ``` ### Arguments | | | | --- | --- | | `x` | univariate time series to be decomposed. This should be an object of class `"ts"` with a frequency greater than one. | | `s.window` | either the character string `"periodic"` or the span (in lags) of the loess window for seasonal extraction, which should be odd and at least 7, according to Cleveland et al. This has no default. | | `s.degree` | degree of locally-fitted polynomial in seasonal extraction. Should be zero or one. | | `t.window` | the span (in lags) of the loess window for trend extraction, which should be odd. If `NULL`, the default, `nextodd(ceiling((1.5*period) / (1-(1.5/s.window))))`, is taken. | | `t.degree` | degree of locally-fitted polynomial in trend extraction. Should be zero or one. | | `l.window` | the span (in lags) of the loess window of the low-pass filter used for each subseries. 
Defaults to the smallest odd integer greater than or equal to `frequency(x)`, which is recommended since it prevents competition between the trend and seasonal components. If not an odd integer, its given value is increased to the next odd one. | | `l.degree` | degree of locally-fitted polynomial for the subseries low-pass filter. Must be 0 or 1. | | `s.jump, t.jump, l.jump` | integers at least one to increase speed of the respective smoother. Linear interpolation happens between every `*.jump`th value. | | `robust` | logical indicating if robust fitting should be used in the `loess` procedure. | | `inner` | integer; the number of ‘inner’ (backfitting) iterations; usually very few (2) iterations suffice. | | `outer` | integer; the number of ‘outer’ robustness iterations. | | `na.action` | action on missing values. | ### Details The seasonal component is found by *loess* smoothing the seasonal sub-series (the series of all January values, ...); if `s.window = "periodic"` smoothing is effectively replaced by taking the mean. The seasonal values are removed, and the remainder smoothed to find the trend. The overall level is removed from the seasonal component and added to the trend component. This process is iterated a few times. The `remainder` component is the residuals from the seasonal plus trend fit. There are several methods for the resulting class `"stl"` objects; see `[plot.stl](stlmethods)`. ### Value `stl` returns an object of class `"stl"` with components | | | | --- | --- | | `time.series` | a multiple time series with columns `seasonal`, `trend` and `remainder`. | | `weights` | the final robust weights (all one if fitting is not done robustly). | | `call` | the matched call. | | `win` | integer (length 3 vector) with the spans used for the `"s"`, `"t"`, and `"l"` smoothers. | | `deg` | integer (length 3) vector with the polynomial degrees for these smoothers. | | `jump` | integer (length 3) vector with the ‘jumps’ (skips) used for these smoothers. | | `ni` | number of **i**nner iterations | | `no` | number of **o**uter robustness iterations | ### Note This is similar to but not identical to the `stl` function in S-PLUS. The `remainder` component given by S-PLUS is the sum of the `trend` and `remainder` series from this function. ### Author(s) B.D. Ripley; Fortran code by Cleveland *et al* (1990) from ‘netlib’. ### References R. B. Cleveland, W. S. Cleveland, J.E. McRae, and I. Terpenning (1990) STL: A Seasonal-Trend Decomposition Procedure Based on Loess. *Journal of Official Statistics*, **6**, 3–73. ### See Also `[plot.stl](stlmethods)` for `stl` methods; `<loess>` in package stats (which is not actually used in `stl`). `[StructTS](structts)` for a different kind of decomposition. ### Examples ``` require(graphics) plot(stl(nottem, "per")) plot(stl(nottem, s.window = 7, t.window = 50, t.jump = 1)) plot(stllc <- stl(log(co2), s.window = 21)) summary(stllc) ## linear trend, strict period. plot(stl(log(co2), s.window = "per", t.window = 1000)) ## Two STL plotted side by side : stmd <- stl(mdeaths, s.window = "per") # non-robust summary(stmR <- stl(mdeaths, s.window = "per", robust = TRUE)) op <- par(mar = c(0, 4, 0, 3), oma = c(5, 0, 4, 0), mfcol = c(4, 2)) plot(stmd, set.pars = NULL, labels = NULL, main = "stl(mdeaths, s.w = \"per\", robust = FALSE / TRUE )") plot(stmR, set.pars = NULL) # mark the 'outliers' : (iO <- which(stmR $ weights < 1e-8)) # 10 were considered outliers sts <- stmR$time.series points(time(sts)[iO], 0.8* sts[,"remainder"][iO], pch = 4, col = "red") par(op) # reset ```
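One property worth knowing when post-processing the result: the three columns of `time.series` reconstruct the input exactly, because `remainder` is defined as what is left after the seasonal-plus-trend fit. A quick sketch:

```
s <- stl(log(co2), s.window = "per")
all.equal(as.numeric(log(co2)), rowSums(s$time.series)) # TRUE: components add back up
```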
r None `tsSmooth` Use Fixed-Interval Smoothing on Time Series ------------------------------------------------------- ### Description Performs fixed-interval smoothing on a univariate time series via a state-space model. Fixed-interval smoothing gives the best estimate of the state at each time point based on the whole observed series. ### Usage ``` tsSmooth(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | a time-series fit. Currently only class `"[StructTS](structts)"` is supported | | `...` | possible arguments for future methods. | ### Value A time series, with as many dimensions as the state space and results at each time point of the original series. (For seasonal models, only the current seasonal component is returned.) ### Author(s) B. D. Ripley ### References Durbin, J. and Koopman, S. J. (2001) *Time Series Analysis by State Space Methods.* Oxford University Press. ### See Also `[KalmanSmooth](kalmanlike)`, `[StructTS](structts)`. For examples consult `[AirPassengers](../../datasets/html/airpassengers)`, `[JohnsonJohnson](../../datasets/html/johnsonjohnson)` and `[Nile](../../datasets/html/nile)`. r None `delete.response` Modify Terms Objects --------------------------------------- ### Description `delete.response` returns a `terms` object for the same model but with no response variable. `drop.terms` removes variables from the right-hand side of the model. There is also a `"[.terms"` method to perform the same function (with `keep.response = TRUE`). `reformulate` creates a formula from a character vector. If `length(termlabels) > 1`, its elements are concatenated with `+`. Non-syntactic names (e.g. containing spaces or special characters; see `[make.names](../../base/html/make.names)`) must be protected with backticks (see examples). A non-`[parse](../../base/html/parse)`able `response` still works for now, back compatibly, with a deprecation warning. ### Usage ``` delete.response(termobj) reformulate(termlabels, response = NULL, intercept = TRUE, env = parent.frame()) drop.terms(termobj, dropx = NULL, keep.response = FALSE) ``` ### Arguments | | | | --- | --- | | `termobj` | A `terms` object | | `termlabels` | character vector giving the right-hand side of a model formula. Cannot be zero-length. | | `response` | character string, symbol or call giving the left-hand side of a model formula, or `NULL`. | | `intercept` | logical: should the formula have an intercept? | | `env` | the `[environment](../../base/html/environment)` of the `<formula>` returned. | | `dropx` | vector of positions of variables to drop from the right-hand side of the model. | | `keep.response` | Keep the response in the resulting object? | ### Value `delete.response` and `drop.terms` return a `terms` object. `reformulate` returns a `formula`. ### See Also `<terms>` ### Examples ``` ff <- y ~ z + x + w tt <- terms(ff) tt delete.response(tt) drop.terms(tt, 2:3, keep.response = TRUE) tt[-1] tt[2:3] reformulate(attr(tt, "term.labels")) ## keep LHS : reformulate("x*w", ff[[2]]) fS <- surv(ft, case) ~ a + b reformulate(c("a", "b*f"), fS[[2]]) ## using non-syntactic names: reformulate(c("`P/E`", "`% Growth`"), response = as.name("+-")) x <- c("a name", "another name") try( reformulate(x) ) # -> Error ..... 
unexpected symbol ## rather backquote the strings in x : reformulate(sprintf("`%s`", x)) stopifnot(identical( ~ var, reformulate("var")), identical(~ a + b + c, reformulate(letters[1:3])), identical( y ~ a + b, reformulate(letters[1:2], "y")) ) ``` r None `model.frame` Extracting the Model Frame from a Formula or Fit --------------------------------------------------------------- ### Description `model.frame` (a generic function) and its methods return a `[data.frame](../../base/html/data.frame)` with the variables needed to use `formula` and any `...` arguments. ### Usage ``` model.frame(formula, ...) ## Default S3 method: model.frame(formula, data = NULL, subset = NULL, na.action = na.fail, drop.unused.levels = FALSE, xlev = NULL, ...) ## S3 method for class 'aovlist' model.frame(formula, data = NULL, ...) ## S3 method for class 'glm' model.frame(formula, ...) ## S3 method for class 'lm' model.frame(formula, ...) get_all_vars(formula, data, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a model `<formula>` or `<terms>` object or an **R** object. | | `data` | a data.frame, list or environment (or object coercible by `[as.data.frame](../../base/html/as.data.frame)` to a data.frame), containing the variables in `formula`. Neither a matrix nor an array will be accepted. | | `subset` | a specification of the rows to be used: defaults to all rows. This can be any valid indexing vector (see `[[.data.frame](../../base/html/extract.data.frame)`) for the rows of `data` or if that is not supplied, a data frame made up of the variables used in `formula`. | | `na.action` | how `NA`s are treated. The default is first, any `na.action` attribute of `data`, second a `na.action` setting of `[options](../../base/html/options)`, and third `<na.fail>` if that is unset. The ‘factory-fresh’ default is `[na.omit](na.fail)`. Another possible value is `NULL`. | | `drop.unused.levels` | should factors have unused levels dropped? Defaults to `FALSE`. | | `xlev` | a named list of character vectors giving the full set of levels to be assumed for each factor. | | `...` | for `model.frame` methods, a mix of further arguments such as `data`, `na.action`, `subset` to pass to the default method. Any additional arguments (such as `offset` and `weights` or other named arguments) which reach the default method are used to create further columns in the model frame, with parenthesised names such as `"(offset)"`. For `get_all_vars`, further named columns to include in the model frame. | ### Details Exactly what happens depends on the class and attributes of the object `formula`. If this is an object of fitted-model class such as `"lm"`, the method will either return the saved model frame used when fitting the model (if any, often selected by argument `model = TRUE`) or pass the call used when fitting on to the default method. The default method itself can cope with rather standard model objects such as those of class `"[lqs](../../mass/html/lqs)"` from package [MASS](https://CRAN.R-project.org/package=MASS) if no other arguments are supplied. The rest of this section applies only to the default method. If either `formula` or `data` is already a model frame (a data frame with a `"terms"` attribute) and the other is missing, the model frame is returned. Unless `formula` is a terms object, `as.formula` and then `terms` is called on it. (If you wish to use the `keep.order` argument of `terms.formula`, pass a terms object rather than a formula.) 
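A compact sketch of the default method in action (the variables are from the built-in `cars` data; the `subset` and `na.action` mechanics used here are described in the following paragraphs):

```
mf <- model.frame(dist ~ speed, data = cars, subset = speed > 20)
nrow(mf)                  # only the rows with speed > 20
class(attr(mf, "terms"))  # the frame carries a "terms" object
head(model.frame(log(dist) ~ I(speed^2), data = cars)) # computed variables are kept as such
```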
Row names for the model frame are taken from the `data` argument if present, then from the names of the response in the formula (or rownames if it is a matrix), if there is one. All the variables in `formula`, `subset` and in `...` are looked for first in `data` and then in the environment of `formula` (see the help for `<formula>()` for further details) and collected into a data frame. Then the `subset` expression is evaluated, and it is used as a row index to the data frame. Then the `na.action` function is applied to the data frame (and may well add attributes). The levels of any factors in the data frame are adjusted according to the `drop.unused.levels` and `xlev` arguments: if `xlev` specifies a factor and a character variable is found, it is converted to a factor (as from **R** 2.10.0). Unless `na.action = NULL`, time-series attributes will be removed from the variables found (since they will be wrong if `NA`s are removed). Note that *all* the variables in the formula are included in the data frame, even those preceded by `-`. Only variables whose type is raw, logical, integer, real, complex or character can be included in a model frame: this includes classed variables such as factors (whose underlying type is integer), but excludes lists. `get_all_vars` returns a `[data.frame](../../base/html/data.frame)` containing the variables used in `formula` plus those specified in `...` which are recycled to the number of data frame rows. Unlike `model.frame.default`, it returns the input variables and not those resulting from function calls in `formula`. ### Value A `[data.frame](../../base/html/data.frame)` containing the variables used in `formula` plus those specified in `...`. It will have additional attributes, including `"terms"` for an object of class `"[terms](terms.object)"` derived from `formula`, and possibly `"na.action"` giving information on the handling of `NA`s (which will not be present if no special handling was done, e.g. by `[na.pass](na.fail)`). ### References Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<model.matrix>` for the ‘design matrix’, `<formula>` for formulas and `<expand.model.frame>` for model.frame manipulation. ### Examples ``` data.class(model.frame(dist ~ speed, data = cars)) ## get_all_vars(): new var.s are recycled (iff length matches: 50 = 2*25) ncars <- get_all_vars(sqrt(dist) ~ I(speed/2), data = cars, newVar = 2:3) stopifnot(is.data.frame(ncars), identical(cars, ncars[,names(cars)]), ncol(ncars) == ncol(cars) + 1) ``` r None `density` Kernel Density Estimation ------------------------------------ ### Description The (S3) generic function `density` computes kernel density estimates. Its default method does so with the given kernel and bandwidth for univariate observations. ### Usage ``` density(x, ...) ## Default S3 method: density(x, bw = "nrd0", adjust = 1, kernel = c("gaussian", "epanechnikov", "rectangular", "triangular", "biweight", "cosine", "optcosine"), weights = NULL, window = kernel, width, give.Rkern = FALSE, n = 512, from, to, cut = 3, na.rm = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | the data from which the estimate is to be computed. For the default method a numeric vector: long vectors are not supported. | | `bw` | the smoothing bandwidth to be used. The kernels are scaled such that this is the standard deviation of the smoothing kernel. 
(Note this differs from the reference books cited below, and from S-PLUS.) `bw` can also be a character string giving a rule to choose the bandwidth. See `[bw.nrd](bandwidth)`. The default, `"nrd0"`, has remained the default for historical and compatibility reasons, rather than as a general recommendation, where, e.g., `"SJ"` would be preferable; see also Venables and Ripley (2002). The specified (or computed) value of `bw` is multiplied by `adjust`. | | `adjust` | the bandwidth used is actually `adjust*bw`. This makes it easy to specify values like ‘half the default’ bandwidth. | | `kernel, window` | a character string giving the smoothing kernel to be used. This must partially match one of `"gaussian"`, `"rectangular"`, `"triangular"`, `"epanechnikov"`, `"biweight"`, `"cosine"` or `"optcosine"`, with default `"gaussian"`, and may be abbreviated to a unique prefix (single letter). `"cosine"` is smoother than `"optcosine"`, which is the usual ‘cosine’ kernel in the literature and almost MSE-efficient. However, `"cosine"` is the version used by S. | | `weights` | numeric vector of non-negative observation weights, hence of same length as `x`. The default `NULL` is equivalent to `weights = rep(1/nx, nx)` where `nx` is the length of (the finite entries of) `x[]`. | | `width` | this exists for compatibility with S; if given, and `bw` is not, will set `bw` to `width` if this is a character string, or to a kernel-dependent multiple of `width` if this is numeric. | | `give.Rkern` | logical; if true, *no* density is estimated, and the ‘canonical bandwidth’ of the chosen `kernel` is returned instead. | | `n` | the number of equally spaced points at which the density is to be estimated. When `n > 512`, it is rounded up to a power of 2 during the calculations (as `<fft>` is used) and the final result is interpolated by `[approx](approxfun)`. So it almost always makes sense to specify `n` as a power of two. | | `from,to` | the left and right-most points of the grid at which the density is to be estimated; the defaults are `cut * bw` outside of `range(x)`. | | `cut` | by default, the values of `from` and `to` are `cut` bandwidths beyond the extremes of the data. This allows the estimated density to drop to approximately zero at the extremes. | | `na.rm` | logical; if `TRUE`, missing values are removed from `x`. If `FALSE` any missing values cause an error. | | `...` | further arguments for (non-default) methods. | ### Details The algorithm used in `density.default` disperses the mass of the empirical distribution function over a regular grid of at least 512 points, then uses the fast Fourier transform to convolve this approximation with a discretized version of the kernel, and finally uses linear approximation to evaluate the density at the specified points. The statistical properties of a kernel are determined by *σ^2(K) = ∫ t^2 K(t) dt*, which is always *= 1* for our kernels (and hence the bandwidth `bw` is the standard deviation of the kernel), and by *R(K) = ∫ K^2(t) dt*. MSE-equivalent bandwidths (for different kernels) are proportional to *σ(K) R(K)*, which is scale invariant and for our kernels equal to *R(K)*. This value is returned when `give.Rkern = TRUE`. See the examples for using exact equivalent bandwidths. Infinite values in `x` are assumed to correspond to a point mass at `+/-Inf` and the density estimate is of the sub-density on `(-Inf, +Inf)`.
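The FFT-plus-interpolation scheme just described can be sanity-checked against a direct (and much slower) evaluation of the Gaussian kernel average; the data set and bandwidth below are arbitrary choices for illustration:

```
x <- faithful$eruptions
d <- density(x, bw = 0.3)                  # FFT-based estimate on a 512-point grid
direct <- vapply(d$x, function(g) mean(dnorm(g, mean = x, sd = 0.3)),
                 numeric(1))               # naive O(n * m) kernel average
max(abs(d$y - direct))                     # small: only binning/interpolation error
```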
### Value If `give.Rkern` is true, the number *R(K)*, otherwise an object with class `"density"` whose underlying structure is a list containing the following components. | | | | --- | --- | | `x` | the `n` coordinates of the points where the density is estimated. | | `y` | the estimated density values. These will be non-negative, but can be zero. | | `bw` | the bandwidth used. | | `n` | the sample size after elimination of missing values. | | `call` | the call which produced the result. | | `data.name` | the deparsed name of the `x` argument. | | `has.na` | logical, for compatibility (always `FALSE`). | The `print` method reports `[summary](../../base/html/summary)` values on the `x` and `y` components. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole (for S version). Scott, D. W. (1992). *Multivariate Density Estimation. Theory, Practice and Visualization*. New York: Wiley. Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. *Journal of the Royal Statistical Society series B*, **53**, 683–690. doi: [10.1111/j.2517-6161.1991.tb01857.x](https://doi.org/10.1111/j.2517-6161.1991.tb01857.x). <https://www.jstor.org/stable/2345597>. Silverman, B. W. (1986). *Density Estimation*. London: Chapman and Hall. Venables, W. N. and Ripley, B. D. (2002). *Modern Applied Statistics with S*. New York: Springer. ### See Also `[bw.nrd](bandwidth)`, `<plot.density>`, `[hist](../../graphics/html/hist)`. ### Examples ``` require(graphics) plot(density(c(-20, rep(0,98), 20)), xlim = c(-4, 4)) # IQR = 0 # The Old Faithful geyser data d <- density(faithful$eruptions, bw = "sj") d plot(d) plot(d, type = "n") polygon(d, col = "wheat") ## Missing values: x <- xx <- faithful$eruptions x[i.out <- sample(length(x), 10)] <- NA doR <- density(x, bw = 0.15, na.rm = TRUE) lines(doR, col = "blue") points(xx[i.out], rep(0.01, 10)) ## Weighted observations: fe <- sort(faithful$eruptions) # has quite a few non-unique values ## use 'counts / n' as weights: dw <- density(unique(fe), weights = table(fe)/length(fe), bw = d$bw) utils::str(dw) ## smaller n: only 126, but identical estimate: stopifnot(all.equal(d[1:3], dw[1:3])) ## simulation from a density() fit: # a kernel density fit is an equally-weighted mixture. 
fit <- density(xx) N <- 1e6 x.new <- rnorm(N, sample(xx, size = N, replace = TRUE), fit$bw) plot(fit) lines(density(x.new), col = "blue") (kernels <- eval(formals(density.default)$kernel)) ## show the kernels in the R parametrization plot (density(0, bw = 1), xlab = "", main = "R's density() kernels with bw = 1") for(i in 2:length(kernels)) lines(density(0, bw = 1, kernel = kernels[i]), col = i) legend(1.5,.4, legend = kernels, col = seq(kernels), lty = 1, cex = .8, y.intersp = 1) ## show the kernels in the S parametrization plot(density(0, from = -1.2, to = 1.2, width = 2, kernel = "gaussian"), type = "l", ylim = c(0, 1), xlab = "", main = "R's density() kernels with width = 1") for(i in 2:length(kernels)) lines(density(0, width = 2, kernel = kernels[i]), col = i) legend(0.6, 1.0, legend = kernels, col = seq(kernels), lty = 1) ##-------- Semi-advanced theoretic from here on ------------- (RKs <- cbind(sapply(kernels, function(k) density(kernel = k, give.Rkern = TRUE)))) 100*round(RKs["epanechnikov",]/RKs, 4) ## Efficiencies bw <- bw.SJ(precip) ## sensible automatic choice plot(density(precip, bw = bw), main = "same sd bandwidths, 7 different kernels") for(i in 2:length(kernels)) lines(density(precip, bw = bw, kernel = kernels[i]), col = i) ## Bandwidth Adjustment for "Exactly Equivalent Kernels" h.f <- sapply(kernels, function(k)density(kernel = k, give.Rkern = TRUE)) (h.f <- (h.f["gaussian"] / h.f)^ .2) ## -> 1, 1.01, .995, 1.007,... close to 1 => adjustment barely visible.. plot(density(precip, bw = bw), main = "equivalent bandwidths, 7 different kernels") for(i in 2:length(kernels)) lines(density(precip, bw = bw, adjust = h.f[i], kernel = kernels[i]), col = i) legend(55, 0.035, legend = kernels, col = seq(kernels), lty = 1) ``` r None `numericDeriv` Evaluate Derivatives Numerically ------------------------------------------------ ### Description `numericDeriv` numerically evaluates the gradient of an expression. ### Usage ``` numericDeriv(expr, theta, rho = parent.frame(), dir = 1, eps = .Machine$double.eps ^ (1/if(central) 3 else 2), central = FALSE) ``` ### Arguments | | | | --- | --- | | `expr` | `[expression](../../base/html/expression)` or `[call](../../base/html/call)` to be differentiated. Should evaluate to a `[numeric](../../base/html/numeric)` vector. | | `theta` | `[character](../../base/html/character)` vector of names of numeric variables used in `expr`. | | `rho` | `[environment](../../base/html/environment)` containing all the variables needed to evaluate `expr`. | | `dir` | numeric vector of directions, typically with values in `-1, 1` to use for the finite differences; will be recycled to the length of `theta`. | | `eps` | a positive number, to be used as unit step size *h* for the approximate numerical derivative *(f(x+h)-f(x))/h* or the central version, see `central`. | | `central` | logical indicating if *central* divided differences should be computed, i.e., *(f(x+h) - f(x-h)) / 2h* . These are typically more accurate but need more evaluations of *f()*. | ### Details This is a front end to the C function `numeric_deriv`, which is described in *Writing R Extensions*. The numeric variables must be of type `double` and not `integer`. ### Value The value of `eval(expr, envir = rho)` plus a matrix attribute `"gradient"`. The columns of this matrix are the derivatives of the value with respect to the variables listed in `theta`. ### Author(s) Saikat DebRoy [[email protected]](mailto:[email protected]); tweaks and `eps`, `central` options by R Core Team. 
### Examples ``` myenv <- new.env() myenv$mean <- 0. myenv$sd <- 1. myenv$x <- seq(-3., 3., length.out = 31) nD <- numericDeriv(quote(pnorm(x, mean, sd)), c("mean", "sd"), myenv) str(nD) ## Visualize : require(graphics) matplot(myenv$x, cbind(c(nD), attr(nD, "gradient")), type="l") abline(h=0, lty=3) ## "gradient" is close to the true derivatives, you don't see any diff.: curve( - dnorm(x), col=2, lty=3, lwd=2, add=TRUE) curve(-x*dnorm(x), col=3, lty=3, lwd=2, add=TRUE) ## ## IGNORE_RDIFF_BEGIN # shows 1.609e-8 on most platforms all.equal(attr(nD,"gradient"), with(myenv, cbind(-dnorm(x), -x*dnorm(x)))) ## IGNORE_RDIFF_END ```
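A possible continuation of the example above (reusing `myenv`), comparing forward and central differences against the analytic gradient; central differences should typically come out (much) closer:

```
truth <- with(myenv, cbind(-dnorm(x), -x*dnorm(x)))
gF <- attr(numericDeriv(quote(pnorm(x, mean, sd)), c("mean", "sd"), myenv),
           "gradient")
gC <- attr(numericDeriv(quote(pnorm(x, mean, sd)), c("mean", "sd"), myenv,
                        central = TRUE), "gradient")
c(forward = max(abs(gF - truth)), central = max(abs(gC - truth)))
```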
r None `mad` Median Absolute Deviation -------------------------------- ### Description Compute the median absolute deviation, i.e., the (lo-/hi-) median of the absolute deviations from the median, and (by default) adjust by a factor for asymptotically normal consistency. ### Usage ``` mad(x, center = median(x), constant = 1.4826, na.rm = FALSE, low = FALSE, high = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector. | | `center` | Optionally, the centre: defaults to the median. | | `constant` | scale factor. | | `na.rm` | if `TRUE` then `NA` values are stripped from `x` before computation takes place. | | `low` | if `TRUE`, compute the ‘lo-median’, i.e., for even sample size, do not average the two middle values, but take the smaller one. | | `high` | if `TRUE`, compute the ‘hi-median’, i.e., take the larger of the two middle values for even sample size. | ### Details The actual value calculated is `constant * cMedian(abs(x - center))` with the default value of `center` being `median(x)`, and `cMedian` being the usual, the ‘low’ or ‘high’ median, see the arguments description for `low` and `high` above. The default `constant = 1.4826` (approximately *1/ Φ^(-1)(3/4)* = `1/qnorm(3/4)`) ensures consistency, i.e., *E[mad(X\_1,…,X\_n)] = σ* for *X\_i* distributed as *N(μ, σ^2)* and large *n*. If `na.rm` is `TRUE` then `NA` values are stripped from `x` before computation takes place. If this is not done then an `NA` value in `x` will cause `mad` to return `NA`. ### See Also `[IQR](iqr)` which is simpler but less robust, `<median>`, `[var](cor)`. ### Examples ``` mad(c(1:9)) print(mad(c(1:9), constant = 1)) == mad(c(1:8, 100), constant = 1) # = 2 ; TRUE x <- c(1,2,3,5,7,8) sort(abs(x - median(x))) c(mad(x, constant = 1), mad(x, constant = 1, low = TRUE), mad(x, constant = 1, high = TRUE)) ``` r None `influence.measures` Regression Deletion Diagnostics ----------------------------------------------------- ### Description This suite of functions can be used to compute some of the regression (leave-one-out deletion) diagnostics for linear and generalized linear models discussed in Belsley, Kuh and Welsch (1980), Cook and Weisberg (1982), etc. ### Usage ``` influence.measures(model, infl = influence(model)) rstandard(model, ...) ## S3 method for class 'lm' rstandard(model, infl = lm.influence(model, do.coef = FALSE), sd = sqrt(deviance(model)/df.residual(model)), type = c("sd.1", "predictive"), ...) ## S3 method for class 'glm' rstandard(model, infl = influence(model, do.coef = FALSE), type = c("deviance", "pearson"), ...) rstudent(model, ...) ## S3 method for class 'lm' rstudent(model, infl = lm.influence(model, do.coef = FALSE), res = infl$wt.res, ...) ## S3 method for class 'glm' rstudent(model, infl = influence(model, do.coef = FALSE), ...) dffits(model, infl = , res = ) dfbeta(model, ...) ## S3 method for class 'lm' dfbeta(model, infl = lm.influence(model, do.coef = TRUE), ...) dfbetas(model, ...) ## S3 method for class 'lm' dfbetas(model, infl = lm.influence(model, do.coef = TRUE), ...) covratio(model, infl = lm.influence(model, do.coef = FALSE), res = weighted.residuals(model)) cooks.distance(model, ...) ## S3 method for class 'lm' cooks.distance(model, infl = lm.influence(model, do.coef = FALSE), res = weighted.residuals(model), sd = sqrt(deviance(model)/df.residual(model)), hat = infl$hat, ...) ## S3 method for class 'glm' cooks.distance(model, infl = influence(model, do.coef = FALSE), res = infl$pear.res, dispersion = summary(model)$dispersion, hat = infl$hat, ...) 
hatvalues(model, ...) ## S3 method for class 'lm' hatvalues(model, infl = lm.influence(model, do.coef = FALSE), ...) hat(x, intercept = TRUE) ``` ### Arguments | | | | --- | --- | | `model` | an **R** object, typically returned by `<lm>` or `<glm>`. | | `infl` | influence structure as returned by `<lm.influence>` or `[influence](lm.influence)` (the latter only for the `glm` method of `rstudent` and `cooks.distance`). | | `res` | (possibly weighted) residuals, with proper default. | | `sd` | standard deviation to use, see default. | | `dispersion` | dispersion (for `<glm>` objects) to use, see default. | | `hat` | hat values *H[i,i]*, see default. | | `type` | type of residuals for `rstandard`, with different options and meanings for `lm` and `glm`. Can be abbreviated. | | `x` | the *X* or design matrix. | | `intercept` | should an intercept column be prepended to `x`? | | `...` | further arguments passed to or from other methods. | ### Details The primary high-level function is `influence.measures`, which produces a class `"infl"` object, a tabular display showing the DFBETAS for each model variable, DFFITS, covariance ratios, Cook's distances and the diagonal elements of the hat matrix. Cases which are influential with respect to any of these measures are marked with an asterisk. The functions `dfbetas`, `dffits`, `covratio` and `cooks.distance` provide direct access to the corresponding diagnostic quantities. Functions `rstandard` and `rstudent` give the standardized and Studentized residuals respectively. (These re-normalize the residuals to have unit variance, using an overall and leave-one-out measure of the error variance respectively.) Note that for *multivariate* `lm()` models (of class `"mlm"`), these functions return 3d arrays instead of matrices, or matrices instead of vectors. Values for generalized linear models are approximations, as described in Williams (1987) (except that Cook's distances are scaled as *F* rather than as chi-square values). The approximations can be poor when some cases have large influence. The optional `infl`, `res` and `sd` arguments are there to encourage the use of these direct access functions, in situations where, e.g., the underlying basic influence measures (from `<lm.influence>` or the generic `[influence](lm.influence)`) are already available. Note that cases with `weights == 0` are *dropped* from all these functions, but that if a linear model has been fitted with `na.action = na.exclude`, suitable values are filled in for the cases excluded during fitting. For linear models, `rstandard(*, type = "predictive")` provides leave-one-out cross validation residuals, and the “PRESS” statistic (**PRE**dictive **S**um of **S**quares, the same as the CV score) of model `model` is ``` PRESS <- sum(rstandard(model, type="pred")^2) ``` The function `hat()` exists mainly for S (version 2) compatibility; we recommend using `hatvalues()` instead. ### Note For `hatvalues`, `dfbeta`, and `dfbetas`, the method for linear models also works for generalized linear models. ### Author(s) Several R core team members and John Fox, originally in his ‘car’ package. ### References Belsley, D. A., Kuh, E. and Welsch, R. E. (1980). *Regression Diagnostics*. New York: Wiley. Cook, R. D. and Weisberg, S. (1982). *Residuals and Influence in Regression*. London: Chapman and Hall. Williams, D. A. (1987). Generalized linear model diagnostics using the deviance and single case deletions. *Applied Statistics*, **36**, 181–191. doi: [10.2307/2347550](https://doi.org/10.2307/2347550).
Fox, J. (1997). *Applied Regression, Linear Models, and Related Methods*. Sage. Fox, J. (2002). *An R and S-Plus Companion to Applied Regression*. Sage Publ. Fox, J. and Weisberg, S. (2011). *An R Companion to Applied Regression*, second edition. Sage Publ; <https://socialsciences.mcmaster.ca/jfox/Books/Companion/>. ### See Also `[influence](lm.influence)` (containing `<lm.influence>`). ‘[plotmath](../../grdevices/html/plotmath)’ for the use of `hat` in plot annotation. ### Examples ``` require(graphics) ## Analysis of the life-cycle savings data ## given in Belsley, Kuh and Welsch. lm.SR <- lm(sr ~ pop15 + pop75 + dpi + ddpi, data = LifeCycleSavings) inflm.SR <- influence.measures(lm.SR) which(apply(inflm.SR$is.inf, 1, any)) # which observations 'are' influential summary(inflm.SR) # only these inflm.SR # all plot(rstudent(lm.SR) ~ hatvalues(lm.SR)) # recommended by some plot(lm.SR, which = 5) # an enhanced version of that via plot(<lm>) ## The 'infl' argument is not needed, but avoids recomputation: rs <- rstandard(lm.SR) iflSR <- influence(lm.SR) all.equal(rs, rstandard(lm.SR, infl = iflSR), tolerance = 1e-10) ## to "see" the larger values: 1000 * round(dfbetas(lm.SR, infl = iflSR), 3) cat("PRESS :"); (PRESS <- sum( rstandard(lm.SR, type = "predictive")^2 )) stopifnot(all.equal(PRESS, sum( (residuals(lm.SR) / (1 - iflSR$hat))^2))) ## Show that "PRE-residuals" == L.O.O. Crossvalidation (CV) errors: X <- model.matrix(lm.SR) y <- model.response(model.frame(lm.SR)) ## Leave-one-out CV least-squares prediction errors (relatively fast) rCV <- vapply(seq_len(nrow(X)), function(i) y[i] - X[i,] %*% .lm.fit(X[-i,], y[-i])$coefficients, numeric(1)) ## are the same as the *faster* rstandard(*, "pred") : stopifnot(all.equal(rCV, unname(rstandard(lm.SR, type = "predictive")))) ## Huber's data [Atkinson 1985] xh <- c(-4:0, 10) yh <- c(2.48, .73, -.04, -1.44, -1.32, 0) lmH <- lm(yh ~ xh) summary(lmH) im <- influence.measures(lmH) im plot(xh,yh, main = "Huber's data: L.S. line and influential obs.") abline(lmH); points(xh[im$is.inf], yh[im$is.inf], pch = 20, col = 2) ## Irwin's data [Williams 1987] xi <- 1:5 yi <- c(0,2,14,19,30) # number of mice responding to dose xi mi <- rep(40, 5) # number of mice exposed glmI <- glm(cbind(yi, mi -yi) ~ xi, family = binomial) summary(glmI) signif(cooks.distance(glmI), 3) # ~= Ci in Table 3, p.184 imI <- influence.measures(glmI) imI stopifnot(all.equal(imI$infmat[,"cook.d"], cooks.distance(glmI))) ``` r None `chisq.test` Pearson's Chi-squared Test for Count Data ------------------------------------------------------- ### Description `chisq.test` performs chi-squared contingency table tests and goodness-of-fit tests. ### Usage ``` chisq.test(x, y = NULL, correct = TRUE, p = rep(1/length(x), length(x)), rescale.p = FALSE, simulate.p.value = FALSE, B = 2000) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector or matrix. `x` and `y` can also both be factors. | | `y` | a numeric vector; ignored if `x` is a matrix. If `x` is a factor, `y` should be a factor of the same length. | | `correct` | a logical indicating whether to apply continuity correction when computing the test statistic for 2 by 2 tables: one half is subtracted from all *|O - E|* differences; however, the correction will not be bigger than the differences themselves. No correction is done if `simulate.p.value = TRUE`. | | `p` | a vector of probabilities of the same length as `x`. An error is given if any entry of `p` is negative.
| | `rescale.p` | a logical scalar; if TRUE then `p` is rescaled (if necessary) to sum to 1. If `rescale.p` is FALSE, and `p` does not sum to 1, an error is given. | | `simulate.p.value` | a logical indicating whether to compute p-values by Monte Carlo simulation. | | `B` | an integer specifying the number of replicates used in the Monte Carlo test. | ### Details If `x` is a matrix with one row or column, or if `x` is a vector and `y` is not given, then a *goodness-of-fit test* is performed (`x` is treated as a one-dimensional contingency table). The entries of `x` must be non-negative integers. In this case, the hypothesis tested is whether the population probabilities equal those in `p`, or are all equal if `p` is not given. If `x` is a matrix with at least two rows and columns, it is taken as a two-dimensional contingency table: the entries of `x` must be non-negative integers. Otherwise, `x` and `y` must be vectors or factors of the same length; cases with missing values are removed, the objects are coerced to factors, and the contingency table is computed from these. Then Pearson's chi-squared test is performed of the null hypothesis that the joint distribution of the cell counts in a 2-dimensional contingency table is the product of the row and column marginals. If `simulate.p.value` is `FALSE`, the p-value is computed from the asymptotic chi-squared distribution of the test statistic; continuity correction is only used in the 2-by-2 case (if `correct` is `TRUE`, the default). Otherwise the p-value is computed for a Monte Carlo test (Hope, 1968) with `B` replicates. In the contingency table case, simulation is done by random sampling from the set of all contingency tables with given marginals, and works only if the marginals are strictly positive. Continuity correction is never used, and the statistic is quoted without it. Note that this is not the usual sampling situation assumed for the chi-squared test but rather that for Fisher's exact test. In the goodness-of-fit case, simulation is done by random sampling from the discrete distribution specified by `p`, each sample being of size `n = sum(x)`. This simulation is done in **R** and may be slow. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the chi-squared test statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic, `NA` if the p-value is computed by Monte Carlo simulation. | | `p.value` | the p-value for the test. | | `method` | a character string indicating the type of test performed, and whether Monte Carlo simulation or continuity correction was used. | | `data.name` | a character string giving the name(s) of the data. | | `observed` | the observed counts. | | `expected` | the expected counts under the null hypothesis. | | `residuals` | the Pearson residuals, `(observed - expected) / sqrt(expected)`. | | `stdres` | standardized residuals, `(observed - expected) / sqrt(V)`, where `V` is the residual cell variance (Agresti, 2007, section 2.4.5 for the case where `x` is a matrix, `n * p * (1 - p)` otherwise). | ### Source The code for Monte Carlo simulation is a C translation of the Fortran algorithm of Patefield (1981). ### References Hope, A. C. A. (1968). A simplified Monte Carlo significance test procedure. *Journal of the Royal Statistical Society Series B*, **30**, 582–598. doi: [10.1111/j.2517-6161.1968.tb00759.x](https://doi.org/10.1111/j.2517-6161.1968.tb00759.x).
<https://www.jstor.org/stable/2984263>. Patefield, W. M. (1981). Algorithm AS 159: An efficient method of generating r x c tables with given row and column totals. *Applied Statistics*, **30**, 91–97. doi: [10.2307/2346669](https://doi.org/10.2307/2346669). Agresti, A. (2007). *An Introduction to Categorical Data Analysis*, 2nd ed. New York: John Wiley & Sons. Page 38. ### See Also For goodness-of-fit testing, notably of continuous distributions, `<ks.test>`. ### Examples ``` ## From Agresti(2007) p.39 M <- as.table(rbind(c(762, 327, 468), c(484, 239, 477))) dimnames(M) <- list(gender = c("F", "M"), party = c("Democrat","Independent", "Republican")) (Xsq <- chisq.test(M)) # Prints test summary Xsq$observed # observed counts (same as M) Xsq$expected # expected counts under the null Xsq$residuals # Pearson residuals Xsq$stdres # standardized residuals ## Effect of simulating p-values x <- matrix(c(12, 5, 7, 7), ncol = 2) chisq.test(x)$p.value # 0.4233 chisq.test(x, simulate.p.value = TRUE, B = 10000)$p.value # around 0.29! ## Testing for population probabilities ## Case A. Tabulated data x <- c(A = 20, B = 15, C = 25) chisq.test(x) chisq.test(as.table(x)) # the same x <- c(89,37,30,28,2) p <- c(40,20,20,15,5) try( chisq.test(x, p = p) # gives an error ) chisq.test(x, p = p, rescale.p = TRUE) # works p <- c(0.40,0.20,0.20,0.19,0.01) # Expected count in category 5 # is 1.86 < 5 ==> chi square approx. chisq.test(x, p = p) # maybe doubtful, but is ok! chisq.test(x, p = p, simulate.p.value = TRUE) ## Case B. Raw data x <- trunc(5 * runif(100)) chisq.test(table(x)) # NOT 'chisq.test(x)'! ``` r None `listof` A Class for Lists of (Parts of) Model Fits ---------------------------------------------------- ### Description Class `"listof"` is used by `<aov>` and the `"lm"` method of `<alias>` for lists of model fits or parts thereof. It is simply a list with an assigned class to control the way methods, especially printing, act on it. It has a `<coef>` method in this package (which returns an object of this class), and `[` and `print` methods in package base. r None `window` Time (Series) Windows ------------------------------- ### Description `window` is a generic function which extracts the subset of the object `x` observed between the times `start` and `end`. If a frequency is specified, the series is then re-sampled at the new frequency. ### Usage ``` window(x, ...) ## S3 method for class 'ts' window(x, ...) ## Default S3 method: window(x, start = NULL, end = NULL, frequency = NULL, deltat = NULL, extend = FALSE, ts.eps = getOption("ts.eps"), ...) window(x, ...) <- value ## S3 replacement method for class 'ts' window(x, start, end, frequency, deltat, ...) <- value ``` ### Arguments | | | | --- | --- | | `x` | a time-series (or other object if not replacing values). | | `start` | the start time of the period of interest. | | `end` | the end time of the period of interest. | | `frequency, deltat` | the new frequency can be specified by either (or both if they are consistent). | | `extend` | logical. If true, the `start` and `end` values are allowed to extend the series. If false, attempts to extend the series give a warning and are ignored. | | `ts.eps` | time series comparison tolerance. Frequencies are considered equal if their absolute difference is less than `ts.eps`. | | `...` | further arguments passed to or from other methods. | | `value` | replacement values. | ### Details The start and end times can be specified as for `<ts>`. 
If there is no observation at the new `start` or `end`, the immediately following (`start`) or preceding (`end`) observation time is used. The replacement function has a method for `ts` objects, and is allowed to extend the series (with a warning). There is no default method. ### Value The value depends on the method. `window.default` will return a vector or matrix with an appropriate `<tsp>` attribute. `window.ts` differs from `window.default` only in ensuring the result is a `ts` object. If `extend = TRUE` the series will be padded with `NA`s if needed. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<time>`, `<ts>`. ### Examples ``` window(presidents, 1960, c(1969,4)) # values in the 1960's window(presidents, deltat = 1) # All Qtr1s window(presidents, start = c(1945,3), deltat = 1) # All Qtr3s window(presidents, 1944, c(1979,2), extend = TRUE) pres <- window(presidents, 1945, c(1949,4)) # values in the 1940's window(pres, 1945.25, 1945.50) <- c(60, 70) window(pres, 1944, 1944.75) <- 0 # will generate a warning window(pres, c(1945,4), c(1949,4), frequency = 1) <- 85:89 pres ``` r None `termplot` Plot Regression Terms --------------------------------- ### Description Plots regression terms against their predictors, optionally with standard errors and partial residuals added. ### Usage ``` termplot(model, data = NULL, envir = environment(formula(model)), partial.resid = FALSE, rug = FALSE, terms = NULL, se = FALSE, xlabs = NULL, ylabs = NULL, main = NULL, col.term = 2, lwd.term = 1.5, col.se = "orange", lty.se = 2, lwd.se = 1, col.res = "gray", cex = 1, pch = par("pch"), col.smth = "darkred", lty.smth = 2, span.smth = 2/3, ask = dev.interactive() && nb.fig < n.tms, use.factor.levels = TRUE, smooth = NULL, ylim = "common", plot = TRUE, transform.x = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `model` | fitted model object | | `data` | data frame in which variables in `model` can be found | | `envir` | environment in which variables in `model` can be found | | `partial.resid` | logical; should partial residuals be plotted? | | `rug` | add [rug](../../graphics/html/rug)plots (jittered 1-d histograms) to the axes? | | `terms` | which terms to plot (default `NULL` means all terms); a vector passed to `<predict>(.., type = "terms", terms = *)`. | | `se` | plot pointwise standard errors? | | `xlabs` | vector of labels for the x axes | | `ylabs` | vector of labels for the y axes | | `main` | logical, or vector of main titles; if `TRUE`, the model's call is taken as main title, `NULL` or `FALSE` mean no titles. | | `col.term, lwd.term` | color and line width for the ‘term curve’, see `[lines](../../graphics/html/lines)`. | | `col.se, lty.se, lwd.se` | color, line type and line width for the ‘twice-standard-error curve’ when `se = TRUE`. | | `col.res, cex, pch` | color, plotting character expansion and type for partial residuals, when `partial.resid = TRUE`, see `[points](../../graphics/html/points)`. | | `ask` | logical; if `TRUE`, the user is *ask*ed before each plot, see `[par](../../graphics/html/par)(ask=.)`. | | `use.factor.levels` | Should x-axis ticks use factor levels or numbers for factor terms? 
| | `smooth` | `NULL` or a function with the same arguments as `[panel.smooth](../../graphics/html/panel.smooth)` to draw a smooth through the partial residuals for non-factor terms | | `lty.smth, col.smth, span.smth` | Passed to `smooth` | | `ylim` | an optional range for the y axis, or `"common"` when a range sufficient for all the plots will be computed, or `"free"` when limits are computed for each plot. | | `plot` | if set to `FALSE`, plots are not produced: instead a list is returned containing the data that would have been plotted. | | `transform.x` | logical vector; if an element (recycled as necessary) is `TRUE`, partial residuals for the corresponding term are plotted against transformed values. The model response is then a straight line, allowing a ready comparison against the data or against the curve obtained from `smooth-panel.smooth`. | | `...` | other graphical parameters. | ### Details The `model` object must have a `predict` method that accepts `type = "terms"`, e.g., `<glm>` in the stats package, `[coxph](../../survival/html/coxph)` and `[survreg](../../survival/html/survreg)` in the [survival](https://CRAN.R-project.org/package=survival) package. For the `partial.resid = TRUE` option `model` must have a `<residuals>` method that accepts `type = "partial"`, which `<lm>` and `<glm>` do. The `data` argument should rarely be needed, but in some cases `termplot` may be unable to reconstruct the original data frame. Using `na.action=na.exclude` makes these problems less likely. Nothing sensible happens for interaction terms, and they may cause errors. The `plot = FALSE` option is useful when some special action is needed, e.g. to overlay the results of two different models or to plot confidence bands. ### Value For `plot = FALSE`, a list with one element for each plot which would have been produced. Each element of the list is a data frame with variables `x`, `y`, and optionally the pointwise standard errors `se`. For continuous predictors `x` will contain the ordered unique values and for a factor it will be a factor containing one instance of each level. The list has attribute `"constant"` copied from the predicted terms object. Otherwise, the number of terms, invisibly. ### See Also For (generalized) linear models, `<plot.lm>` and `<predict.glm>`. ### Examples ``` require(graphics) had.splines <- "package:splines" %in% search() if(!had.splines) rs <- require(splines) x <- 1:100 z <- factor(rep(LETTERS[1:4], 25)) y <- rnorm(100, sin(x/10)+as.numeric(z)) model <- glm(y ~ ns(x, 6) + z) par(mfrow = c(2,2)) ## 2 x 2 plots for same model : termplot(model, main = paste("termplot( ", deparse(model$call)," ...)")) termplot(model, rug = TRUE) termplot(model, partial.resid = TRUE, se = TRUE, main = TRUE) termplot(model, partial.resid = TRUE, smooth = panel.smooth, span.smth = 1/4) if(!had.splines && rs) detach("package:splines") ## requires recommended package MASS hills.lm <- lm(log(time) ~ log(climb)+log(dist), data = MASS::hills) termplot(hills.lm, partial.resid = TRUE, smooth = panel.smooth, terms = "log(dist)", main = "Original") termplot(hills.lm, transform.x = TRUE, partial.resid = TRUE, smooth = panel.smooth, terms = "log(dist)", main = "Transformed") ```
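The `plot = FALSE` option can be used to recapture the plotted data and redraw a term by hand, e.g., with custom confidence bands as mentioned in ‘Details’. A minimal sketch, not part of the original examples (the model and the 2-standard-error band width are illustrative only):

```
## sketch: extract termplot data, then redraw one term with +/- 2 s.e. bands
fit <- lm(dist ~ poly(speed, 2), data = cars)
pt <- termplot(fit, se = TRUE, plot = FALSE)[[1]] # data frame with x, y, se
plot(pt$x, pt$y, type = "l", xlab = "speed",
     ylab = "Partial for poly(speed, 2)")
lines(pt$x, pt$y + 2 * pt$se, lty = 2)
lines(pt$x, pt$y - 2 * pt$se, lty = 2)
```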
r None `sigma` Extract Residual Standard Deviation 'Sigma' ---------------------------------------------------- ### Description Extract the estimated standard deviation of the errors, the “residual standard deviation” (misnamed also “residual standard error”, e.g., in `<summary.lm>()`'s output), from a fitted model. Many classical statistical models have a *scale parameter*, typically the standard deviation of a zero-mean normal (or Gaussian) random variable which is denoted as *σ*. `sigma(.)` extracts the *estimated* parameter from a fitted model, i.e., *σ^*. ### Usage ``` sigma(object, ...) ## Default S3 method: sigma(object, use.fallback = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `object` | an **R** object, typically resulting from a model fitting function such as `<lm>`. | | `use.fallback` | logical, passed to `<nobs>`. | | `...` | potentially further arguments passed to and from methods. Passed to `<deviance>(*, ...)` for the default method. | ### Details The stats package provides the S3 generic and a default method. The latter is correct typically for (asymptotically / approximately) generalized gaussian (“least squares”) problems, since it is defined as ``` sigma.default <- function (object, use.fallback = TRUE, ...) sqrt( deviance(object, ...) / (NN - PP) ) ``` where `NN <- <nobs>(object, use.fallback = use.fallback)` and `PP <- sum(!is.na(<coef>(object)))` – where in older **R** versions this was `length(coef(object))` which is too large in case of undetermined coefficients, e.g., for rank deficient model fits. ### Value typically a number, the estimated standard deviation of the errors (“residual standard deviation”) for Gaussian models, and—less interpretably—the square root of the residual deviance per degree of freedom in more general models. In some generalized linear modelling (`<glm>`) contexts, *sigma^2* (`sigma(.)^2`) is called “dispersion (parameter)”. Consequently, for well-fitting binomial or Poisson GLMs, `sigma` is around 1. Very strictly speaking, *σ^* (“*σ* hat”) is actually *√(hat(σ^2))*. For multivariate linear models (class `"mlm"`), a *vector* of sigmas is returned, each corresponding to one column of *Y*. ### Note The misnomer “Residual standard **error**” has been part of too many **R** (and S) outputs to be easily changed there. ### See Also `<deviance>`, `<nobs>`, `<vcov>`. ### Examples ``` ## -- lm() ------------------------------ lm1 <- lm(Fertility ~ . , data = swiss) sigma(lm1) # ~= 7.165 = "Residual standard error" printed from summary(lm1) stopifnot(all.equal(sigma(lm1), summary(lm1)$sigma, tolerance=1e-15)) ## -- nls() ----------------------------- DNase1 <- subset(DNase, Run == 1) fm.DN1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1) sigma(fm.DN1) # ~= 0.01919 as from summary(..) 
stopifnot(all.equal(sigma(fm.DN1), summary(fm.DN1)$sigma, tolerance=1e-15)) ## -- glm() ----------------------------- ## -- a) Binomial -- Example from MASS ldose <- rep(0:5, 2) numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16) sex <- factor(rep(c("M", "F"), c(6, 6))) SF <- cbind(numdead, numalive = 20-numdead) sigma(budworm.lg <- glm(SF ~ sex*ldose, family = binomial)) ## -- b) Poisson -- from ?glm : ## Dobson (1990) Page 93: Randomized Controlled Trial : counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3,1,9) treatment <- gl(3,3) sigma(glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())) ## (currently) *differs* from summary(glm.D93)$dispersion # == 1 ## and the *Quasi*poisson's dispersion sigma(glm.qD93 <- update(glm.D93, family = quasipoisson())) sigma (glm.qD93)^2 # 1.282285 is close, but not the same summary(glm.qD93)$dispersion # == 1.2933 ## -- Multivariate lm() "mlm" ----------- utils::example("SSD", echo=FALSE) sigma(mlmfit) # is the same as {but more efficient than} sqrt(diag(estVar(mlmfit))) ``` r None `update` Update and Re-fit a Model Call ---------------------------------------- ### Description `update` will update and (by default) re-fit a model. It does this by extracting the call stored in the object, updating the call and (by default) evaluating that call. Sometimes it is useful to call `update` with only one argument, for example if the data frame has been corrected. “Extracting the call” in `update()` and similar functions uses `getCall()` which itself is a (S3) generic function with a default method that simply gets `x$call`. Because of this, `update()` will often work (via its default method) on new model classes, either automatically, or by providing a simple `getCall()` method for that class. ### Usage ``` update(object, ...) ## Default S3 method: update(object, formula., ..., evaluate = TRUE) getCall(x, ...) ``` ### Arguments | | | | --- | --- | | `object, x` | An existing fit from a model function such as `lm`, `glm` and many others. | | `formula.` | Changes to the formula – see `update.formula` for details. | | `...` | Additional arguments to the call, or arguments with changed values. Use `name = NULL` to remove the argument `name`. | | `evaluate` | If true evaluate the new call else return the call. | ### Value If `evaluate = TRUE` the fitted object, otherwise the updated call. ### References Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<update.formula>` ### Examples ``` oldcon <- options(contrasts = c("contr.treatment", "contr.poly")) ## Annette Dobson (1990) "An Introduction to Generalized Linear Models". ## Page 9: Plant Weight Data. ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69) group <- gl(2, 10, 20, labels = c("Ctl", "Trt")) weight <- c(ctl, trt) lm.D9 <- lm(weight ~ group) lm.D9 summary(lm.D90 <- update(lm.D9, . ~ . - 1)) options(contrasts = c("contr.helmert", "contr.poly")) update(lm.D9) getCall(lm.D90) # "through the origin" options(oldcon) ``` r None `Cauchy` The Cauchy Distribution --------------------------------- ### Description Density, distribution function, quantile function and random generation for the Cauchy distribution with location parameter `location` and scale parameter `scale`. 
### Usage ``` dcauchy(x, location = 0, scale = 1, log = FALSE) pcauchy(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) qcauchy(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) rcauchy(n, location = 0, scale = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `location, scale` | location and scale parameters. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details If `location` or `scale` are not specified, they assume the default values of `0` and `1` respectively. The Cauchy distribution with location *l* and scale *s* has density *f(x) = 1 / (π s (1 + ((x-l)/s)^2))* for all *x*. ### Value `dcauchy`, `pcauchy`, and `qcauchy` are respectively the density, distribution function and quantile function of the Cauchy distribution. `rcauchy` generates random deviates from the Cauchy. The length of the result is determined by `n` for `rcauchy`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Source `dcauchy`, `pcauchy` and `qcauchy` are all calculated from numerically stable versions of the definitions. `rcauchy` uses inversion. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 1, chapter 16. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions, including `[dt](tdist)` for the t distribution which generalizes `dcauchy(*, l = 0, s = 1)`. ### Examples ``` dcauchy(-1:4) ``` r None `predict.loess` Predict Loess Curve or Surface ----------------------------------------------- ### Description Predictions from a `loess` fit, optionally with standard errors. ### Usage ``` ## S3 method for class 'loess' predict(object, newdata = NULL, se = FALSE, na.action = na.pass, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object fitted by `loess`. | | `newdata` | an optional data frame in which to look for variables with which to predict, or a matrix or vector containing exactly the variables needed for prediction. If missing, the original data points are used. | | `se` | should standard errors be computed? | | `na.action` | function determining what should be done with missing values in data frame `newdata`. The default is to predict `NA`. | | `...` | arguments passed to or from other methods. | ### Details The standard errors calculation `se = TRUE` is slower than prediction, notably as it needs a relatively large workspace (memory), namely matrices of dimension *N \* Nf* where *f =* `span`, i.e., `se = TRUE` is *O(N^2)* and hence stops when the sample size *N* is larger than about 40'600 (for default `span = 0.75`). When the fit was made using `surface = "interpolate"` (the default), `predict.loess` will not extrapolate – so points outside an axis-aligned hypercube enclosing the original data will have missing (`NA`) predictions and standard errors. ### Value If `se = FALSE`, a vector giving the prediction for each row of `newdata` (or the original data). 
If `se = TRUE`, a list containing components | | | | --- | --- | | `fit` | the predicted values. | | `se` | an estimated standard error for each predicted value. | | `residual.scale` | the estimated scale of the residuals used in computing the standard errors. | | `df` | an estimate of the effective degrees of freedom used in estimating the residual scale, intended for use with t-based confidence intervals. | If `newdata` was the result of a call to `[expand.grid](../../base/html/expand.grid)`, the predictions (and s.e.'s if requested) will be an array of the appropriate dimensions. Predictions from infinite inputs will be `NA` since `loess` does not support extrapolation. ### Note Variables are first looked for in `newdata` and then searched for in the usual way (which will include the environment of the formula used in the fit). A warning will be given if the variables found are not of the same length as those in `newdata` if it was supplied. ### Author(s) B. D. Ripley, based on the `cloess` package of Cleveland, Grosse and Shyu. ### See Also `<loess>` ### Examples ``` cars.lo <- loess(dist ~ speed, cars) predict(cars.lo, data.frame(speed = seq(5, 30, 1)), se = TRUE) # to get extrapolation cars.lo2 <- loess(dist ~ speed, cars, control = loess.control(surface = "direct")) predict(cars.lo2, data.frame(speed = seq(5, 30, 1)), se = TRUE) ``` r None `NLSstClosestX` Inverse Interpolation -------------------------------------- ### Description Use inverse linear interpolation to approximate the `x` value at which the function represented by `xy` is equal to `yval`. ### Usage ``` NLSstClosestX(xy, yval) ``` ### Arguments | | | | --- | --- | | `xy` | a `sortedXyData` object | | `yval` | a numeric value on the `y` scale | ### Value A single numeric value on the `x` scale. ### Author(s) José Pinheiro and Douglas Bates ### See Also `[sortedXyData](sortedxydata)`, `[NLSstLfAsymptote](nlsstlfasymptote)`, `[NLSstRtAsymptote](nlsstrtasymptote)`, `[selfStart](selfstart)` ### Examples ``` DNase.2 <- DNase[ DNase$Run == "2", ] DN.srt <- sortedXyData(expression(log(conc)), expression(density), DNase.2) NLSstClosestX(DN.srt, 1.0) ``` r None `ls.diag` Compute Diagnostics for lsfit Regression Results ----------------------------------------------------------- ### Description Computes basic statistics, including standard errors, t- and p-values for the regression coefficients. ### Usage ``` ls.diag(ls.out) ``` ### Arguments | | | | --- | --- | | `ls.out` | Typically the result of `<lsfit>()` | ### Value A `list` with the following numeric components. | | | | --- | --- | | `std.dev` | The standard deviation of the errors, an estimate of *σ*. | | `hat` | diagonal entries *h\_{ii}* of the hat matrix *H* | | `std.res` | standardized residuals | | `stud.res` | studentized residuals | | `cooks` | Cook's distances | | `dfits` | DFITS statistics | | `correlation` | correlation matrix | | `std.err` | standard errors of the regression coefficients | | `cov.scaled` | Scaled covariance matrix of the coefficients | | `cov.unscaled` | Unscaled covariance matrix of the coefficients | ### References Belsley, D. A., Kuh, E. and Welsch, R. E. (1980) *Regression Diagnostics.* New York: Wiley. ### See Also `[hat](influence.measures)` for the hat matrix diagonals, `<ls.print>`, `<lm.influence>`, `<summary.lm>`, `<anova>`. ### Examples ``` ##-- Using the same data as the lm(.) 
example: lsD9 <- lsfit(x = as.numeric(gl(2, 10, 20)), y = weight) dlsD9 <- ls.diag(lsD9) utils::str(dlsD9, give.attr = FALSE) abs(1 - sum(dlsD9$hat) / 2) < 10*.Machine$double.eps # sum(h.ii) = p plot(dlsD9$hat, dlsD9$stud.res, xlim = c(0, 0.11)) abline(h = 0, lty = 2, col = "lightgray") ``` r None `model.tables` Compute Tables of Results from an Aov Model Fit --------------------------------------------------------------- ### Description Computes summary tables for model fits, especially complex `aov` fits. ### Usage ``` model.tables(x, ...) ## S3 method for class 'aov' model.tables(x, type = "effects", se = FALSE, cterms, ...) ## S3 method for class 'aovlist' model.tables(x, type = "effects", se = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a model object, usually produced by `aov` | | `type` | type of table: currently only `"effects"` and `"means"` are implemented. Can be abbreviated. | | `se` | should standard errors be computed? | | `cterms` | A character vector giving the names of the terms for which tables should be computed. The default is all tables. | | `...` | further arguments passed to or from other methods. | ### Details For `type = "effects"`, tables of the coefficients for each term are given, optionally with standard errors. For `type = "means"`, tables of the mean response for each combination of levels of the factors in a term are given. The `"aov"` method cannot be applied to components of an `"aovlist"` fit. ### Value An object of class `"tables.aov"`, a list which may contain components | | | | --- | --- | | `tables` | A list of tables for each requested term. | | `n` | The replication information for each term. | | `se` | Standard error information. | ### Warning The implementation is incomplete, and only the simpler cases have been tested thoroughly. Weighted `aov` fits are not supported. ### See Also `<aov>`, `<proj>`, `<replications>`, `[TukeyHSD](tukeyhsd)`, `<se.contrast>` ### Examples ``` options(contrasts = c("contr.helmert", "contr.treatment")) npk.aov <- aov(yield ~ block + N*P*K, npk) model.tables(npk.aov, "means", se = TRUE) ## as a test, not particularly sensible statistically npk.aovE <- aov(yield ~ N*P*K + Error(block), npk) model.tables(npk.aovE, se = TRUE) model.tables(npk.aovE, "means") ``` r None `expand.model.frame` Add new variables to a model frame -------------------------------------------------------- ### Description Evaluates new variables as if they had been part of the formula of the specified model. This ensures that the same `na.action` and `subset` arguments are applied and allows, for example, `x` to be recovered for a model using `sin(x)` as a predictor. ### Usage ``` expand.model.frame(model, extras, envir = environment(formula(model)), na.expand = FALSE) ``` ### Arguments | | | | --- | --- | | `model` | a fitted model | | `extras` | one-sided formula or vector of character strings describing new variables to be added | | `envir` | an environment to evaluate things in | | `na.expand` | logical; see below | ### Details If `na.expand = FALSE` then `NA` values in the extra variables will be passed to the `na.action` function used in `model`. This may result in a shorter data frame (with `[na.omit](na.fail)`) or an error (with `<na.fail>`). If `na.expand = TRUE` the returned data frame will have precisely the same rows as `model.frame(model)`, but the columns corresponding to the extra variables may contain `NA`. ### Value A data frame. 
### See Also `<model.frame>`, `<predict>` ### Examples ``` model <- lm(log(Volume) ~ log(Girth) + log(Height), data = trees) expand.model.frame(model, ~ Girth) # prints data.frame like dd <- data.frame(x = 1:5, y = rnorm(5), z = c(1,2,NA,4,5)) model <- glm(y ~ x, data = dd, subset = 1:4, na.action = na.omit) expand.model.frame(model, "z", na.expand = FALSE) # = default expand.model.frame(model, "z", na.expand = TRUE) ``` r None `stats-deprecated` Deprecated Functions in Package stats --------------------------------------------------------- ### Description These functions are provided for compatibility with older versions of **R** only, and may be defunct as soon as the next release. ### Details There are currently no deprecated functions in this package. ### See Also `[Deprecated](../../base/html/deprecated)` r None `SSfol` Self-Starting Nls First-order Compartment Model -------------------------------------------------------- ### Description This `selfStart` model evaluates the first-order compartment function and its gradient. It has an `initial` attribute that creates initial estimates of the parameters `lKe`, `lKa`, and `lCl`. ### Usage ``` SSfol(Dose, input, lKe, lKa, lCl) ``` ### Arguments | | | | --- | --- | | `Dose` | a numeric value representing the initial dose. | | `input` | a numeric vector at which to evaluate the model. | | `lKe` | a numeric parameter representing the natural logarithm of the elimination rate constant. | | `lKa` | a numeric parameter representing the natural logarithm of the absorption rate constant. | | `lCl` | a numeric parameter representing the natural logarithm of the clearance. | ### Value a numeric vector of the same length as `input`, which is the value of the expression ``` Dose * exp(lKe+lKa-lCl) * (exp(-exp(lKe)*input) - exp(-exp(lKa)*input)) / (exp(lKa) - exp(lKe)) ``` If all of the arguments `lKe`, `lKa`, and `lCl` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` Theoph.1 <- Theoph[ Theoph$Subject == 1, ] with(Theoph.1, SSfol(Dose, Time, -2.5, 0.5, -3)) # response only with(Theoph.1, local({ lKe <- -2.5; lKa <- 0.5; lCl <- -3 SSfol(Dose, Time, lKe, lKa, lCl) # response _and_ gradient })) getInitial(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), data = Theoph.1) ## Initial values are in fact the converged values fm1 <- nls(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), data = Theoph.1) summary(fm1) ``` r None `manova` Multivariate Analysis of Variance ------------------------------------------- ### Description A class for the multivariate analysis of variance. ### Usage ``` manova(...) ``` ### Arguments | | | | --- | --- | | `...` | Arguments to be passed to `<aov>`. | ### Details Class `"manova"` differs from class `"aov"` in selecting a different `summary` method. Function `manova` calls `<aov>` and then adds class `"manova"` to the result object for each stratum. ### Value See `<aov>` and the comments in ‘Details’ here. ### References Krzanowski, W. J. (1988) *Principles of Multivariate Analysis. A User's Perspective.* Oxford. Hand, D. J. and Taylor, C. C. (1987) *Multivariate Analysis of Variance and Repeated Measures.* Chapman and Hall. ### See Also `<aov>`, `<summary.manova>`, the latter containing more examples. ### Examples ``` ## Set orthogonal contrasts. 
op <- options(contrasts = c("contr.helmert", "contr.poly")) ## Fake a 2nd response variable npk2 <- within(npk, foo <- rnorm(24)) ( npk2.aov <- manova(cbind(yield, foo) ~ block + N*P*K, npk2) ) summary(npk2.aov) ( npk2.aovE <- manova(cbind(yield, foo) ~ N*P*K + Error(block), npk2) ) summary(npk2.aovE) ```
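The same multivariate fit can also be summarized with a different test statistic, or response by response. A short sketch reusing `npk2.aov` from above (the `test` names are those accepted by `<summary.manova>`):

```
## sketch: other summaries of the multivariate fit npk2.aov
summary(npk2.aov, test = "Wilks") # Wilks' lambda instead of the default Pillai
summary.aov(npk2.aov) # one univariate ANOVA table per response column
```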
r None `asOneSidedFormula` Convert to One-Sided Formula ------------------------------------------------- ### Description Names, expressions, numeric values, and character strings are converted to one-sided formulae. If `object` is a formula, it must be one-sided, in which case it is returned unaltered. ### Usage ``` asOneSidedFormula(object) ``` ### Arguments | | | | --- | --- | | `object` | a one-sided formula, an expression, a numeric value, or a character string. | ### Value a one-sided formula representing `object` ### Author(s) José Pinheiro and Douglas Bates ### See Also `<formula>` ### Examples ``` asOneSidedFormula("age") asOneSidedFormula(~ age) ``` r None `cor.test` Test for Association/Correlation Between Paired Samples ------------------------------------------------------------------- ### Description Test for association between paired samples, using one of Pearson's product moment correlation coefficient, Kendall's *tau* or Spearman's *rho*. ### Usage ``` cor.test(x, ...) ## Default S3 method: cor.test(x, y, alternative = c("two.sided", "less", "greater"), method = c("pearson", "kendall", "spearman"), exact = NULL, conf.level = 0.95, continuity = FALSE, ...) ## S3 method for class 'formula' cor.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x, y` | numeric vectors of data values. `x` and `y` must have the same length. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"`, `"greater"` or `"less"`. You can specify just the initial letter. `"greater"` corresponds to positive association, `"less"` to negative association. | | `method` | a character string indicating which correlation coefficient is to be used for the test. One of `"pearson"`, `"kendall"`, or `"spearman"`, can be abbreviated. | | `exact` | a logical indicating whether an exact p-value should be computed. Used for Kendall's *tau* and Spearman's *rho*. See ‘Details’ for the meaning of `NULL` (the default). | | `conf.level` | confidence level for the returned confidence interval. Currently only used for the Pearson product moment correlation coefficient if there are at least 4 complete pairs of observations. | | `continuity` | logical: if true, a continuity correction is used for Kendall's *tau* and Spearman's *rho* when not computed exactly. | | `formula` | a formula of the form `~ u + v`, where each of `u` and `v` is a numeric variable giving the data values for one sample. The samples must be of the same length. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details The three methods each estimate the association between paired samples and compute a test of the value being zero. They use different measures of association, all in the range *[-1, 1]* with *0* indicating no association. These are sometimes referred to as tests of no *correlation*, but that term is often confined to the default method. 
If `method` is `"pearson"`, the test statistic is based on Pearson's product moment correlation coefficient `cor(x, y)` and follows a t distribution with `length(x)-2` degrees of freedom if the samples follow independent normal distributions. If there are at least 4 complete pairs of observations, an asymptotic confidence interval is given based on Fisher's Z transform. If `method` is `"kendall"` or `"spearman"`, Kendall's *tau* or Spearman's *rho* statistic is used to estimate a rank-based measure of association. These tests may be used if the data do not necessarily come from a bivariate normal distribution. For Kendall's test, by default (if `exact` is NULL), an exact p-value is computed if there are less than 50 paired samples containing finite values and there are no ties. Otherwise, the test statistic is the estimate scaled to zero mean and unit variance, and is approximately normally distributed. For Spearman's test, p-values are computed using algorithm AS 89 for *n < 1290* and `exact = TRUE`, otherwise via the asymptotic *t* approximation. Note that these are ‘exact’ for *n < 10*, and use an Edgeworth series approximation for larger sample sizes (the cutoff has been changed from the original paper). ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic. | | `parameter` | the degrees of freedom of the test statistic in the case that it follows a t distribution. | | `p.value` | the p-value of the test. | | `estimate` | the estimated measure of association, with name `"cor"`, `"tau"`, or `"rho"` corresponding to the method employed. | | `null.value` | the value of the association measure under the null hypothesis, always `0`. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | a character string indicating how the association was measured. | | `data.name` | a character string giving the names of the data. | | `conf.int` | a confidence interval for the measure of association. Currently only given for Pearson's product moment correlation coefficient in case of at least 4 complete pairs of observations. | ### References D. J. Best & D. E. Roberts (1975). Algorithm AS 89: The Upper Tail Probabilities of Spearman's *rho*. *Applied Statistics*, **24**, 377–379. doi: [10.2307/2347111](https://doi.org/10.2307/2347111). Myles Hollander & Douglas A. Wolfe (1973), *Nonparametric Statistical Methods.* New York: John Wiley & Sons. Pages 185–194 (Kendall and Spearman tests). ### See Also `[Kendall](../../kendall/html/kendall)` in package [Kendall](https://CRAN.R-project.org/package=Kendall). `[pKendall](../../suppdists/html/kendall)` and `[pSpearman](../../suppdists/html/spearman)` in package [SuppDists](https://CRAN.R-project.org/package=SuppDists), `[spearman.test](../../pspearman/html/spearman.test)` in package [pspearman](https://CRAN.R-project.org/package=pspearman), which supply different (and often more accurate) approximations. ### Examples ``` ## Hollander & Wolfe (1973), p. 187f. ## Assessment of tuna quality. We compare the Hunter L measure of ## lightness to the averages of consumer panel scores (recoded as ## integer values from 1 to 6 and averaged over 80 such values) in ## 9 lots of canned tuna. x <- c(44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 60.1) y <- c( 2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 3.8) ## The alternative hypothesis of interest is that the ## Hunter L value is positively associated with the panel score. 
cor.test(x, y, method = "kendall", alternative = "greater") ## => p=0.05972 cor.test(x, y, method = "kendall", alternative = "greater", exact = FALSE) # using large sample approximation ## => p=0.04765 ## Compare this to cor.test(x, y, method = "spearm", alternative = "g") cor.test(x, y, alternative = "g") ## Formula interface. require(graphics) pairs(USJudgeRatings) cor.test(~ CONT + INTG, data = USJudgeRatings) ``` r None `printCoefmat` Print Coefficient Matrices ------------------------------------------ ### Description Utility function to be used in higher-level `[print](../../base/html/print)` methods, such as those for `<summary.lm>`, `<summary.glm>` and `<anova>`. The goal is to provide a flexible interface with smart defaults such that often, only `x` needs to be specified. ### Usage ``` printCoefmat(x, digits = max(3, getOption("digits") - 2), signif.stars = getOption("show.signif.stars"), signif.legend = signif.stars, dig.tst = max(1, min(5, digits - 1)), cs.ind = 1L:k, tst.ind = k + 1L, zap.ind = integer(), P.values = NULL, has.Pvalue = nc >= 4L && length(cn <- colnames(x)) && substr(cn[nc], 1L, 3L) %in% c("Pr(", "p-v"), eps.Pvalue = .Machine$double.eps, na.print = "NA", quote = FALSE, right = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric matrix like object, to be printed. | | `digits` | minimum number of significant digits to be used for most numbers. | | `signif.stars` | logical; if `TRUE`, P-values are additionally encoded visually as ‘significance stars’ in order to help scanning of long coefficient tables. It defaults to the `show.signif.stars` slot of `[options](../../base/html/options)`. | | `signif.legend` | logical; if `TRUE`, a legend for the ‘significance stars’ is printed provided `signif.stars = TRUE`. | | `dig.tst` | minimum number of significant digits for the test statistics, see `tst.ind`. | | `cs.ind` | indices (integer) of column numbers which are (like) **c**oefficients and **s**tandard errors to be formatted together. | | `tst.ind` | indices (integer) of column numbers for test statistics. | | `zap.ind` | indices (integer) of column numbers which should be formatted by `[zapsmall](../../base/html/zapsmall)`, i.e., by ‘zapping’ values close to 0. | | `P.values` | logical or `NULL`; if `TRUE`, the last column of `x` is formatted by `[format.pval](../../base/html/format.pval)` as P values. If `P.values = NULL`, the default, it is set to `TRUE` only if `[options](../../base/html/options)("show.coef.Pvalue")` is `TRUE` *and* `x` has at least 4 columns *and* the last column name of `x` starts with `"Pr("`. | | `has.Pvalue` | logical; if `TRUE`, the last column of `x` contains P values; in that case, it is printed if and only if `P.values` (above) is true. | | `eps.Pvalue` | number, .. | | `na.print` | a character string to code `[NA](../../base/html/na)` values in printed output. | | `quote, right, ...` | further arguments passed to `[print.default](../../base/html/print.default)`. | ### Value Invisibly returns its argument, `x`. ### Author(s) Martin Maechler ### See Also `[print.summary.lm](summary.lm)`, `[format.pval](../../base/html/format.pval)`, `[format](../../base/html/format)`. 
### Examples ``` cmat <- cbind(rnorm(3, 10), sqrt(rchisq(3, 12))) cmat <- cbind(cmat, cmat[, 1]/cmat[, 2]) cmat <- cbind(cmat, 2*pnorm(-cmat[, 3])) colnames(cmat) <- c("Estimate", "Std.Err", "Z value", "Pr(>z)") printCoefmat(cmat[, 1:3]) printCoefmat(cmat) op <- options(show.coef.Pvalues = FALSE) printCoefmat(cmat, digits = 2) printCoefmat(cmat, digits = 2, P.values = TRUE) options(op) # restore ``` r None `Pair` Construct paired-data object ------------------------------------ ### Description Combines two vectors into an object of class `"Pair"` ### Usage ``` Pair(x, y) ``` ### Arguments | | | | --- | --- | | `x` | Vector. 1st element of pair. | | `y` | Vector. 2nd element of pair. Should be same length as `x` | ### Value A 2-column matrix of class `"Pair"` ### Note Mostly designed as part of the formula interface to paired tests. ### See Also `t.test` and `wilcox.test` r None `ts.union` Bind Two or More Time Series ---------------------------------------- ### Description Bind time series which have a common frequency. `ts.union` pads with `NA`s to the total time coverage, `ts.intersect` restricts to the time covered by all the series. ### Usage ``` ts.intersect(..., dframe = FALSE) ts.union(..., dframe = FALSE) ``` ### Arguments | | | | --- | --- | | `...` | two or more univariate or multivariate time series, or objects which can be coerced to time series. | | `dframe` | logical; if `TRUE` return the result as a data frame. | ### Details As a special case, `...` can contain vectors or matrices of the same length as the combined time series of the time series present, as well as those of a single row. ### Value A time series object if `dframe` is `FALSE`, otherwise a data frame. ### See Also `[cbind](../../base/html/cbind)`. ### Examples ``` ts.union(mdeaths, fdeaths) cbind(mdeaths, fdeaths) # same as the previous line ts.intersect(window(mdeaths, 1976), window(fdeaths, 1974, 1978)) sales1 <- ts.union(BJsales, lead = BJsales.lead) ts.intersect(sales1, lead3 = lag(BJsales.lead, -3)) ``` r None `optimize` One Dimensional Optimization ---------------------------------------- ### Description The function `optimize` searches the interval from `lower` to `upper` for a minimum or maximum of the function `f` with respect to its first argument. `optimise` is an alias for `optimize`. ### Usage ``` optimize(f, interval, ..., lower = min(interval), upper = max(interval), maximum = FALSE, tol = .Machine$double.eps^0.25) optimise(f, interval, ..., lower = min(interval), upper = max(interval), maximum = FALSE, tol = .Machine$double.eps^0.25) ``` ### Arguments | | | | --- | --- | | `f` | the function to be optimized. The function is either minimized or maximized over its first argument depending on the value of `maximum`. | | `interval` | a vector containing the end-points of the interval to be searched for the minimum. | | `...` | additional named or unnamed arguments to be passed to `f`. | | `lower` | the lower end point of the interval to be searched. | | `upper` | the upper end point of the interval to be searched. | | `maximum` | logical. Should we maximize or minimize (the default)? | | `tol` | the desired accuracy. | ### Details Note that arguments after `...` must be matched exactly. The method used is a combination of golden section search and successive parabolic interpolation, and was designed for use with continuous functions. Convergence is never much slower than that for a Fibonacci search. 
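For illustration, the golden section part of the method can be sketched in a few lines of **R**; this is only a sketch of the idea, not the actual implementation (which is in C and adds the parabolic interpolation steps):

```
golden <- function(f, lower, upper, tol = 1e-8) {
  phi <- (sqrt(5) - 1) / 2 # golden section ratio
  ## two interior points dividing the interval in golden ratio
  x1 <- upper - phi * (upper - lower)
  x2 <- lower + phi * (upper - lower)
  f1 <- f(x1); f2 <- f(x2)
  while (upper - lower > tol) {
    if (f1 < f2) { # minimum lies in [lower, x2]; old x1 becomes new x2
      upper <- x2; x2 <- x1; f2 <- f1
      x1 <- upper - phi * (upper - lower); f1 <- f(x1)
    } else {       # minimum lies in [x1, upper]; old x2 becomes new x1
      lower <- x1; x1 <- x2; f1 <- f2
      x2 <- lower + phi * (upper - lower); f2 <- f(x2)
    }
  }
  (lower + upper) / 2
}
golden(function(x) (x - 1/3)^2, 0, 1) # ~ 1/3, cf. the optimize() example below
```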
If `f` has a continuous second derivative which is positive at the minimum (which is not at `lower` or `upper`), then convergence is superlinear, and usually of the order of about 1.324. The function `f` is never evaluated at two points closer together than *eps \** *|x\_0| + (tol/3)*, where *eps* is approximately `sqrt([.Machine](../../base/html/zmachine)$double.eps)` and *x\_0* is the final abscissa `optimize()$minimum`. If `f` is a unimodal function and the computed values of `f` are always unimodal when separated by at least *eps \** *|x| + (tol/3)*, then *x\_0* approximates the abscissa of the global minimum of `f` on the interval `lower,upper` with an error less than *eps \** *|x\_0|+ tol*. If `f` is not unimodal, then `optimize()` may approximate a local, but perhaps non-global, minimum to the same accuracy. The first evaluation of `f` is always at *x\_1 = a + (1-φ)(b-a)* where `(a,b) = (lower, upper)` and *phi = (sqrt(5) - 1)/2 = 0.61803..* is the golden section ratio. Almost always, the second evaluation is at *x\_2 = a + phi(b-a)*. Note that a local minimum inside *[x\_1,x\_2]* will be found as solution, even when `f` is constant in there, see the last example. `f` will be called as `f(x, ...)` for a numeric value of x. The argument passed to `f` has special semantics and used to be shared between calls. The function should not copy it. ### Value A list with components `minimum` (or `maximum`) and `objective` which give the location of the minimum (or maximum) and the value of the function at that point. ### Source A C translation of Fortran code <https://www.netlib.org/fmm/fmin.f> (author(s) unstated) based on the Algol 60 procedure `localmin` given in the reference. ### References Brent, R. (1973) *Algorithms for Minimization without Derivatives.* Englewood Cliffs N.J.: Prentice-Hall. ### See Also `<nlm>`, `<uniroot>`. ### Examples ``` require(graphics) f <- function (x, a) (x - a)^2 xmin <- optimize(f, c(0, 1), tol = 0.0001, a = 1/3) xmin ## See where the function is evaluated: optimize(function(x) x^2*(print(x)-1), lower = 0, upper = 10) ## "wrong" solution with unlucky interval and piecewise constant f(): f <- function(x) ifelse(x > -1, ifelse(x < 4, exp(-1/abs(x - 1)), 10), 10) fp <- function(x) { print(x); f(x) } plot(f, -2,5, ylim = 0:1, col = 2) optimize(fp, c(-4, 20)) # doesn't see the minimum optimize(fp, c(-7, 20)) # ok ``` r None `SignRank` Distribution of the Wilcoxon Signed Rank Statistic -------------------------------------------------------------- ### Description Density, distribution function, quantile function and random generation for the distribution of the Wilcoxon Signed Rank statistic obtained from a sample with size `n`. ### Usage ``` dsignrank(x, n, log = FALSE) psignrank(q, n, lower.tail = TRUE, log.p = FALSE) qsignrank(p, n, lower.tail = TRUE, log.p = FALSE) rsignrank(nn, n) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `nn` | number of observations. If `length(nn) > 1`, the length is taken to be the number required. | | `n` | number(s) of observations in the sample(s). A positive integer, or a vector of such integers. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details This distribution is obtained as follows. Let `x` be a sample of size `n` from a continuous distribution symmetric about the origin. 
Then the Wilcoxon signed rank statistic is the sum of the ranks of the absolute values `x[i]` for which `x[i]` is positive. This statistic takes values between *0* and *n(n+1)/2*, and its mean and variance are *n(n+1)/4* and *n(n+1)(2n+1)/24*, respectively. If either of the first two arguments is a vector, the recycling rule is used to do the calculations for all combinations of the two up to the length of the longer vector. ### Value `dsignrank` gives the density, `psignrank` gives the distribution function, `qsignrank` gives the quantile function, and `rsignrank` generates random deviates. The length of the result is determined by `nn` for `rsignrank`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `nn` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Author(s) Kurt Hornik; efficiency improvement by Ivo Ugrina. ### See Also `<wilcox.test>` to calculate the statistic from data, find p values and so on. [Distributions](distributions) for standard distributions, including `[dwilcox](wilcoxon)` for the distribution of *two-sample* Wilcoxon rank sum statistic. ### Examples ``` require(graphics) par(mfrow = c(2,2)) for(n in c(4:5,10,40)) { x <- seq(0, n*(n+1)/2, length.out = 501) plot(x, dsignrank(x, n = n), type = "l", main = paste0("dsignrank(x, n = ", n, ")")) } ``` r None `nafns` Adjust for Missing Values ---------------------------------- ### Description Use missing value information to adjust residuals and predictions. ### Usage ``` naresid(omit, x, ...) napredict(omit, x, ...) ``` ### Arguments | | | | --- | --- | | `omit` | an object produced by an `<na.action>` function, typically the `"na.action"` attribute of the result of `[na.omit](na.fail)` or `[na.exclude](na.fail)`. | | `x` | a vector, data frame, or matrix to be adjusted based upon the missing value information. | | `...` | further arguments passed to or from other methods. | ### Details These are utility functions used to allow `<predict>`, `[fitted](fitted.values)` and `<residuals>` methods for modelling functions to compensate for the removal of `NA`s in the fitting process. They are used by the default, `"lm"`, `"glm"` and `"nls"` methods, and by further methods in packages [MASS](https://CRAN.R-project.org/package=MASS), [rpart](https://CRAN.R-project.org/package=rpart) and [survival](https://CRAN.R-project.org/package=survival). Also used for the scores returned by `<factanal>`, `<prcomp>` and `<princomp>`. The default methods do nothing. The default method for the `na.exclude` action is to pad the object with `NA`s in the correct positions to have the same number of rows as the original data frame. Currently `naresid` and `napredict` are identical, but future methods need not be. `naresid` is used for residuals, and `napredict` for fitted values, predictions and `<weights>`. ### Value These return a similar object to `x`. ### Note In the early 2000s, packages [rpart](https://CRAN.R-project.org/package=rpart) and survival5 contained versions of these functions that had an `na.omit` action equivalent to that now used for `na.exclude`.
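### Examples A small sketch, not part of the original page, showing the `NA` padding done for an `na.exclude` fit (the data are made up):

```
dd <- data.frame(x = 1:6, y = c(1.1, NA, 2.9, 4.2, NA, 6.1))
fit <- lm(y ~ x, data = dd, na.action = na.exclude)
om <- fit$na.action # "exclude" object recording the omitted rows 2 and 5
residuals(fit) # already padded: length 6 with NA at positions 2 and 5
naresid(om, 1:4) # pads a length-4 vector the same way
napredict(om, 1:4) # currently identical; used for fitted values and predictions
```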
r None `kruskal.test` Kruskal-Wallis Rank Sum Test -------------------------------------------- ### Description Performs a Kruskal-Wallis rank sum test. ### Usage ``` kruskal.test(x, ...) ## Default S3 method: kruskal.test(x, g, ...) ## S3 method for class 'formula' kruskal.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of data values, or a list of numeric data vectors. Non-numeric elements of a list will be coerced, with a warning. | | `g` | a vector or factor object giving the group for the corresponding elements of `x`. Ignored with a warning if `x` is a list. | | `formula` | a formula of the form `response ~ group` where `response` gives the data values and `group` a vector or factor of the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details `kruskal.test` performs a Kruskal-Wallis rank sum test of the null that the location parameters of the distribution of `x` are the same in each group (sample). The alternative is that they differ in at least one. If `x` is a list, its elements are taken as the samples to be compared, and hence have to be numeric data vectors. In this case, `g` is ignored, and one can simply use `kruskal.test(x)` to perform the test. If the samples are not yet contained in a list, use `kruskal.test(list(x, ...))`. Otherwise, `x` must be a numeric data vector, and `g` must be a vector or factor object of the same length as `x` giving the group for the corresponding elements of `x`. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the Kruskal-Wallis rank sum statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | the character string `"Kruskal-Wallis rank sum test"`. | | `data.name` | a character string giving the names of the data. | ### References Myles Hollander and Douglas A. Wolfe (1973), *Nonparametric Statistical Methods.* New York: John Wiley & Sons. Pages 115–120. ### See Also The Wilcoxon rank sum test (`<wilcox.test>`) as the special case for two samples; `<lm>` together with `<anova>` for performing one-way location analysis under normality assumptions; with Student's t test (`<t.test>`) as the special case for two samples. `[wilcox\_test](../../coin/html/locationtests)` in package [coin](https://CRAN.R-project.org/package=coin) for exact, asymptotic and Monte Carlo *conditional* p-values, including in the presence of ties. ### Examples ``` ## Hollander & Wolfe (1973), 116. ## Mucociliary efficiency from the rate of removal of dust in normal ## subjects, subjects with obstructive airway disease, and subjects ## with asbestosis. 
x <- c(2.9, 3.0, 2.5, 2.6, 3.2) # normal subjects y <- c(3.8, 2.7, 4.0, 2.4) # with obstructive airway disease z <- c(2.8, 3.4, 3.7, 2.2, 2.0) # with asbestosis kruskal.test(list(x, y, z)) ## Equivalently, x <- c(x, y, z) g <- factor(rep(1:3, c(5, 4, 5)), labels = c("Normal subjects", "Subjects with obstructive airway disease", "Subjects with asbestosis")) kruskal.test(x, g) ## Formula interface. require(graphics) boxplot(Ozone ~ Month, data = airquality) kruskal.test(Ozone ~ Month, data = airquality) ``` r None `runmed` Running Medians – Robust Scatter Plot Smoothing --------------------------------------------------------- ### Description Compute running medians of odd span. This is the ‘most robust’ scatter plot smoothing possible. For efficiency (and historical reasons), you can use one of two different algorithms giving identical results. ### Usage ``` runmed(x, k, endrule = c("median", "keep", "constant"), algorithm = NULL, na.action = c("+Big_alternate", "-Big_alternate", "na.omit", "fail"), print.level = 0) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector, the ‘dependent’ variable to be smoothed. | | `k` | integer width of median window; must be odd. Turlach had a default of `k <- 1 + 2 * min((n-1)%/% 2, ceiling(0.1*n))`. Use `k = 3` for ‘minimal’ robust smoothing eliminating isolated outliers. | | `endrule` | character string indicating how the values at the beginning and the end (of the data) should be treated. Can be abbreviated. Possible values are: `"keep"` keeps the first and last *k2* values at both ends, where *k2* is the half-bandwidth `k2 = k %/% 2`, i.e., `y[j] = x[j]` for *j = 1, …, k2 and (n-k2+1), …, n*; `"constant"` copies `median(y[1:k2])` to the first values and analogously for the last ones making the smoothed ends *constant*; `"median"` the default, smooths the ends by using symmetrical medians of subsequently smaller bandwidth, but for the very first and last value where Tukey's robust end-point rule is applied, see `[smoothEnds](smoothends)`. | | `algorithm` | character string (partially matching `"Turlach"` or `"Stuetzle"`) or the default `NULL`, specifying which algorithm should be applied. The default choice depends on `n = length(x)` and `k` where `"Turlach"` will be used for larger problems. | | `na.action` | character string determining the behavior in the case of `[NA](../../base/html/na)` or `[NaN](../../base/html/is.finite)` in `x`, (partially matching) one of `"+Big_alternate"` Here, all the NAs in `x` are first replaced by alternating *+/- B* where *B* is a “Big” number (with *2B < M\**, where *M\*=*`[.Machine](../../base/html/zmachine) $ double.xmax`). The replacement values are “from left” *(+B, -B, +B, …)*, i.e. start with `"+"`. `"-Big_alternate"` almost the same as `"+Big_alternate"`, just starting with *-B* (`"-Big..."`). `"na.omit"` the result is the same as `runmed(x[!is.na(x)], k, ..)`. `"fail"` the presence of NAs in `x` will raise an error. | | `print.level` | integer, indicating verboseness of algorithm; should rarely be changed by average users. | ### Details Apart from the end values, the result `y = runmed(x, k)` simply has `y[j] = median(x[(j-k2):(j+k2)])` (`k = 2*k2+1`), computed very efficiently. The two algorithms are internally entirely different: `"Turlach"` is the Härdle–Steiger algorithm (see Ref.) as implemented by Berwin Turlach. A tree algorithm is used, ensuring performance *O(n \* log(k))* where `n = length(x)` which is asymptotically optimal. 
`"Stuetzle"` is the (older) Stuetzle–Friedman implementation which makes use of median *updating* when one observation enters and one leaves the smoothing window. While this performs as *O(n \* k)* which is slower asymptotically, it is considerably faster for small *k* or *n*. Note that, both algorithms (and the `[smoothEnds](smoothends)()` utility) now “work” also when `x` contains non-finite entries (*+/-*`[Inf](../../base/html/is.finite)`, `[NaN](../../base/html/is.finite)`, and `[NA](../../base/html/na)`): `"Turlach"` ....... `"Stuetzle"` currently simply works by applying the underlying math library (‘libm’) arithmetic for the non-finite numbers; this may optionally change in the future. Currently [long vectors](../../base/html/longvectors) are only supported for `algorithm = "Stuetzle"`. ### Value vector of smoothed values of the same length as `x` with an `[attr](../../base/html/attr)`ibute `k` containing (the ‘oddified’) `k`. ### Author(s) Martin Maechler [[email protected]](mailto:[email protected]), based on Fortran code from Werner Stuetzle and S-PLUS and C code from Berwin Turlach. ### References Härdle, W. and Steiger, W. (1995) Algorithm AS 296: Optimal median smoothing, *Applied Statistics* **44**, 258–264. doi: [10.2307/2986349](https://doi.org/10.2307/2986349). Jerome H. Friedman and Werner Stuetzle (1982) *Smoothing of Scatterplots*; Report, Dep. Statistics, Stanford U., Project Orion 003. ### See Also `[smoothEnds](smoothends)` which implements Tukey's end point rule and is called by default from `runmed(*, endrule = "median")`. `<smooth>` uses running medians of 3 for its compound smoothers. ### Examples ``` require(graphics) utils::example(nhtemp) myNHT <- as.vector(nhtemp) myNHT[20] <- 2 * nhtemp[20] plot(myNHT, type = "b", ylim = c(48, 60), main = "Running Medians Example") lines(runmed(myNHT, 7), col = "red") ## special: multiple y values for one x plot(cars, main = "'cars' data and runmed(dist, 3)") lines(cars, col = "light gray", type = "c") with(cars, lines(speed, runmed(dist, k = 3), col = 2)) ## nice quadratic with a few outliers y <- ys <- (-20:20)^2 y [c(1,10,21,41)] <- c(150, 30, 400, 450) all(y == runmed(y, 1)) # 1-neighbourhood <==> interpolation plot(y) ## lines(y, lwd = .1, col = "light gray") lines(lowess(seq(y), y, f = 0.3), col = "brown") lines(runmed(y, 7), lwd = 2, col = "blue") lines(runmed(y, 11), lwd = 2, col = "red") ## Lowess is not robust y <- ys ; y[21] <- 6666 ; x <- seq(y) col <- c("black", "brown","blue") plot(y, col = col[1]) lines(lowess(x, y, f = 0.3), col = col[2]) lines(runmed(y, 7), lwd = 2, col = col[3]) legend(length(y),max(y), c("data", "lowess(y, f = 0.3)", "runmed(y, 7)"), xjust = 1, col = col, lty = c(0, 1, 1), pch = c(1,NA,NA)) ## An example with initial NA's - used to fail badly (notably for "Turlach"): x15 <- c(rep(NA, 4), c(9, 9, 4, 22, 6, 1, 7, 5, 2, 8, 3)) rS15 <- cbind(Sk.3 = runmed(x15, k = 3, algorithm="S"), Sk.7 = runmed(x15, k = 7, algorithm="S"), Sk.11= runmed(x15, k =11, algorithm="S")) rT15 <- cbind(Tk.3 = runmed(x15, k = 3, algorithm="T", print.level=1), Tk.7 = runmed(x15, k = 7, algorithm="T", print.level=1), Tk.9 = runmed(x15, k = 9, algorithm="T", print.level=1), Tk.11= runmed(x15, k =11, algorithm="T", print.level=1)) cbind(x15, rS15, rT15) # result for k=11 maybe a bit surprising .. 
Tv <- rT15[-(1:3),] stopifnot(3 <= Tv, Tv <= 9, 5 <= Tv[1:10,]) matplot(y = cbind(x15, rT15), type = "b", ylim = c(1,9), pch=1:5, xlab = NA, main = "runmed(x15, k, algo = \"Turlach\")") mtext(paste("x15 <-", deparse(x15))) points(x15, cex=2) legend("bottomleft", legend=c("data", paste("k = ", c(3,7,9,11))), bty="n", col=1:5, lty=1:5, pch=1:5) ``` r None `fft` Fast Discrete Fourier Transform (FFT) -------------------------------------------- ### Description Computes the Discrete Fourier Transform (DFT) of an array with a fast algorithm, the “Fast Fourier Transform” (FFT). ### Usage ``` fft(z, inverse = FALSE) mvfft(z, inverse = FALSE) ``` ### Arguments | | | | --- | --- | | `z` | a real or complex array containing the values to be transformed. Long vectors are not supported. | | `inverse` | if `TRUE`, the unnormalized inverse transform is computed (the inverse has a `+` in the exponent of *e*, but here, we do *not* divide by `length(x)`). | ### Value When `z` is a vector, the value computed and returned by `fft` is the unnormalized univariate discrete Fourier transform of the sequence of values in `z`. Specifically, `y <- fft(z)` returns *y[h] = sum\_{k=1}^n z[k]\*exp(-2\*pi\*1i\*(k-1)\*(h-1)/n)* for *h = 1, ..., n* where n = `length(y)`. If `inverse` is `TRUE`, *exp(-2\*pi...)* is replaced with *exp(2\*pi...)*. When `z` contains an array, `fft` computes and returns the multivariate (spatial) transform. If `inverse` is `TRUE`, the (unnormalized) inverse Fourier transform is returned, i.e., if `y <- fft(z)`, then `z` is `fft(y, inverse = TRUE) / length(y)`. By contrast, `mvfft` takes a real or complex matrix as argument, and returns a similar shaped matrix, but with each column replaced by its discrete Fourier transform. This is useful for analyzing vector-valued series. The FFT is fastest when the length of the series being transformed is highly composite (i.e., has many factors). If this is not the case, the transform may take a long time to compute and will use a large amount of memory. ### Source Uses C translation of Fortran code in Singleton (1979). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Singleton, R. C. (1979). Mixed Radix Fast Fourier Transforms, in *Programs for Digital Signal Processing*, IEEE Digital Signal Processing Committee eds. IEEE Press. Cooley, James W., and Tukey, John W. (1965). An algorithm for the machine calculation of complex Fourier series, *Mathematics of Computation*, **19**(90), 297–301. doi: [10.2307/2003354](https://doi.org/10.2307/2003354). ### See Also `<convolve>`, `<nextn>`. ### Examples ``` x <- 1:4 fft(x) fft(fft(x), inverse = TRUE)/length(x) ## Slow Discrete Fourier Transform (DFT) - e.g., for checking the formula fft0 <- function(z, inverse=FALSE) { n <- length(z) if(n == 0) return(z) k <- 0:(n-1) ff <- (if(inverse) 1 else -1) * 2*pi * 1i * k/n vapply(1:n, function(h) sum(z * exp(ff*(h-1))), complex(1)) } relD <- function(x,y) 2* abs(x - y) / abs(x + y) n <- 2^8 z <- complex(n, rnorm(n), rnorm(n)) ## relative differences in the order of 4*10^{-14} : summary(relD(fft(z), fft0(z))) summary(relD(fft(z, inverse=TRUE), fft0(z, inverse=TRUE))) ``` r None `splinefun` Interpolating Splines ---------------------------------- ### Description Perform cubic (or Hermite) spline interpolation of given data points, returning either a list of points obtained by the interpolation or a *function* performing the interpolation. 
### Usage ``` splinefun(x, y = NULL, method = c("fmm", "periodic", "natural", "monoH.FC", "hyman"), ties = mean) spline(x, y = NULL, n = 3*length(x), method = "fmm", xmin = min(x), xmax = max(x), xout, ties = mean) splinefunH(x, y, m) ``` ### Arguments | | | | --- | --- | | `x, y` | vectors giving the coordinates of the points to be interpolated. Alternatively a single plotting structure can be specified: see `[xy.coords](../../grdevices/html/xy.coords)`. `y` must be increasing or decreasing for `method = "hyman"`. | | `m` | (for `splinefunH()`): vector of *slopes* *m[i]* at the points *(x[i],y[i])*; these together determine the **H**ermite “spline” which is piecewise cubic and (only) *once* continuously differentiable. | | `method` | specifies the type of spline to be used. Possible values are `"fmm"`, `"natural"`, `"periodic"`, `"monoH.FC"` and `"hyman"`. Can be abbreviated. | | `n` | if `xout` is left unspecified, interpolation takes place at `n` equally spaced points spanning the interval [`xmin`, `xmax`]. | | `xmin, xmax` | left-hand and right-hand endpoint of the interpolation interval (when `xout` is unspecified). | | `xout` | an optional set of values specifying where interpolation is to take place. | | `ties` | handling of tied `x` values. The string `"ordered"` or a function (or the name of a function) taking a single vector argument and returning a single number or a length-2 `[list](../../base/html/list)` of both, see `[approx](approxfun)` and its ‘Details’ section, and the example below. | ### Details The inputs can contain missing values which are deleted, so at least one complete `(x, y)` pair is required. If `method = "fmm"`, the spline used is that of Forsythe, Malcolm and Moler (an exact cubic is fitted through the four points at each end of the data, and this is used to determine the end conditions). Natural splines are used when `method = "natural"`, and periodic splines when `method = "periodic"`. The method `"monoH.FC"` computes a *monotone* Hermite spline according to the method of Fritsch and Carlson. It does so by determining slopes such that the Hermite spline, determined by *(x[i],y[i],m[i])*, is monotone (increasing or decreasing) **iff** the data are. Method `"hyman"` computes a *monotone* cubic spline using Hyman filtering of a `method = "fmm"` fit for strictly monotonic inputs. These interpolation splines can also be used for extrapolation, that is, prediction at points outside the range of `x`. Extrapolation makes little sense for `method = "fmm"`; for natural splines it is linear using the slope of the interpolating curve at the nearest data point. ### Value `spline` returns a list containing components `x` and `y` which give the ordinates where interpolation took place and the interpolated values. `splinefun` returns a function with formal arguments `x` and `deriv`, the latter defaulting to zero. This function can be used to evaluate the interpolating cubic spline (`deriv` = 0), or its derivatives (`deriv` = 1, 2, 3) at the points `x`, where the spline function interpolates the data points originally specified. It uses data stored in its environment when it was created, the details of which are subject to change. ### Warning The value returned by `splinefun` contains references to the code in the current version of **R**: it is not intended to be saved and loaded into a different **R** session. This is safer in **R** >= 3.0.0. ### Author(s) R Core Team. Simon Wood for the original code for Hyman filtering. ### References Becker, R. A., Chambers, J. M.
and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Dougherty, R. L., Edelman, A. and Hyman, J. M. (1989) Positivity-, monotonicity-, or convexity-preserving cubic and quintic Hermite interpolation. *Mathematics of Computation*, **52**, 471–494. doi: [10.1090/S0025-5718-1989-0962209-1](https://doi.org/10.1090/S0025-5718-1989-0962209-1). Forsythe, G. E., Malcolm, M. A. and Moler, C. B. (1977). *Computer Methods for Mathematical Computations*. Wiley. Fritsch, F. N. and Carlson, R. E. (1980). Monotone piecewise cubic interpolation. *SIAM Journal on Numerical Analysis*, **17**, 238–246. doi: [10.1137/0717021](https://doi.org/10.1137/0717021). Hyman, J. M. (1983). Accurate monotonicity preserving cubic interpolation. *SIAM Journal on Scientific and Statistical Computing*, **4**, 645–654. doi: [10.1137/0904045](https://doi.org/10.1137/0904045). ### See Also `[approx](approxfun)` and `<approxfun>` for constant and linear interpolation. Package splines, especially `[interpSpline](../../splines/html/interpspline)` and `[periodicSpline](../../splines/html/periodicspline)` for interpolation splines. That package also generates spline bases that can be used for regression splines. `<smooth.spline>` for smoothing splines. ### Examples ``` require(graphics) op <- par(mfrow = c(2,1), mgp = c(2,.8,0), mar = 0.1+c(3,3,3,1)) n <- 9 x <- 1:n y <- rnorm(n) plot(x, y, main = paste("spline[fun](.) through", n, "points")) lines(spline(x, y)) lines(spline(x, y, n = 201), col = 2) y <- (x-6)^2 plot(x, y, main = "spline(.) -- 3 methods") lines(spline(x, y, n = 201), col = 2) lines(spline(x, y, n = 201, method = "natural"), col = 3) lines(spline(x, y, n = 201, method = "periodic"), col = 4) legend(6, 25, c("fmm","natural","periodic"), col = 2:4, lty = 1) y <- sin((x-0.5)*pi) f <- splinefun(x, y) ls(envir = environment(f)) splinecoef <- get("z", envir = environment(f)) curve(f(x), 1, 10, col = "green", lwd = 1.5) points(splinecoef, col = "purple", cex = 2) curve(f(x, deriv = 1), 1, 10, col = 2, lwd = 1.5) curve(f(x, deriv = 2), 1, 10, col = 2, lwd = 1.5, n = 401) curve(f(x, deriv = 3), 1, 10, col = 2, lwd = 1.5, n = 401) par(op) ## Manual spline evaluation --- demo the coefficients : .x <- splinecoef$x u <- seq(3, 6, by = 0.25) (ii <- findInterval(u, .x)) dx <- u - .x[ii] f.u <- with(splinecoef, y[ii] + dx*(b[ii] + dx*(c[ii] + dx* d[ii]))) stopifnot(all.equal(f(u), f.u)) ## An example with ties (non-unique x values): set.seed(1); x <- round(rnorm(30), 1); y <- sin(pi * x) + rnorm(30)/10 plot(x, y, main = "spline(x,y) when x has ties") lines(spline(x, y, n = 201), col = 2) ## visualizes the non-unique ones: tx <- table(x); mx <- as.numeric(names(tx[tx > 1])) ry <- matrix(unlist(tapply(y, match(x, mx), range, simplify = FALSE)), ncol = 2, byrow = TRUE) segments(mx, ry[, 1], mx, ry[, 2], col = "blue", lwd = 2) ## Another example with sorted x, but ties: set.seed(8); x <- sort(round(rnorm(30), 1)); y <- round(sin(pi * x) + rnorm(30)/10, 3) summary(diff(x) == 0) # -> 7 duplicated x-values str(spline(x, y, n = 201, ties="ordered")) # all '$y' entries are NaN ## The default (ties=mean) is ok, but most efficient to use instead is sxyo <- spline(x, y, n = 201, ties= list("ordered", mean)) sapply(sxyo, summary)# all fine now plot(x, y, main = "spline(x,y, ties=list(\"ordered\", mean)) for when x has ties") lines(sxyo, col="blue") ## An example of monotone interpolation n <- 20 set.seed(11) x. <- sort(runif(n)) ; y. <- cumsum(abs(rnorm(n))) plot(x., y.) 
curve(splinefun(x., y.)(x), add = TRUE, col = 2, n = 1001) curve(splinefun(x., y., method = "monoH.FC")(x), add = TRUE, col = 3, n = 1001) curve(splinefun(x., y., method = "hyman") (x), add = TRUE, col = 4, n = 1001) legend("topleft", paste0("splinefun( \"", c("fmm", "monoH.FC", "hyman"), "\" )"), col = 2:4, lty = 1, bty = "n") ## and one from Fritsch and Carlson (1980), Dougherty et al (1989) x. <- c(7.09, 8.09, 8.19, 8.7, 9.2, 10, 12, 15, 20) f <- c(0, 2.76429e-5, 4.37498e-2, 0.169183, 0.469428, 0.943740, 0.998636, 0.999919, 0.999994) s0 <- splinefun(x., f) s1 <- splinefun(x., f, method = "monoH.FC") s2 <- splinefun(x., f, method = "hyman") plot(x., f, ylim = c(-0.2, 1.2)) curve(s0(x), add = TRUE, col = 2, n = 1001) -> m0 curve(s1(x), add = TRUE, col = 3, n = 1001) curve(s2(x), add = TRUE, col = 4, n = 1001) legend("right", paste0("splinefun( \"", c("fmm", "monoH.FC", "hyman"), "\" )"), col = 2:4, lty = 1, bty = "n") ## they seem identical, but are not quite: xx <- m0$x plot(xx, s1(xx) - s2(xx), type = "l", col = 2, lwd = 2, main = "Difference monoH.FC - hyman"); abline(h = 0, lty = 3) x <- xx[xx < 10.2] ## full range: x <- xx .. does not show enough ccol <- adjustcolor(2:4, 0.8) matplot(x, cbind(s0(x, deriv = 2), s1(x, deriv = 2), s2(x, deriv = 2))^2, lwd = 2, col = ccol, type = "l", ylab = quote({{f*second}(x)}^2), main = expression({{f*second}(x)}^2 ~" for the three 'splines'")) legend("topright", paste0("splinefun( \"", c("fmm", "monoH.FC", "hyman"), "\" )"), lwd = 2, col = ccol, lty = 1:3, bty = "n") ## --> "hyman" has slightly smaller Integral f''(x)^2 dx than "FC", ## here, and both are 'much worse' than the regular fmm spline. ```
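A small supplement to the examples above: `splinefunH()` takes the slopes `m` explicitly (knots, values and slopes here are arbitrary):

```
## Hermite interpolation with prescribed slopes m[i] at the knots
h <- splinefunH(x = c(0, 1, 2), y = c(0, 1, 0), m = c(0, 0, 0))
h(c(0.5, 1, 1.5))  # reproduces y at the knots, with flat tangents there
```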
r None `alias` Find Aliases (Dependencies) in a Model ----------------------------------------------- ### Description Find aliases (linearly dependent terms) in a linear model specified by a formula. ### Usage ``` alias(object, ...) ## S3 method for class 'formula' alias(object, data, ...) ## S3 method for class 'lm' alias(object, complete = TRUE, partial = FALSE, partial.pattern = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | A fitted model object, for example from `lm` or `aov`, or a formula for `alias.formula`. | | `data` | Optionally, a data frame to search for the objects in the formula. | | `complete` | Should information on complete aliasing be included? | | `partial` | Should information on partial aliasing be included? | | `partial.pattern` | Should partial aliasing be presented in a schematic way? If this is done, the results are presented in a more compact way, usually giving the deciles of the coefficients. | | `...` | further arguments passed to or from other methods. | ### Details Although the main method is for class `"lm"`, `alias` is most useful for experimental designs and so is used with fits from `aov`. Complete aliasing refers to effects in linear models that cannot be estimated independently of the terms which occur earlier in the model and so have their coefficients omitted from the fit. Partial aliasing refers to effects that can be estimated less precisely because of correlations induced by the design. Some parts of the `"lm"` method require recommended package [MASS](https://CRAN.R-project.org/package=MASS) to be installed. ### Value A list (of `[class](../../base/html/class)` `"<listof>"`) containing components | | | | --- | --- | | `Model` | Description of the model; usually the formula. | | `Complete` | A matrix with columns corresponding to effects that are linearly dependent on the rows. | | `Partial` | The correlations of the estimable effects, with a zero diagonal. An object of class `"mtable"` which has its own `[print](../../base/html/print)` method. | ### Note The aliasing pattern may depend on the contrasts in use: Helmert contrasts are probably most useful. The defaults are different from those in S. ### Author(s) The design was inspired by the S function of the same name described in Chambers *et al* (1992). ### References Chambers, J. M., Freeny, A and Heiberger, R. M. (1992) *Analysis of variance; designed experiments.* Chapter 5 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### Examples ``` op <- options(contrasts = c("contr.helmert", "contr.poly")) npk.aov <- aov(yield ~ block + N*P*K, npk) alias(npk.aov) options(op) # reset ``` r None `deriv` Symbolic and Algorithmic Derivatives of Simple Expressions ------------------------------------------------------------------- ### Description Compute derivatives of simple expressions, symbolically and algorithmically. ### Usage ``` D (expr, name) deriv(expr, ...) deriv3(expr, ...) ## Default S3 method: deriv(expr, namevec, function.arg = NULL, tag = ".expr", hessian = FALSE, ...) ## S3 method for class 'formula' deriv(expr, namevec, function.arg = NULL, tag = ".expr", hessian = FALSE, ...) ## Default S3 method: deriv3(expr, namevec, function.arg = NULL, tag = ".expr", hessian = TRUE, ...) ## S3 method for class 'formula' deriv3(expr, namevec, function.arg = NULL, tag = ".expr", hessian = TRUE, ...) 
``` ### Arguments | | | | --- | --- | | `expr` | an `[expression](../../base/html/expression)` or `[call](../../base/html/call)` or (except `D`) a formula with no lhs. | | `name,namevec` | character vector, giving the variable names (only one for `D()`) with respect to which derivatives will be computed. | | `function.arg` | if specified and non-`NULL`, a character vector of arguments for a function return, or a function (with empty body) or `TRUE`, the latter indicating that a function with argument names `namevec` should be used. | | `tag` | character; the prefix to be used for the locally created variables in result. | | `hessian` | a logical value indicating whether the second derivatives should be calculated and incorporated in the return value. | | `...` | arguments to be passed to or from methods. | ### Details `D` is modelled after its S namesake for taking simple symbolic derivatives. `deriv` is a *generic* function with a default and a `<formula>` method. It returns a `[call](../../base/html/call)` for computing the `expr` and its (partial) derivatives simultaneously. It uses so-called *algorithmic derivatives*. If `function.arg` is a function, its arguments can have default values; see the `fx` example below. Currently, `deriv.formula` just calls `deriv.default` after extracting the expression to the right of `~`. `deriv3` and its methods are equivalent to `deriv` and its methods except that `hessian` defaults to `TRUE` for `deriv3`. The internal code knows about the arithmetic operators `+`, `-`, `*`, `/` and `^`, and the single-variable functions `exp`, `log`, `sin`, `cos`, `tan`, `sinh`, `cosh`, `sqrt`, `pnorm`, `dnorm`, `asin`, `acos`, `atan`, `gamma`, `lgamma`, `digamma` and `trigamma`, as well as `psigamma` for one or two arguments (but derivative only with respect to the first). (Note that only the standard normal distribution is considered.) Since **R** 3.4.0, the single-variable functions `[log1p](../../base/html/log)`, `expm1`, `log2`, `log10`, `[cospi](../../base/html/trig)`, `sinpi`, `tanpi`, `[factorial](../../base/html/special)`, and `lfactorial` are supported as well. ### Value `D` returns a call and therefore can easily be iterated for higher derivatives. `deriv` and `deriv3` normally return an `[expression](../../base/html/expression)` object whose evaluation returns the function values with a `"gradient"` attribute containing the gradient matrix. If `hessian` is `TRUE` the evaluation also returns a `"hessian"` attribute containing the Hessian array. If `function.arg` is not `NULL`, `deriv` and `deriv3` return a function with those arguments rather than an expression. ### References Griewank, A. and Corliss, G. F. (1991) *Automatic Differentiation of Algorithms: Theory, Implementation, and Application*. SIAM proceedings, Philadelphia. Bates, D. M. and Chambers, J. M. (1992) *Nonlinear models.* Chapter 10 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
### See Also `<nlm>` and `<optim>` for numeric minimization which could make use of derivatives. ### Examples ``` ## formula argument : dx2x <- deriv(~ x^2, "x") ; dx2x ## Not run: expression({ .value <- x^2 .grad <- array(0, c(length(.value), 1), list(NULL, c("x"))) .grad[, "x"] <- 2 * x attr(.value, "gradient") <- .grad .value }) ## End(Not run) mode(dx2x) x <- -1:2 eval(dx2x) ## Something 'tougher': trig.exp <- expression(sin(cos(x + y^2))) ( D.sc <- D(trig.exp, "x") ) all.equal(D(trig.exp[[1]], "x"), D.sc) ( dxy <- deriv(trig.exp, c("x", "y")) ) y <- 1 eval(dxy) eval(D.sc) ## function returned: deriv((y ~ sin(cos(x) * y)), c("x","y"), function.arg = TRUE) ## function with defaulted arguments: (fx <- deriv(y ~ b0 + b1 * 2^(-x/th), c("b0", "b1", "th"), function(b0, b1, th, x = 1:7){} ) ) fx(2, 3, 4) ## First derivative D(expression(x^2), "x") stopifnot(D(as.name("x"), "x") == 1) ## Higher derivatives deriv3(y ~ b0 + b1 * 2^(-x/th), c("b0", "b1", "th"), c("b0", "b1", "th", "x") ) ## Higher derivatives: DD <- function(expr, name, order = 1) { if(order < 1) stop("'order' must be >= 1") if(order == 1) D(expr, name) else DD(D(expr, name), name, order - 1) } DD(expression(sin(x^2)), "x", 3) ## showing the limits of the internal "simplify()" : ## Not run: -sin(x^2) * (2 * x) * 2 + ((cos(x^2) * (2 * x) * (2 * x) + sin(x^2) * 2) * (2 * x) + sin(x^2) * (2 * x) * 2) ## End(Not run) ## New (R 3.4.0, 2017): D(quote(log1p(x^2)), "x") ## log1p(x) = log(1 + x) stopifnot(identical( D(quote(log1p(x^2)), "x"), D(quote(log(1+x^2)), "x"))) D(quote(expm1(x^2)), "x") ## expm1(x) = exp(x) - 1 stopifnot(identical( D(quote(expm1(x^2)), "x") -> Dex1, D(quote(exp(x^2)-1), "x")), identical(Dex1, quote(exp(x^2) * (2 * x)))) D(quote(sinpi(x^2)), "x") ## sinpi(x) = sin(pi*x) D(quote(cospi(x^2)), "x") ## cospi(x) = cos(pi*x) D(quote(tanpi(x^2)), "x") ## tanpi(x) = tan(pi*x) stopifnot(identical(D(quote(log2 (x^2)), "x"), quote(2 * x/(x^2 * log(2)))), identical(D(quote(log10(x^2)), "x"), quote(2 * x/(x^2 * log(10))))) ``` r None `monthplot` Plot a Seasonal or other Subseries from a Time Series ------------------------------------------------------------------ ### Description These functions plot seasonal (or other) subseries of a time series. For each season (or other category), a time series is plotted. ### Usage ``` monthplot(x, ...) ## S3 method for class 'stl' monthplot(x, labels = NULL, ylab = choice, choice = "seasonal", ...) ## S3 method for class 'StructTS' monthplot(x, labels = NULL, ylab = choice, choice = "sea", ...) ## S3 method for class 'ts' monthplot(x, labels = NULL, times = time(x), phase = cycle(x), ylab = deparse1(substitute(x)), ...) ## Default S3 method: monthplot(x, labels = 1L:12L, ylab = deparse1(substitute(x)), times = seq_along(x), phase = (times - 1L)%%length(labels) + 1L, base = mean, axes = TRUE, type = c("l", "h"), box = TRUE, add = FALSE, col = par("col"), lty = par("lty"), lwd = par("lwd"), col.base = col, lty.base = lty, lwd.base = lwd, ...) ``` ### Arguments | | | | --- | --- | | `x` | Time series or related object. | | `labels` | Labels to use for each ‘season’. | | `ylab` | y label. | | `times` | Time of each observation. | | `phase` | Indicator for each ‘season’. | | `base` | Function to use for reference line for subseries. | | `choice` | Which series of an `stl` or `StructTS` object? | | `...` | Arguments to be passed to the default method or graphical parameters. | | `axes` | Should axes be drawn (ignored if `add = TRUE`)? | | `type` | Type of plot.
The default is to join the points with lines, and `"h"` is for histogram-like vertical lines. | | `box` | Should a box be drawn (ignored if `add = TRUE`)? | | `add` | Should this just add to an existing plot? | | `col, lty, lwd` | Graphics parameters for the series. | | `col.base, lty.base, lwd.base` | Graphics parameters for the segments used for the reference lines. | ### Details These functions extract subseries from a time series and plot them all in one frame. The `<ts>`, `<stl>`, and `[StructTS](structts)` methods use the internally recorded frequency and start and finish times to set the scale and the seasons. The default method assumes observations come in groups of 12 (though this can be changed). If the `labels` are not given but the `phase` is given, then the `labels` default to the unique values of the `phase`. If both are given, then the `phase` values are assumed to be indices into the `labels` array, i.e., they should be in the range from 1 to `length(labels)`. ### Value These functions are executed for their side effect of drawing a seasonal subseries plot on the current graphical window. ### Author(s) Duncan Murdoch ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<ts>`, `<stl>`, `[StructTS](structts)` ### Examples ``` require(graphics) ## The CO2 data fit <- stl(log(co2), s.window = 20, t.window = 20) plot(fit) op <- par(mfrow = c(2,2)) monthplot(co2, ylab = "data", cex.axis = 0.8) monthplot(fit, choice = "seasonal", cex.axis = 0.8) monthplot(fit, choice = "trend", cex.axis = 0.8) monthplot(fit, choice = "remainder", type = "h", cex.axis = 0.8) par(op) ## The CO2 data, grouped quarterly quarter <- (cycle(co2) - 1) %/% 3 monthplot(co2, phase = quarter) ## see also JohnsonJohnson ``` r None `nlminb` Optimization using PORT routines ------------------------------------------ ### Description Unconstrained and box-constrained optimization using PORT routines. For historical compatibility. ### Usage ``` nlminb(start, objective, gradient = NULL, hessian = NULL, ..., scale = 1, control = list(), lower = -Inf, upper = Inf) ``` ### Arguments | | | | --- | --- | | `start` | numeric vector, initial values for the parameters to be optimized. | | `objective` | Function to be minimized. Must return a scalar value. The first argument to `objective` is the vector of parameters to be optimized, whose initial values are supplied through `start`. Further arguments (fixed during the course of the optimization) to `objective` may be specified as well (see `...`). | | `gradient` | Optional function that takes the same arguments as `objective` and evaluates the gradient of `objective` at its first argument. Must return a vector as long as `start`. | | `hessian` | Optional function that takes the same arguments as `objective` and evaluates the hessian of `objective` at its first argument. Must return a square matrix of order `length(start)`. Only the lower triangle is used. | | `...` | Further arguments to be supplied to `objective`. | | `scale` | See PORT documentation (or leave alone). | | `control` | A list of control parameters. See below for details. | | `lower, upper` | vectors of lower and upper bounds, replicated to be as long as `start`. If unspecified, all parameters are assumed to be unconstrained. | ### Details Any names of `start` are passed on to `objective` and, where applicable, `gradient` and `hessian`. The parameter vector will be coerced to double.
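For instance (a minimal sketch; the quadratic objective is arbitrary):

```
## names given to 'start' are carried through to 'objective'
obj <- function(p) (p[["a"]] - 3)^2 + (p[["b"]] + 1)^2
nlminb(start = c(a = 0, b = 0), objective = obj)$par
## converges to a = 3, b = -1 (up to numerical tolerance)
```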
If any of the functions returns `NA` or `NaN`, this is an error for the gradient and Hessian; for evaluations of the objective function itself, such values are replaced by `+Inf` with a warning. ### Value A list with components: | | | | --- | --- | | `par` | The best set of parameters found. | | `objective` | The value of `objective` corresponding to `par`. | | `convergence` | An integer code. `0` indicates successful convergence. | | `message` | A character string giving any additional information returned by the optimizer, or `NULL`. For details, see PORT documentation. | | `iterations` | Number of iterations performed. | | `evaluations` | Number of objective function and gradient function evaluations. | ### Control parameters Possible names in the `control` list and their default values are: `eval.max` Maximum number of evaluations of the objective function allowed. Defaults to 200. `iter.max` Maximum number of iterations allowed. Defaults to 150. `trace` The value of the objective function and the parameters is printed every `trace`-th iteration. Defaults to 0, which indicates that no trace information is to be printed. `abs.tol` Absolute tolerance. Defaults to 0 so the absolute convergence test is not used. If the objective function is known to be non-negative, the previous default of `1e-20` would be more appropriate. `rel.tol` Relative tolerance. Defaults to `1e-10`. `x.tol` X tolerance. Defaults to `1.5e-8`. `xf.tol` False convergence tolerance. Defaults to `2.2e-14`. `step.min, step.max` Minimum and maximum step size. Both default to `1.`. `sing.tol` Singular convergence tolerance; defaults to `rel.tol`. `scale.init` ... `diff.g` An estimated bound on the relative error in the objective function value. ### Author(s) **R** port: Douglas Bates and Deepayan Sarkar. Underlying Fortran code by David M. Gay. ### Source <https://www.netlib.org/port/> ### References David M. Gay (1990), Usage summary for selected optimization routines. Computing Science Technical Report 153, AT&T Bell Laboratories, Murray Hill. ### See Also `<optim>` (which is preferred) and `<nlm>`. `<optimize>` for one-dimensional minimization and `[constrOptim](constroptim)` for constrained optimization.
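A one-line illustration of the `control` list described above (arbitrary quadratic objective), ahead of the fuller examples:

```
## print progress at every iteration and cap the iteration count
nlminb(c(1, 1), function(p) sum((p - 2)^2),
       control = list(trace = 1, iter.max = 5))
```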
### Examples ``` x <- rnbinom(100, mu = 10, size = 10) hdev <- function(par) -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE)) nlminb(c(9, 12), hdev) nlminb(c(20, 20), hdev, lower = 0, upper = Inf) nlminb(c(20, 20), hdev, lower = 0.001, upper = Inf) ## slightly modified from the S-PLUS help page for nlminb # this example minimizes a sum of squares with known solution y sumsq <- function( x, y) {sum((x-y)^2)} y <- rep(1,5) x0 <- rnorm(length(y)) nlminb(start = x0, sumsq, y = y) # now use bounds with a y that has some components outside the bounds y <- c( 0, 2, 0, -2, 0) nlminb(start = x0, sumsq, lower = -1, upper = 1, y = y) # try using the gradient sumsq.g <- function(x, y) 2*(x-y) nlminb(start = x0, sumsq, sumsq.g, lower = -1, upper = 1, y = y) # now use the hessian, too sumsq.h <- function(x, y) diag(2, nrow = length(x)) nlminb(start = x0, sumsq, sumsq.g, sumsq.h, lower = -1, upper = 1, y = y) ## Rest lifted from optim help page fr <- function(x) { ## Rosenbrock Banana function x1 <- x[1] x2 <- x[2] 100 * (x2 - x1 * x1)^2 + (1 - x1)^2 } grr <- function(x) { ## Gradient of 'fr' x1 <- x[1] x2 <- x[2] c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1), 200 * (x2 - x1 * x1)) } nlminb(c(-1.2,1), fr) nlminb(c(-1.2,1), fr, grr) flb <- function(x) { p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) } ## 25-dimensional box constrained ## par[24] is *not* at boundary nlminb(rep(3, 25), flb, lower = rep(2, 25), upper = rep(4, 25)) ## trying to use a too small tolerance: r <- nlminb(rep(3, 25), flb, control = list(rel.tol = 1e-16)) stopifnot(grepl("rel.tol", r$message)) ``` r None `arima0` ARIMA Modelling of Time Series – Preliminary Version -------------------------------------------------------------- ### Description Fit an ARIMA model to a univariate time series, and forecast from the fitted model. ### Usage ``` arima0(x, order = c(0, 0, 0), seasonal = list(order = c(0, 0, 0), period = NA), xreg = NULL, include.mean = TRUE, delta = 0.01, transform.pars = TRUE, fixed = NULL, init = NULL, method = c("ML", "CSS"), n.cond, optim.control = list()) ## S3 method for class 'arima0' predict(object, n.ahead = 1, newxreg, se.fit = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a univariate time series | | `order` | A specification of the non-seasonal part of the ARIMA model: the three components *(p, d, q)* are the AR order, the degree of differencing, and the MA order. | | `seasonal` | A specification of the seasonal part of the ARIMA model, plus the period (which defaults to `frequency(x)`). This should be a list with components `order` and `period`, but a specification of just a numeric vector of length 3 will be turned into a suitable list with the specification as the `order`. | | `xreg` | Optionally, a vector or matrix of external regressors, which must have the same number of rows as `x`. | | `include.mean` | Should the ARIMA model include a mean term? The default is `TRUE` for undifferenced series, `FALSE` for differenced ones (where a mean would not affect the fit nor predictions). | | `delta` | A value to indicate at which point ‘fast recursions’ should be used. See the ‘Details’ section. | | `transform.pars` | Logical. If true, the AR parameters are transformed to ensure that they remain in the region of stationarity. Not used for `method = "CSS"`. | | `fixed` | optional numeric vector of the same length as the total number of parameters. If supplied, only `NA` entries in `fixed` will be varied. 
`transform.pars = TRUE` will be overridden (with a warning) if any ARMA parameters are fixed. | | `init` | optional numeric vector of initial parameter values. Missing values will be filled in, by zeroes except for regression coefficients. Values already specified in `fixed` will be ignored. | | `method` | Fitting method: maximum likelihood or minimize conditional sum-of-squares. Can be abbreviated. | | `n.cond` | Only used if fitting by conditional-sum-of-squares: the number of initial observations to ignore. It will be ignored if less than the maximum lag of an AR term. | | `optim.control` | List of control parameters for `<optim>`. | | `object` | The result of an `arima0` fit. | | `newxreg` | New values of `xreg` to be used for prediction. Must have at least `n.ahead` rows. | | `n.ahead` | The number of steps ahead for which prediction is required. | | `se.fit` | Logical: should standard errors of prediction be returned? | | `...` | arguments passed to or from other methods. | ### Details Different definitions of ARMA models have different signs for the AR and/or MA coefficients. The definition here has *X[t] = a[1]X[t-1] + … + a[p]X[t-p] + e[t] + b[1]e[t-1] + … + b[q]e[t-q]* and so the MA coefficients differ in sign from those of S-PLUS. Further, if `include.mean` is true, this formula applies to *X-m* rather than *X*. For ARIMA models with differencing, the differenced series follows a zero-mean ARMA model. The variance matrix of the estimates is found from the Hessian of the log-likelihood, and so may only be a rough guide, especially for fits close to the boundary of invertibility. Optimization is done by `<optim>`. It will work best if the columns in `xreg` are roughly scaled to zero mean and unit variance, but does attempt to estimate suitable scalings. Finite-history prediction is used. This is only statistically efficient if the MA part of the fit is invertible, so `predict.arima0` will give a warning for non-invertible MA models. ### Value For `arima0`, a list of class `"arima0"` with components: | | | | --- | --- | | `coef` | a vector of AR, MA and regression coefficients, | | `sigma2` | the MLE of the innovations variance. | | `var.coef` | the estimated variance matrix of the coefficients `coef`. | | `loglik` | the maximized log-likelihood (of the differenced data), or the approximation to it used. | | `arma` | A compact form of the specification, as a vector giving the number of AR, MA, seasonal AR and seasonal MA coefficients, plus the period and the number of non-seasonal and seasonal differences. | | `aic` | the AIC value corresponding to the log-likelihood. Only valid for `method = "ML"` fits. | | `residuals` | the fitted innovations. | | `call` | the matched call. | | `series` | the name of the series `x`. | | `convergence` | the value returned by `<optim>`. | | `n.cond` | the number of initial observations not used in the fitting. | For `predict.arima0`, a time series of predictions, or if `se.fit = TRUE`, a list with components `pred`, the predictions, and `se`, the estimated standard errors. Both components are time series. ### Fitting methods The exact likelihood is computed via a state-space representation of the ARMA process, and the innovations and their variance found by a Kalman filter based on Gardner *et al* (1980). This has the option to switch to ‘fast recursions’ (assume an effectively infinite past) if the innovations variance is close enough to its asymptotic bound. 
The argument `delta` sets the tolerance: at its default value the approximation is normally negligible and the speed-up considerable. Exact computations can be ensured by setting `delta` to a negative value. If `transform.pars` is true, the optimization is done using an alternative parametrization which is a variation on that suggested by Jones (1980) and ensures that the model is stationary. For an AR(p) model the parametrization is via the inverse tanh of the partial autocorrelations: the same procedure is applied (separately) to the AR and seasonal AR terms. The MA terms are also constrained to be invertible during optimization by the same transformation if `transform.pars` is true. Note that the MLE for MA terms does sometimes occur for MA polynomials with unit roots: such models can be fitted by using `transform.pars = FALSE` and specifying a good set of initial values (often obtainable from a fit with `transform.pars = TRUE`). Missing values are allowed, but any missing values will force `delta` to be ignored and full recursions used. Note that missing values will be propagated by differencing, so the procedure used in this function is not fully efficient in that case. Conditional sum-of-squares is provided mainly for expositional purposes. This computes the sum of squares of the fitted innovations from observation `n.cond` on, (where `n.cond` is at least the maximum lag of an AR term), treating all earlier innovations to be zero. Argument `n.cond` can be used to allow comparability between different fits. The ‘part log-likelihood’ is the first term, half the log of the estimated mean square. Missing values are allowed, but will cause many of the innovations to be missing. When regressors are specified, they are orthogonalized prior to fitting unless any of the coefficients is fixed. It can be helpful to roughly scale the regressors to zero mean and unit variance. ### Note This is a preliminary version, and will be replaced by `<arima>`. The standard errors of prediction exclude the uncertainty in the estimation of the ARMA model and the regression coefficients. The results are likely to be different from S-PLUS's `arima.mle`, which computes a conditional likelihood and does not include a mean in the model. Further, the convention used by `arima.mle` reverses the signs of the MA coefficients. ### References Brockwell, P. J. and Davis, R. A. (1996). *Introduction to Time Series and Forecasting*. Springer, New York. Sections 3.3 and 8.3. Gardner, G, Harvey, A. C. and Phillips, G. D. A. (1980). Algorithm AS 154: An algorithm for exact maximum likelihood estimation of autoregressive-moving average models by means of Kalman filtering. *Applied Statistics*, **29**, 311–322. doi: [10.2307/2346910](https://doi.org/10.2307/2346910). Harvey, A. C. (1993). *Time Series Models*. 2nd Edition. Harvester Wheatsheaf. Sections 3.3 and 4.4. Harvey, A. C. and McKenzie, C. R. (1982). Algorithm AS 182: An algorithm for finite sample prediction from ARIMA processes. *Applied Statistics*, **31**, 180–187. doi: [10.2307/2347987](https://doi.org/10.2307/2347987). Jones, R. H. (1980). Maximum likelihood fitting of ARMA models to time series with missing observations. *Technometrics*, **22**, 389–395. doi: [10.2307/1268324](https://doi.org/10.2307/1268324). 
### See Also `<arima>`, `<ar>`, `<tsdiag>` ### Examples ``` ## Not run: arima0(lh, order = c(1,0,0)) arima0(lh, order = c(3,0,0)) arima0(lh, order = c(1,0,1)) predict(arima0(lh, order = c(3,0,0)), n.ahead = 12) arima0(lh, order = c(3,0,0), method = "CSS") # for a model with as few years as this, we want full ML (fit <- arima0(USAccDeaths, order = c(0,1,1), seasonal = list(order=c(0,1,1)), delta = -1)) predict(fit, n.ahead = 6) arima0(LakeHuron, order = c(2,0,0), xreg = time(LakeHuron)-1920) ## Not run: ## presidents contains NAs ## graphs in example(acf) suggest order 1 or 3 (fit1 <- arima0(presidents, c(1, 0, 0), delta = -1)) # avoid warning tsdiag(fit1) (fit3 <- arima0(presidents, c(3, 0, 0), delta = -1)) # smaller AIC tsdiag(fit3) ## End(Not run) ```
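As a supplementary check of the sign convention stated in the ‘Details’ section (simulated data, so the estimates only approximate the true values):

```
set.seed(42)
y <- arima.sim(list(ar = 0.7, ma = 0.4), n = 500)
arima0(y, order = c(1, 0, 1))$coef  # ar1 near +0.7, ma1 near +0.4
```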
r None `IQR` The Interquartile Range ------------------------------ ### Description Computes the interquartile range of the `x` values. ### Usage ``` IQR(x, na.rm = FALSE, type = 7) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector. | | `na.rm` | logical. Should missing values be removed? | | `type` | an integer selecting one of the many quantile algorithms, see `<quantile>`. | ### Details Note that this function computes the quartiles using the `<quantile>` function rather than following Tukey's recommendations, i.e., `IQR(x) = quantile(x, 3/4) - quantile(x, 1/4)`. For normally *N(m,1)* distributed *X*, the expected value of `IQR(X)` is `2*qnorm(3/4) = 1.3490`, i.e., for a normal-consistent estimate of the standard deviation, use `IQR(x) / 1.349`. ### References Tukey, J. W. (1977). *Exploratory Data Analysis.* Reading: Addison-Wesley. ### See Also `<fivenum>`, `<mad>` which is more robust, `[range](../../base/html/range)`, `<quantile>`. ### Examples ``` IQR(rivers) ``` r None `Multinom` The Multinomial Distribution ---------------------------------------- ### Description Generate multinomially distributed random number vectors and compute multinomial probabilities. ### Usage ``` rmultinom(n, size, prob) dmultinom(x, size = NULL, prob, log = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | vector of length *K* of integers in `0:size`. | | `n` | number of random vectors to draw. | | `size` | integer, say *N*, specifying the total number of objects that are put into *K* boxes in the typical multinomial experiment. For `dmultinom`, it defaults to `sum(x)`. | | `prob` | numeric non-negative vector of length *K*, specifying the probability for the *K* classes; is internally normalized to sum 1. Infinite and missing values are not allowed. | | `log` | logical; if `TRUE`, log probabilities are computed. | ### Details If `x` is a *K*-component vector, `dmultinom(x, prob)` is the probability *P(X[1]=x[1], …, X[K]=x[K]) = C \* prod(j=1, …, K) p[j]^x[j]* where *C* is the ‘multinomial coefficient’ *C = N! / (x[1]! \* … \* x[K]!)* and *N = sum(j=1, …, K) x[j]*. By definition, each component *X[j]* is binomially distributed as `Bin(size, prob[j])` for *j = 1, …, K*. The `rmultinom()` algorithm draws binomials *X[j]* from *Bin(n[j], P[j])* sequentially, where *n[1] = N* (N := `size`), *P[1] = p[1]* (*p* is `prob` scaled to sum 1), and for *j ≥ 2*, recursively, *n[j] = N - sum(k=1, …, j-1) X[k]* and *P[j] = p[j] / (1 - sum(p[1:(j-1)]))*. ### Value For `rmultinom()`, an integer *K x n* matrix where each column is a random vector generated according to the desired multinomial law, and hence summing to `size`. Whereas the *transposed* result would seem more natural at first, the returned matrix is more efficient because of columnwise storage. ### Note `dmultinom` is currently *not vectorized* at all and has no C interface (API); this may be amended in the future. ### See Also [Distributions](distributions) for standard distributions, including `[dbinom](family)` which is a special case conceptually.
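The sequential-binomial construction described in the ‘Details’ section can be sketched in plain **R** (a didactic reimplementation of one draw, not the internal C code; it will not reproduce `rmultinom()`'s random stream):

```
rmultinom1 <- function(size, prob) {
  p <- prob / sum(prob)        # normalize, as described above
  K <- length(p)
  x <- integer(K)
  left <- size                 # objects not yet placed in a box
  for (j in seq_len(K - 1)) {  # X[j] ~ Bin(n[j], p[j] / sum(p[j:K]))
    x[j] <- rbinom(1, left, p[j] / sum(p[j:K]))
    left <- left - x[j]
  }
  x[K] <- left                 # the last box takes the remainder
  x
}
set.seed(1)
rmultinom1(12, c(0.1, 0.2, 0.8))  # one vector summing to 12
```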
### Examples ``` rmultinom(10, size = 12, prob = c(0.1,0.2,0.8)) pr <- c(1,3,6,10) # normalization not necessary for generation rmultinom(10, 20, prob = pr) ## all possible outcomes of Multinom(N = 3, K = 3) X <- t(as.matrix(expand.grid(0:3, 0:3))); X <- X[, colSums(X) <= 3] X <- rbind(X, 3:3 - colSums(X)); dimnames(X) <- list(letters[1:3], NULL) X round(apply(X, 2, function(x) dmultinom(x, prob = c(1,2,5))), 3) ``` r None `loadings` Print Loadings in Factor Analysis --------------------------------------------- ### Description Extract or print loadings in factor analysis (or principal components analysis). ### Usage ``` loadings(x, ...) ## S3 method for class 'loadings' print(x, digits = 3, cutoff = 0.1, sort = FALSE, ...) ## S3 method for class 'factanal' print(x, digits = 3, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `"<factanal>"` or `"<princomp>"` or the `loadings` component of such an object. | | `digits` | number of decimal places to use in printing uniquenesses and loadings. | | `cutoff` | loadings smaller than this (in absolute value) are suppressed. | | `sort` | logical. If true, the variables are sorted by their importance on each factor. Each variable with any loading larger than 0.5 (in modulus) is assigned to the factor with the largest loading, and the variables are printed in the order of the factor they are assigned to, then those unassigned. | | `...` | further arguments for other methods, ignored for `loadings`. | ### Details ‘Loadings’ is a term from *factor analysis*, but because factor analysis and principal component analysis (PCA) are often conflated in the social science literature, it was used for PCA by SPSS and hence by `<princomp>` in S-PLUS to help SPSS users. Small loadings are conventionally not printed (replaced by spaces), to draw the eye to the pattern of the larger loadings. The `print` method for class `"<factanal>"` calls the `"loadings"` method to print the loadings, and so passes down arguments such as `cutoff` and `sort`. The signs of the loadings vectors are arbitrary for both factor analysis and PCA. ### Note There are other functions called `loadings` in contributed packages which are S3 or S4 generic: the `...` argument is to make it easier for this one to become a default method. ### See Also `<factanal>`, `<princomp>` r None `stepfun` Step Functions - Creation and Class ---------------------------------------------- ### Description Given the vectors *(x[1], …, x[n])* and *(y[0], y[1], …, y[n])* (one value more!), `stepfun(x, y, ...)` returns an interpolating ‘step’ function, say `fn`. I.e., *fn(t) = c[i]* (constant) for *t in (x[i], x[i+1])* and at the abscissa values, if (by default) `right = FALSE`, *fn(x[i]) = y[i]* and for `right = TRUE`, *fn(x[i]) = y[i-1]*, for *i=1, …, n*. The value of the constant *c[i]* above depends on the ‘continuity’ parameter `f`. For the default, `right = FALSE, f = 0`, `fn` is a *cadlag* function, i.e., continuous from the right, limits from the left, so that the function is piecewise constant on intervals that include their *left* endpoint. In general, *c[i]* is interpolated in between the neighbouring *y* values, *c[i] = (1-f)\*y[i] + f\*y[i+1]*. Therefore, for non-0 values of `f`, `fn` may no longer be a proper step function, since it can be discontinuous from both sides, unless `right = TRUE, f = 1` which is left-continuous (i.e., constant pieces contain their right endpoint).
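A two-line sketch of the `right`/`f` conventions just described (knots and levels chosen arbitrarily):

```
fn0 <- stepfun(1:2, c(0, 1, 2))                # right = FALSE, f = 0: cadlag
fn1 <- stepfun(1:2, c(0, 1, 2), right = TRUE)  # f = 1: left-continuous
fn0(1)  # 1 : at a knot, the value from the right
fn1(1)  # 0 : at a knot, the value from the left
```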
### Usage ``` stepfun(x, y, f = as.numeric(right), ties = "ordered", right = FALSE) is.stepfun(x) knots(Fn, ...) as.stepfun(x, ...) ## S3 method for class 'stepfun' print(x, digits = getOption("digits") - 2, ...) ## S3 method for class 'stepfun' summary(object, ...) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector giving the knots or jump locations of the step function for `stepfun()`. For the other functions, `x` is as `object` below. | | `y` | numeric vector one longer than `x`, giving the heights of the function values *between* the x values. | | `f` | a number between 0 and 1, indicating how interpolation outside the given x values should happen. See `<approxfun>`. | | `ties` | Handling of tied `x` values. Either a function or the string `"ordered"`. See `<approxfun>`. | | `right` | logical, indicating if the intervals should be closed on the right (and open on the left) or vice versa. | | `Fn, object` | an **R** object inheriting from `"stepfun"`. | | `digits` | number of significant digits to use, see `[print](../../base/html/print)`. | | `...` | potentially further arguments (required by the generic). | ### Value A function of class `"stepfun"`, say `fn`. There are methods available for summarizing (`"summary(.)"`), representing (`"print(.)"`) and plotting (`"plot(.)"`, see `<plot.stepfun>`) `"stepfun"` objects. The `[environment](../../base/html/environment)` of `fn` contains all the information needed: | | | | --- | --- | | `"x","y"` | the original arguments | | `"n"` | number of knots (x values) | | `"f"` | continuity parameter | | `"yleft", "yright"` | the function values *outside* the knots | | `"method"` | (always `== "constant"`, from `<approxfun>(.)`). | The knots are also available via `[knots](stepfun)(fn)`. ### Note The objects of class `"stepfun"` are not intended to be used for permanent storage and may change structure between versions of **R** (and did at **R** 3.0.0). They can usually be re-created by ``` eval(attr(old_obj, "call"), environment(old_obj)) ``` since the data used is stored as part of the object's environment. ### Author(s) Martin Maechler, [[email protected]](mailto:[email protected]) with some basic code from Thomas Lumley. ### See Also `<ecdf>` for empirical distribution functions as special step functions and `<plot.stepfun>` for *plotting* step functions. `<approxfun>` and `<splinefun>`. ### Examples ``` y0 <- c(1., 2., 4., 3.) sfun0 <- stepfun(1:3, y0, f = 0) sfun.2 <- stepfun(1:3, y0, f = 0.2) sfun1 <- stepfun(1:3, y0, f = 1) sfun1c <- stepfun(1:3, y0, right = TRUE) # hence f=1 sfun0 summary(sfun0) summary(sfun.2) ## look at the internal structure: unclass(sfun0) ls(envir = environment(sfun0)) x0 <- seq(0.5, 3.5, by = 0.25) rbind(x = x0, f.f0 = sfun0(x0), f.f02 = sfun.2(x0), f.f1 = sfun1(x0), f.f1c = sfun1c(x0)) ## Identities : stopifnot(identical(y0[-1], sfun0(1:3)), # right = FALSE identical(y0[-4], sfun1c(1:3))) # right = TRUE ``` r None `weighted.residuals` Compute Weighted Residuals ------------------------------------------------ ### Description Computes weighted residuals from a linear model fit. ### Usage ``` weighted.residuals(obj, drop0 = TRUE) ``` ### Arguments | | | | --- | --- | | `obj` | **R** object, typically of class `<lm>` or `<glm>`. | | `drop0` | logical. If `TRUE`, drop all cases with `<weights> == 0`. | ### Details Weighted residuals are based on the deviance residuals, which for a `<lm>` fit are the raw residuals *r[i]* multiplied by *w[i]^0.5*, where *w[i]* are the `weights` as specified in `<lm>`'s call.
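For example (a small sketch; any strictly positive weights would do):

```
## deviance residuals of a weighted lm fit = raw residuals * sqrt(w)
w <- seq_len(nrow(cars))
fit <- lm(dist ~ speed, data = cars, weights = w)
all.equal(weighted.residuals(fit), residuals(fit) * sqrt(w))  # TRUE
```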
Dropping cases with weights zero is compatible with `[influence](lm.influence)` and related functions. ### Value Numeric vector of length *n'*, where *n'* is the number of non-zero weights (`drop0 = TRUE`) or the number of observations otherwise. ### See Also `<residuals>`, `<lm.influence>`, etc. ### Examples ``` ## following on from example(lm) all.equal(weighted.residuals(lm.D9), residuals(lm.D9)) x <- 1:10 w <- 0:9 y <- rnorm(x) weighted.residuals(lmxy <- lm(y ~ x, weights = w)) weighted.residuals(lmxy, drop0 = FALSE) ``` r None `loess.control` Set Parameters for Loess ----------------------------------------- ### Description Set control parameters for `loess` fits. ### Usage ``` loess.control(surface = c("interpolate", "direct"), statistics = c("approximate", "exact", "none"), trace.hat = c("exact", "approximate"), cell = 0.2, iterations = 4, iterTrace = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `surface` | should the fitted surface be computed exactly (`"direct"`) or via interpolation from a kd tree? Can be abbreviated. | | `statistics` | should the statistics be computed exactly, approximately or not at all? Exact computation can be very slow. Can be abbreviated. | | `trace.hat` | Only for the (default) case `(surface = "interpolate", statistics = "approximate")`: should the trace of the smoother matrix be computed exactly or approximately? It is recommended to use the approximation for more than about 1000 data points. Can be abbreviated. | | `cell` | if interpolation is used this controls the accuracy of the approximation via the maximum number of points in a cell in the kd tree. Cells with more than `floor(n*span*cell)` points are subdivided. | | `iterations` | the number of iterations used in robust fitting, i.e. only if `family` is `"symmetric"`. | | `iterTrace` | logical (or integer) determining if tracing information during the robust iterations (`iterations` >= 2) is produced. | | `...` | further arguments which are ignored. | ### Value A list with components | | | | --- | --- | | `surface` | | | `statistics` | | | `trace.hat` | | | `cell` | | | `iterations` | | | `iterTrace` | | with meanings as explained under ‘Arguments’. ### See Also `<loess>` r None `spec.pgram` Estimate Spectral Density of a Time Series by a Smoothed Periodogram ---------------------------------------------------------------------------------- ### Description `spec.pgram` calculates the periodogram using a fast Fourier transform, and optionally smooths the result with a series of modified Daniell smoothers (moving averages giving half weight to the end values). ### Usage ``` spec.pgram(x, spans = NULL, kernel, taper = 0.1, pad = 0, fast = TRUE, demean = FALSE, detrend = TRUE, plot = TRUE, na.action = na.fail, ...) ``` ### Arguments | | | | --- | --- | | `x` | univariate or multivariate time series. | | `spans` | vector of odd integers giving the widths of modified Daniell smoothers to be used to smooth the periodogram. | | `kernel` | alternatively, a kernel smoother of class `"tskernel"`. | | `taper` | specifies the proportion of data to taper. A split cosine bell taper is applied to this proportion of the data at the beginning and end of the series. | | `pad` | proportion of data to pad. Zeros are added to the end of the series to increase its length by the proportion `pad`. | | `fast` | logical; if `TRUE`, pad the series to a highly composite length. | | `demean` | logical. If `TRUE`, subtract the mean of the series. | | `detrend` | logical.
If `TRUE`, remove a linear trend from the series. This will also remove the mean. | | `plot` | plot the periodogram? | | `na.action` | `NA` action function. | | `...` | graphical arguments passed to `plot.spec`. | ### Details The raw periodogram is not a consistent estimator of the spectral density, but adjacent values are asymptotically independent. Hence a consistent estimator can be derived by smoothing the raw periodogram, assuming that the spectral density is smooth. The series will be automatically padded with zeros until the series length is a highly composite number in order to help the Fast Fourier Transform. This is controlled by the `fast` and not the `pad` argument. The periodogram at zero is in theory zero as the mean of the series is removed (but this may be affected by tapering): it is replaced by an interpolation of adjacent values during smoothing, and no value is returned for that frequency. ### Value A list object of class `"spec"` (see `<spectrum>`) with the following additional components: | | | | --- | --- | | `kernel` | The `kernel` argument, or the kernel constructed from `spans`. | | `df` | The distribution of the spectral density estimate can be approximated by a (scaled) chi square distribution with `df` degrees of freedom. | | `bandwidth` | The equivalent bandwidth of the kernel smoother as defined by Bloomfield (1976, page 201). | | `taper` | The value of the `taper` argument. | | `pad` | The value of the `pad` argument. | | `detrend` | The value of the `detrend` argument. | | `demean` | The value of the `demean` argument. | The result is returned invisibly if `plot` is true. ### Author(s) Originally Martyn Plummer; kernel smoothing by Adrian Trapletti, synthesis by B.D. Ripley ### References Bloomfield, P. (1976) *Fourier Analysis of Time Series: An Introduction.* Wiley. Brockwell, P.J. and Davis, R.A. (1991) *Time Series: Theory and Methods.* Second edition. Springer. Venables, W.N. and Ripley, B.D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer. (Especially pp. 392–7.) ### See Also `<spectrum>`, `<spec.taper>`, `<plot.spec>`, `<fft>` ### Examples ``` require(graphics) ## Examples from Venables & Ripley spectrum(ldeaths) spectrum(ldeaths, spans = c(3,5)) spectrum(ldeaths, spans = c(5,7)) spectrum(mdeaths, spans = c(3,3)) spectrum(fdeaths, spans = c(3,3)) ## bivariate example mfdeaths.spc <- spec.pgram(ts.union(mdeaths, fdeaths), spans = c(3,3)) # plots marginal spectra: now plot coherency and phase plot(mfdeaths.spc, plot.type = "coherency") plot(mfdeaths.spc, plot.type = "phase") ## now impose a lack of alignment mfdeaths.spc <- spec.pgram(ts.intersect(mdeaths, lag(fdeaths, 4)), spans = c(3,3), plot = FALSE) plot(mfdeaths.spc, plot.type = "coherency") plot(mfdeaths.spc, plot.type = "phase") stocks.spc <- spectrum(EuStockMarkets, kernel("daniell", c(30,50)), plot = FALSE) plot(stocks.spc, plot.type = "marginal") # the default type plot(stocks.spc, plot.type = "coherency") plot(stocks.spc, plot.type = "phase") sales.spc <- spectrum(ts.union(BJsales, BJsales.lead), kernel("modified.daniell", c(5,7))) plot(sales.spc, plot.type = "coherency") plot(sales.spc, plot.type = "phase") ``` r None `mcnemar.test` McNemar's Chi-squared Test for Count Data --------------------------------------------------------- ### Description Performs McNemar's chi-squared test for symmetry of rows and columns in a two-dimensional contingency table. 
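Before the formal usage, a worked sketch of the 2-by-2 case: with discordant cell counts *b = x[1,2]* and *c = x[2,1]*, the continuity-corrected statistic is *(|b - c| - 1)^2 / (b + c)* (counts taken from the example further below):

```
tab <- matrix(c(794, 86, 150, 570), nrow = 2)
b <- tab[1, 2]; cc <- tab[2, 1]  # the two discordant cell counts
(abs(b - cc) - 1)^2 / (b + cc)   # equals mcnemar.test(tab)$statistic
```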
### Usage ``` mcnemar.test(x, y = NULL, correct = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | either a two-dimensional contingency table in matrix form, or a factor object. | | `y` | a factor object; ignored if `x` is a matrix. | | `correct` | a logical indicating whether to apply continuity correction when computing the test statistic. | ### Details The null is that the probabilities of being classified into cells `[i,j]` and `[j,i]` are the same. If `x` is a matrix, it is taken as a two-dimensional contingency table, and hence its entries should be nonnegative integers. Otherwise, both `x` and `y` must be vectors or factors of the same length. Incomplete cases are removed, vectors are coerced into factors, and the contingency table is computed from these. Continuity correction is only used in the 2-by-2 case if `correct` is `TRUE`. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of McNemar's statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | a character string indicating the type of test performed, and whether continuity correction was used. | | `data.name` | a character string giving the name(s) of the data. | ### References Alan Agresti (1990). *Categorical data analysis*. New York: Wiley. Pages 350–354. ### Examples ``` ## Agresti (1990), p. 350. ## Presidential Approval Ratings. ## Approval of the President's performance in office in two surveys, ## one month apart, for a random sample of 1600 voting-age Americans. Performance <- matrix(c(794, 86, 150, 570), nrow = 2, dimnames = list("1st Survey" = c("Approve", "Disapprove"), "2nd Survey" = c("Approve", "Disapprove"))) Performance mcnemar.test(Performance) ## => significant change (in fact, drop) in approval ratings ``` r None `summary.manova` Summary Method for Multivariate Analysis of Variance ---------------------------------------------------------------------- ### Description A `summary` method for class `"manova"`. ### Usage ``` ## S3 method for class 'manova' summary(object, test = c("Pillai", "Wilks", "Hotelling-Lawley", "Roy"), intercept = FALSE, tol = 1e-7, ...) ``` ### Arguments | | | | --- | --- | | `object` | An object of class `"manova"` or an `aov` object with multiple responses. | | `test` | The name of the test statistic to be used. Partial matching is used so the name can be abbreviated. | | `intercept` | logical. If `TRUE`, the intercept term is included in the table. | | `tol` | tolerance to be used in deciding if the residuals are rank-deficient: see `[qr](../../matrix/html/qr-methods)`. | | `...` | further arguments passed to or from other methods. | ### Details The `summary.manova` method uses a multivariate test statistic for the summary table. Wilks' statistic is most popular in the literature, but the default Pillai–Bartlett statistic is recommended by Hand and Taylor (1987). The table gives a transformation of the test statistic which has approximately an F distribution. The approximations used follow S-PLUS and SAS (the latter apart from some cases of the Hotelling–Lawley statistic), but many other distributional approximations exist: see Anderson (1984) and Krzanowski and Marriott (1994) for further references. All four approximate F statistics are the same when the term being tested has one degree of freedom, but in other cases that for the Roy statistic is an upper bound. 
The tolerance `tol` is applied to the QR decomposition of the residual correlation matrix (unless some response has essentially zero residuals, when it is unscaled). Thus the default value guards against very highly correlated responses: it can be reduced but doing so will allow rather inaccurate results and it will normally be better to transform the responses to remove the high correlation. ### Value An object of class `"summary.manova"`. If the residual degrees of freedom are positive, this is a list with components | | | | --- | --- | | `row.names` | The names of the terms, the row names of the `stats` table if present. | | `SS` | A named list of sums of squares and product matrices. | | `Eigenvalues` | A matrix of eigenvalues. | | `stats` | A matrix of the statistics, approximate F value, degrees of freedom and P value. | otherwise components `row.names`, `SS` and `Df` (degrees of freedom) for the terms (and not the residuals). ### References Anderson, T. W. (1984) *An Introduction to Multivariate Statistical Analysis.* Wiley. Hand, D. J. and Taylor, C. C. (1987) *Multivariate Analysis of Variance and Repeated Measures.* Chapman and Hall. Krzanowski, W. J. (1988) *Principles of Multivariate Analysis. A User's Perspective.* Oxford. Krzanowski, W. J. and Marriott, F. H. C. (1994) *Multivariate Analysis. Part I: Distributions, Ordination and Inference.* Edward Arnold. ### See Also `<manova>`, `<aov>` ### Examples ``` ## Example on producing plastic film from Krzanowski (1988, p. 381) tear <- c(6.5, 6.2, 5.8, 6.5, 6.5, 6.9, 7.2, 6.9, 6.1, 6.3, 6.7, 6.6, 7.2, 7.1, 6.8, 7.1, 7.0, 7.2, 7.5, 7.6) gloss <- c(9.5, 9.9, 9.6, 9.6, 9.2, 9.1, 10.0, 9.9, 9.5, 9.4, 9.1, 9.3, 8.3, 8.4, 8.5, 9.2, 8.8, 9.7, 10.1, 9.2) opacity <- c(4.4, 6.4, 3.0, 4.1, 0.8, 5.7, 2.0, 3.9, 1.9, 5.7, 2.8, 4.1, 3.8, 1.6, 3.4, 8.4, 5.2, 6.9, 2.7, 1.9) Y <- cbind(tear, gloss, opacity) rate <- gl(2,10, labels = c("Low", "High")) additive <- gl(2, 5, length = 20, labels = c("Low", "High")) fit <- manova(Y ~ rate * additive) summary.aov(fit) # univariate ANOVA tables summary(fit, test = "Wilks") # ANOVA table of Wilks' lambda summary(fit) # same F statistics as single-df terms ```
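A short follow-up sketch showing how to extract the documented components from the summary object (continuing the fit above):

```
sm <- summary(fit, test = "Wilks")
sm$stats      # statistic, approximate F, degrees of freedom and p-values
names(sm$SS)  # sums-of-squares-and-products matrices, one per term
```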
r None `anova.glm` Analysis of Deviance for Generalized Linear Model Fits ------------------------------------------------------------------- ### Description Compute an analysis of deviance table for one or more generalized linear model fits. ### Usage ``` ## S3 method for class 'glm' anova(object, ..., dispersion = NULL, test = NULL) ``` ### Arguments | | | | --- | --- | | `object, ...` | objects of class `glm`, typically the result of a call to `<glm>`, or a list of `objects` for the `"glmlist"` method. | | `dispersion` | the dispersion parameter for the fitting family. By default it is obtained from the object(s). | | `test` | a character string, (partially) matching one of `"Chisq"`, `"LRT"`, `"Rao"`, `"F"` or `"Cp"`. See `<stat.anova>`. | ### Details Specifying a single object gives a sequential analysis of deviance table for that fit. That is, the reductions in the residual deviance as each term of the formula is added in turn are given as the rows of a table, plus the residual deviances themselves. If more than one object is specified, the table has a row for the residual degrees of freedom and deviance for each model. For all but the first model, the change in degrees of freedom and deviance is also given. (This only makes statistical sense if the models are nested.) It is conventional to list the models from smallest to largest, but this is up to the user. The table will optionally contain test statistics (and P values) comparing the reduction in deviance for the row to the residuals. For models with known dispersion (e.g., binomial and Poisson fits) the chi-squared test is most appropriate, and for those with dispersion estimated by moments (e.g., `gaussian`, `quasibinomial` and `quasipoisson` fits) the F test is most appropriate. Mallows' *Cp* statistic is the residual deviance plus twice the estimate of *σ^2* times the residual degrees of freedom, which is closely related to AIC (and a multiple of it if the dispersion is known). You can also choose `"LRT"` and `"Rao"` for likelihood ratio tests and Rao's efficient score test. The former is synonymous with `"Chisq"` (although both have an asymptotic chi-square distribution). The dispersion estimate will be taken from the largest model, using the value returned by `<summary.glm>`. As this will in most cases use a Chisquared-based estimate, the F tests are not based on the residual deviance in the analysis of deviance table shown. ### Value An object of class `"anova"` inheriting from class `"data.frame"`. ### Warning The comparison between two or more models will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values and **R**'s default of `na.action = na.omit` is used, and `anova` will detect this with an error. ### References Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<glm>`, `<anova>`. `[drop1](add1)` for so-called ‘type II’ anova where each term is dropped one at a time respecting their hierarchy.
### Examples ``` ## --- Continuing the Example from '?glm': anova(glm.D93) anova(glm.D93, test = "Cp") anova(glm.D93, test = "Chisq") glm.D93a <- update(glm.D93, ~treatment*outcome) # equivalent to Pearson Chi-square anova(glm.D93, glm.D93a, test = "Rao") ``` r None `predict.glm` Predict Method for GLM Fits ------------------------------------------ ### Description Obtains predictions and optionally estimates standard errors of those predictions from a fitted generalized linear model object. ### Usage ``` ## S3 method for class 'glm' predict(object, newdata = NULL, type = c("link", "response", "terms"), se.fit = FALSE, dispersion = NULL, terms = NULL, na.action = na.pass, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fitted object of class inheriting from `"glm"`. | | `newdata` | optionally, a data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. | | `type` | the type of prediction required. The default is on the scale of the linear predictors; the alternative `"response"` is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and `type = "response"` gives the predicted probabilities. The `"terms"` option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale. The value of this argument can be abbreviated. | | `se.fit` | logical switch indicating if standard errors are required. | | `dispersion` | the dispersion of the GLM fit to be assumed in computing the standard errors. If omitted, that returned by `summary` applied to the object is used. | | `terms` | with `type = "terms"` by default all terms are returned. A character vector specifies which terms are to be returned. | | `na.action` | function determining what should be done with missing values in `newdata`. The default is to predict `NA`. | | `...` | further arguments passed to or from other methods. | ### Details If `newdata` is omitted the predictions are based on the data used for the fit. In that case how cases with missing values in the original fit are handled is determined by the `na.action` argument of that fit. If `na.action = na.omit` omitted cases will not appear in the residuals, whereas if `na.action = na.exclude` they will appear (in predictions and standard errors), with residual value `NA`. See also `[napredict](nafns)`. ### Value If `se.fit = FALSE`, a vector or matrix of predictions. For `type = "terms"` this is a matrix with a column per term, and may have an attribute `"constant"`. If `se.fit = TRUE`, a list with components | | | | --- | --- | | `fit` | Predictions, as for `se.fit = FALSE`. | | `se.fit` | Estimated standard errors. | | `residual.scale` | A scalar giving the square root of the dispersion used in computing the standard errors. | ### Note Variables are first looked for in `newdata` and then searched for in the usual way (which will include the environment of the formula used in the fit). A warning will be given if the variables found are not of the same length as those in `newdata` if it was supplied. ### See Also `<glm>`, `[SafePrediction](makepredictcall)` ### Examples ``` require(graphics) ## example from Venables and Ripley (2002, pp. 190-2.)
ldose <- rep(0:5, 2) numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16) sex <- factor(rep(c("M", "F"), c(6, 6))) SF <- cbind(numdead, numalive = 20-numdead) budworm.lg <- glm(SF ~ sex*ldose, family = binomial) summary(budworm.lg) plot(c(1,32), c(0,1), type = "n", xlab = "dose", ylab = "prob", log = "x") text(2^ldose, numdead/20, as.character(sex)) ld <- seq(0, 5, 0.1) lines(2^ld, predict(budworm.lg, data.frame(ldose = ld, sex = factor(rep("M", length(ld)), levels = levels(sex))), type = "response")) lines(2^ld, predict(budworm.lg, data.frame(ldose = ld, sex = factor(rep("F", length(ld)), levels = levels(sex))), type = "response")) ``` r None `lag` Lag a Time Series ------------------------ ### Description Compute a lagged version of a time series, shifting the time base back by a given number of observations. `lag` is a generic function; this page documents its default method. ### Usage ``` lag(x, ...) ## Default S3 method: lag(x, k = 1, ...) ``` ### Arguments | | | | --- | --- | | `x` | A vector or matrix or univariate or multivariate time series | | `k` | The number of lags (in units of observations). | | `...` | further arguments to be passed to or from methods. | ### Details Vector or matrix arguments `x` are given a `tsp` attribute *via* `[hasTsp](tsp)`. ### Value A time series object with the same class as `x`. ### Note Note the sign of `k`: a series lagged by a positive `k` starts *earlier*. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[diff](../../base/html/diff)`, `[deltat](time)` ### Examples ``` lag(ldeaths, 12) # starts one year earlier ``` r None `as.hclust` Convert Objects to Class hclust -------------------------------------------- ### Description Converts objects from other hierarchical clustering functions to class `"hclust"`. ### Usage ``` as.hclust(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | Hierarchical clustering object | | `...` | further arguments passed to or from other methods. | ### Details Currently there is only support for converting objects of class `"twins"` as produced by the functions `diana` and `agnes` from the package [cluster](https://CRAN.R-project.org/package=cluster). The default method throws an error unless passed an `"hclust"` object. ### Value An object of class `"hclust"`. ### See Also `<hclust>`, and from package [cluster](https://CRAN.R-project.org/package=cluster), `[diana](../../cluster/html/diana)` and `[agnes](../../cluster/html/agnes)` ### Examples ``` x <- matrix(rnorm(30), ncol = 3) hc <- hclust(dist(x), method = "complete") if(require("cluster", quietly = TRUE)) {# is a recommended package ag <- agnes(x, method = "complete") hcag <- as.hclust(ag) ## The dendrograms order slightly differently: op <- par(mfrow = c(1,2)) plot(hc) ; mtext("hclust", side = 1) plot(hcag); mtext("agnes", side = 1) detach("package:cluster") } ``` r None `pairwise.wilcox.test` Pairwise Wilcoxon Rank Sum Tests -------------------------------------------------------- ### Description Calculate pairwise comparisons between group levels with corrections for multiple testing. ### Usage ``` pairwise.wilcox.test(x, g, p.adjust.method = p.adjust.methods, paired = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | response vector. | | `g` | grouping vector or factor. | | `p.adjust.method` | method for adjusting p values (see `<p.adjust>`). Can be abbreviated. | | `paired` | a logical indicating whether you want a paired test. 
| | `...` | additional arguments to pass to `<wilcox.test>`. | ### Details Extra arguments that are passed on to `wilcox.test` may or may not be sensible in this context. In particular, only the lower triangle of the matrix of possible comparisons is being calculated, so setting `alternative` to anything other than `"two.sided"` requires that the levels of `g` are ordered sensibly. ### Value Object of class `"pairwise.htest"` ### See Also `<wilcox.test>`, `<p.adjust>` ### Examples ``` attach(airquality) Month <- factor(Month, labels = month.abb[5:9]) ## These give warnings because of ties : pairwise.wilcox.test(Ozone, Month) pairwise.wilcox.test(Ozone, Month, p.adjust.method = "bonf") detach() ``` r None `update.formula` Model Updating -------------------------------- ### Description `update.formula` is used to update model formulae. This typically involves adding or dropping terms, but updates can be more general. ### Usage ``` ## S3 method for class 'formula' update(old, new, ...) ``` ### Arguments | | | | --- | --- | | `old` | a model formula to be updated. | | `new` | a formula giving a template which specifies how to update. | | `...` | further arguments passed to or from other methods. | ### Details Either or both of `old` and `new` can be objects such as length-one character vectors which can be coerced to a formula via `[as.formula](formula)`. The function works by first identifying the *left-hand side* and *right-hand side* of the `old` formula. It then examines the `new` formula and substitutes the *lhs* of the `old` formula for any occurrence of ‘.’ on the left of `new`, and substitutes the *rhs* of the `old` formula for any occurrence of ‘.’ on the right of `new`. The result is then simplified *via* `<terms.formula>(simplify = TRUE)`. ### Value The updated formula is returned. The environment of the result is that of `old`. ### See Also `<terms>`, `<model.matrix>`. ### Examples ``` update(y ~ x, ~ . + x2) #> y ~ x + x2 update(y ~ x, log(.) ~ . ) #> log(y) ~ x update(. ~ u+v, res ~ . ) #> res ~ u + v ``` r None `ppoints` Ordinates for Probability Plotting --------------------------------------------- ### Description Generates the sequence of probability points `(1:m - a)/(m + (1-a)-a)` where `m` is either `n`, if `length(n)==1`, or `length(n)`. ### Usage ``` ppoints(n, a = if(n <= 10) 3/8 else 1/2) ``` ### Arguments | | | | --- | --- | | `n` | either the number of points generated or a vector of observations. | | `a` | the offset fraction to be used; typically in *(0,1)*. | ### Details If *0 < a < 1*, the resulting values are within *(0,1)* (excluding boundaries). In any case, the resulting sequence is symmetric in *[0,1]*, i.e., `p + rev(p) == 1`. `ppoints()` is used in `qqplot` and `qqnorm` to generate the set of probabilities at which to evaluate the inverse distribution. The choice of `a` follows the documentation of the function of the same name in Becker *et al* (1988), and appears to have been motivated by results from Blom (1958) on approximations to expected normal order statistics (see also `<quantile>`). The probability points for the continuous sample quantile types 5 to 9 (see `<quantile>`) can be obtained by taking `a` as, respectively, 1/2, 0, 1, 1/3, and 3/8. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Blom, G. (1958) *Statistical Estimates and Transformed Beta Variables.* Wiley ### See Also `[qqplot](qqnorm)`, `<qqnorm>`.
### Examples ``` ppoints(4) # the same as ppoints(1:4) ppoints(10) ppoints(10, a = 1/2) ## Visualize including the fractions : require(graphics) p.ppoints <- function(n, ..., add = FALSE, col = par("col")) { pn <- ppoints(n, ...) if(add) points(pn, pn, col = col) else { tit <- match.call(); tit[[1]] <- quote(ppoints) plot(pn,pn, main = deparse(tit), col=col, xlim = 0:1, ylim = 0:1, xaxs = "i", yaxs = "i") abline(0, 1, col = adjustcolor(1, 1/4), lty = 3) } if(!add && requireNamespace("MASS", quietly = TRUE)) text(pn, pn, as.character(MASS::fractions(pn)), adj = c(0,0)-1/4, cex = 3/4, xpd = NA, col=col) abline(h = pn, v = pn, col = adjustcolor(col, 1/2), lty = 2, lwd = 1/2) } p.ppoints(4) p.ppoints(10) p.ppoints(10, a = 1/2) p.ppoints(21) p.ppoints(8) ; p.ppoints(8, a = 1/2, add=TRUE, col="tomato") ``` r None `HoltWinters` Holt-Winters Filtering ------------------------------------- ### Description Computes Holt-Winters Filtering of a given time series. Unknown parameters are determined by minimizing the squared prediction error. ### Usage ``` HoltWinters(x, alpha = NULL, beta = NULL, gamma = NULL, seasonal = c("additive", "multiplicative"), start.periods = 2, l.start = NULL, b.start = NULL, s.start = NULL, optim.start = c(alpha = 0.3, beta = 0.1, gamma = 0.1), optim.control = list()) ``` ### Arguments | | | | --- | --- | | `x` | An object of class `ts` | | `alpha` | *alpha* parameter of Holt-Winters Filter. | | `beta` | *beta* parameter of Holt-Winters Filter. If set to `FALSE`, the function will do exponential smoothing. | | `gamma` | *gamma* parameter used for the seasonal component. If set to `FALSE`, a non-seasonal model is fitted. | | `seasonal` | Character string to select an `"additive"` (the default) or `"multiplicative"` seasonal model. The first few characters are sufficient. (Only takes effect if `gamma` is non-zero). | | `start.periods` | Start periods used in the autodetection of start values. Must be at least 2. | | `l.start` | Start value for level (a[0]). | | `b.start` | Start value for trend (b[0]). | | `s.start` | Vector of start values for the seasonal component (*s\_1[0] … s\_p[0]*) | | `optim.start` | Vector with named components `alpha`, `beta`, and `gamma` containing the starting values for the optimizer. Only the values needed must be specified. Ignored in the one-parameter case. | | `optim.control` | Optional list with additional control parameters passed to `optim` if this is used. Ignored in the one-parameter case. | ### Details The additive Holt-Winters prediction function (for time series with period length p) is *Yhat[t+h] = a[t] + h \* b[t] + s[t - p + 1 + (h - 1) mod p],* where *a[t]*, *b[t]* and *s[t]* are given by *a[t] = α (Y[t] - s[t-p]) + (1-α) (a[t-1] + b[t-1])* *b[t] = β (a[t] - a[t-1]) + (1-β) b[t-1]* *s[t] = γ (Y[t] - a[t]) + (1-γ) s[t-p]* The multiplicative Holt-Winters prediction function (for time series with period length p) is *Yhat[t+h] = (a[t] + h \* b[t]) \* s[t - p + 1 + (h - 1) mod p],* where *a[t]*, *b[t]* and *s[t]* are given by *a[t] = α (Y[t] / s[t-p]) + (1-α) (a[t-1] + b[t-1])* *b[t] = β (a[t] - a[t-1]) + (1-β) b[t-1]* *s[t] = γ (Y[t] / a[t]) + (1-γ) s[t-p]* The data in `x` are required to be non-zero for a multiplicative model, but it makes most sense if they are all positive. The function tries to find the optimal values of *α* and/or *β* and/or *γ* by minimizing the squared one-step prediction error if they are `NULL` (the default). `optimize` will be used for the single-parameter case, and `optim` otherwise.
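The additive recursions above are straightforward to transcribe. Below is a minimal sketch of the filtering pass with fixed, user-supplied parameters and start values (the function name and interface are made up for illustration; unlike `HoltWinters()`, nothing is optimized here):

```
hw_additive <- function(y, p, alpha, beta, gamma, a0, b0, s0) {
  stopifnot(length(s0) == p)
  n <- length(y)
  a <- b <- numeric(n)
  s <- c(s0, numeric(n))   # s[t] holds s[t-p]; new values are stored at s[t+p]
  a.prev <- a0; b.prev <- b0
  for (t in seq_len(n)) {
    s.old <- s[t]
    a[t] <- alpha * (y[t] - s.old)  + (1 - alpha) * (a.prev + b.prev)
    b[t] <- beta  * (a[t] - a.prev) + (1 - beta)  * b.prev
    s[t + p] <- gamma * (y[t] - a[t]) + (1 - gamma) * s.old
    a.prev <- a[t]; b.prev <- b[t]
  }
  list(level = a, trend = b, seasonal = s[-seq_len(p)])
}
```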
For seasonal models, start values for `a`, `b` and `s` are inferred by performing a simple decomposition in trend and seasonal component using moving averages (see function `<decompose>`) on the `start.periods` first periods (a simple linear regression on the trend component is used for starting level and trend). For level/trend-models (no seasonal component), start values for `a` and `b` are `x[2]` and `x[2] - x[1]`, respectively. For level-only models (ordinary exponential smoothing), the start value for `a` is `x[1]`. ### Value An object of class `"HoltWinters"`, a list with components: | | | | --- | --- | | `fitted` | A multiple time series with one column for the filtered series as well as for the level, trend and seasonal components, estimated contemporaneously (that is at time t and not at the end of the series). | | `x` | The original series | | `alpha` | alpha used for filtering | | `beta` | beta used for filtering | | `gamma` | gamma used for filtering | | `coefficients` | A vector with named components `a, b, s1, ..., sp` containing the estimated values for the level, trend and seasonal components | | `seasonal` | The specified `seasonal` parameter | | `SSE` | The final sum of squared errors achieved in optimizing | | `call` | The call used | ### Author(s) David Meyer [[email protected]](mailto:[email protected]) ### References C. C. Holt (1957) Forecasting seasonals and trends by exponentially weighted moving averages, *ONR Research Memorandum, Carnegie Institute of Technology* **52**. (reprint at doi: [10.1016/j.ijforecast.2003.09.015](https://doi.org/10.1016/j.ijforecast.2003.09.015)). P. R. Winters (1960). Forecasting sales by exponentially weighted moving averages. *Management Science*, **6**, 324–342. doi: [10.1287/mnsc.6.3.324](https://doi.org/10.1287/mnsc.6.3.324). ### See Also `[predict.HoltWinters](predict.holtwinters)`, `<optim>`. ### Examples ``` require(graphics) ## Seasonal Holt-Winters (m <- HoltWinters(co2)) plot(m) plot(fitted(m)) (m <- HoltWinters(AirPassengers, seasonal = "mult")) plot(m) ## Non-Seasonal Holt-Winters x <- uspop + rnorm(uspop, sd = 5) m <- HoltWinters(x, gamma = FALSE) plot(m) ## Exponential Smoothing m2 <- HoltWinters(x, gamma = FALSE, beta = FALSE) lines(fitted(m2)[,1], col = 3) ``` r None `summary.glm` Summarizing Generalized Linear Model Fits -------------------------------------------------------- ### Description These functions are all `[methods](../../utils/html/methods)` for class `glm` or `summary.glm` objects. ### Usage ``` ## S3 method for class 'glm' summary(object, dispersion = NULL, correlation = FALSE, symbolic.cor = FALSE, ...) ## S3 method for class 'summary.glm' print(x, digits = max(3, getOption("digits") - 3), symbolic.cor = x$symbolic.cor, signif.stars = getOption("show.signif.stars"), ...) ``` ### Arguments | | | | --- | --- | | `object` | an object of class `"glm"`, usually, a result of a call to `<glm>`. | | `x` | an object of class `"summary.glm"`, usually, a result of a call to `summary.glm`. | | `dispersion` | the dispersion parameter for the family used. Either a single numerical value or `NULL` (the default), when it is inferred from `object` (see ‘Details’). | | `correlation` | logical; if `TRUE`, the correlation matrix of the estimated parameters is returned and printed. | | `digits` | the number of significant digits to use when printing. | | `symbolic.cor` | logical. If `TRUE`, print the correlations in a symbolic form (see `<symnum>`) rather than as numbers. | | `signif.stars` | logical. 
If `TRUE`, ‘significance stars’ are printed for each coefficient. | | `...` | further arguments passed to or from other methods. | ### Details `print.summary.glm` tries to be smart about formatting the coefficients, standard errors, etc. and additionally gives ‘significance stars’ if `signif.stars` is `TRUE`. The `coefficients` component of the result gives the estimated coefficients and their estimated standard errors, together with their ratio. This third column is labelled `t ratio` if the dispersion is estimated, and `z ratio` if the dispersion is known (or fixed by the family). A fourth column gives the two-tailed p-value corresponding to the t or z ratio based on a Student t or Normal reference distribution. (It is possible that the dispersion is not known and there are no residual degrees of freedom from which to estimate it. In that case the estimate is `NaN`.) Aliased coefficients are omitted in the returned object but restored by the `print` method. Correlations are printed to two decimal places (or symbolically): to see the actual correlations print `summary(object)$correlation` directly. The dispersion of a GLM is not used in the fitting process, but it is needed to find standard errors. If `dispersion` is not supplied or `NULL`, the dispersion is taken as `1` for the `binomial` and `Poisson` families, and otherwise estimated by the residual Chisquared statistic (calculated from cases with non-zero weights) divided by the residual degrees of freedom. `summary` can be used with Gaussian `glm` fits to handle the case of a linear regression with known error variance, something not handled by `<summary.lm>`. ### Value `summary.glm` returns an object of class `"summary.glm"`, a list with components | | | | --- | --- | | `call` | the component from `object`. | | `family` | the component from `object`. | | `deviance` | the component from `object`. | | `contrasts` | the component from `object`. | | `df.residual` | the component from `object`. | | `null.deviance` | the component from `object`. | | `df.null` | the component from `object`. | | `deviance.resid` | the deviance residuals: see `[residuals.glm](glm.summaries)`. | | `coefficients` | the matrix of coefficients, standard errors, z-values and p-values. Aliased coefficients are omitted. | | `aliased` | named logical vector showing if the original coefficients are aliased. | | `dispersion` | either the supplied argument or the inferred/estimated dispersion if the latter is `NULL`. | | `df` | a 3-vector of the rank of the model and the number of residual degrees of freedom, plus number of coefficients (including aliased ones). | | `cov.unscaled` | the unscaled (`dispersion = 1`) estimated covariance matrix of the estimated coefficients. | | `cov.scaled` | ditto, scaled by `dispersion`. | | `correlation` | (only if `correlation` is true.) The estimated correlations of the estimated coefficients. | | `symbolic.cor` | (only if `correlation` is true.) The value of the argument `symbolic.cor`. | ### See Also `<glm>`, `[summary](../../base/html/summary)`. ### Examples ``` ## For examples see example(glm) ```
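The moment estimate of the dispersion described above can be reproduced from the Pearson residuals. A quick check with the `?glm` example data (a sketch of the identity, not the internal code path):

```
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
fitq <- glm(counts ~ outcome + treatment, family = quasipoisson())
disp <- sum(residuals(fitq, type = "pearson")^2) / df.residual(fitq)
all.equal(disp, summary(fitq)$dispersion)   # should be TRUE
```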
r None `r2dtable` Random 2-way Tables with Given Marginals ---------------------------------------------------- ### Description Generate random 2-way tables with given marginals using Patefield's algorithm. ### Usage ``` r2dtable(n, r, c) ``` ### Arguments | | | | --- | --- | | `n` | a non-negative numeric giving the number of tables to be drawn. | | `r` | a non-negative vector of length at least 2 giving the row totals, to be coerced to `integer`. Must sum to the same as `c`. | | `c` | a non-negative vector of length at least 2 giving the column totals, to be coerced to `integer`. | ### Value A list of length `n` containing the generated tables as its components. ### References Patefield, W. M. (1981). Algorithm AS 159: An efficient method of generating r x c tables with given row and column totals. *Applied Statistics*, **30**, 91–97. doi: [10.2307/2346669](https://doi.org/10.2307/2346669). ### Examples ``` ## Fisher's Tea Drinker data. TeaTasting <- matrix(c(3, 1, 1, 3), nrow = 2, dimnames = list(Guess = c("Milk", "Tea"), Truth = c("Milk", "Tea"))) ## Simulate permutation test for independence based on the maximum ## Pearson residuals (rather than their sum). rowTotals <- rowSums(TeaTasting) colTotals <- colSums(TeaTasting) nOfCases <- sum(rowTotals) expected <- outer(rowTotals, colTotals, "*") / nOfCases maxSqResid <- function(x) max((x - expected) ^ 2 / expected) simMaxSqResid <- sapply(r2dtable(1000, rowTotals, colTotals), maxSqResid) sum(simMaxSqResid >= maxSqResid(TeaTasting)) / 1000 ## Fisher's exact test gives p = 0.4857 ... ``` r None `predict.lm` Predict method for Linear Model Fits -------------------------------------------------- ### Description Predicted values based on linear model object. ### Usage ``` ## S3 method for class 'lm' predict(object, newdata, se.fit = FALSE, scale = NULL, df = Inf, interval = c("none", "confidence", "prediction"), level = 0.95, type = c("response", "terms"), terms = NULL, na.action = na.pass, pred.var = res.var/weights, weights = 1, ...) ``` ### Arguments | | | | --- | --- | | `object` | Object of class inheriting from `"lm"` | | `newdata` | An optional data frame in which to look for variables with which to predict. If omitted, the fitted values are used. | | `se.fit` | A switch indicating if standard errors are required. | | `scale` | Scale parameter for std.err. calculation. | | `df` | Degrees of freedom for scale. | | `interval` | Type of interval calculation. Can be abbreviated. | | `level` | Tolerance/confidence level. | | `type` | Type of prediction (response or model term). Can be abbreviated. | | `terms` | If `type = "terms"`, which terms (default is all terms), a `[character](../../base/html/character)` vector. | | `na.action` | function determining what should be done with missing values in `newdata`. The default is to predict `NA`. | | `pred.var` | the variance(s) for future observations to be assumed for prediction intervals. See ‘Details’. | | `weights` | variance weights for prediction. This can be a numeric vector or a one-sided model formula. In the latter case, it is interpreted as an expression evaluated in `newdata`. | | `...` | further arguments passed to or from other methods. | ### Details `predict.lm` produces predicted values, obtained by evaluating the regression function in the frame `newdata` (which defaults to `model.frame(object)`). If the logical `se.fit` is `TRUE`, standard errors of the predictions are calculated. 
If the numeric argument `scale` is set (with optional `df`), it is used as the residual standard deviation in the computation of the standard errors, otherwise this is extracted from the model fit. Setting `interval` specifies computation of confidence or prediction (tolerance) intervals at the specified `level`, sometimes referred to as narrow vs. wide intervals. If the fit is rank-deficient, some of the columns of the design matrix will have been dropped. Prediction from such a fit only makes sense if `newdata` is contained in the same subspace as the original data. That cannot be checked accurately, so a warning is issued. If `newdata` is omitted the predictions are based on the data used for the fit. In that case how cases with missing values in the original fit are handled is determined by the `na.action` argument of that fit. If `na.action = na.omit` omitted cases will not appear in the predictions, whereas if `na.action = na.exclude` they will appear (in predictions, standard errors or interval limits), with value `NA`. See also `[napredict](nafns)`. The prediction intervals are for a single observation at each case in `newdata` (or by default, the data used for the fit) with error variance(s) `pred.var`. This can be a multiple of `res.var`, the estimated value of *σ^2*: the default is to assume that future observations have the same error variance as those used for fitting. If `weights` is supplied, the inverse of this is used as a scale factor. For a weighted fit, if the prediction is for the original data frame, `weights` defaults to the weights used for the model fit, with a warning since it might not be the intended result. If the fit was weighted and `newdata` is given, the default is to assume constant prediction variance, with a warning. ### Value `predict.lm` produces a vector of predictions or a matrix of predictions and bounds with column names `fit`, `lwr`, and `upr` if `interval` is set. For `type = "terms"` this is a matrix with a column per term and may have an attribute `"constant"`. If `se.fit` is `TRUE`, a list with the following components is returned: | | | | --- | --- | | `fit` | vector or matrix as above | | `se.fit` | standard error of predicted means | | `residual.scale` | residual standard deviations | | `df` | degrees of freedom for residual | ### Note Variables are first looked for in `newdata` and then searched for in the usual way (which will include the environment of the formula used in the fit). A warning will be given if the variables found are not of the same length as those in `newdata` if it was supplied. Notice that prediction variances and prediction intervals always refer to *future* observations, possibly corresponding to the same predictors as used for the fit. The variance of the *residuals* will be smaller. Strictly speaking, the formula used for prediction limits assumes that the degrees of freedom for the fit are the same as those for the residual variance. This may not be the case if `res.var` is not obtained from the fit. ### See Also The model fitting function `<lm>`, `<predict>`. [SafePrediction](makepredictcall) for prediction from (univariable) polynomial and spline fits.
### Examples ``` require(graphics) ## Predictions x <- rnorm(15) y <- x + rnorm(15) predict(lm(y ~ x)) new <- data.frame(x = seq(-3, 3, 0.5)) predict(lm(y ~ x), new, se.fit = TRUE) pred.w.plim <- predict(lm(y ~ x), new, interval = "prediction") pred.w.clim <- predict(lm(y ~ x), new, interval = "confidence") matplot(new$x, cbind(pred.w.clim, pred.w.plim[,-1]), lty = c(1,2,2,3,3), type = "l", ylab = "predicted y") ## Prediction intervals, special cases ## The first three of these throw warnings w <- 1 + x^2 fit <- lm(y ~ x) wfit <- lm(y ~ x, weights = w) predict(fit, interval = "prediction") predict(wfit, interval = "prediction") predict(wfit, new, interval = "prediction") predict(wfit, new, interval = "prediction", weights = (new$x)^2) predict(wfit, new, interval = "prediction", weights = ~x^2) ##-- From aov(.) example ---- predict(.. terms) npk.aov <- aov(yield ~ block + N*P*K, npk) (termL <- attr(terms(npk.aov), "term.labels")) (pt <- predict(npk.aov, type = "terms")) pt. <- predict(npk.aov, type = "terms", terms = termL[1:4]) stopifnot(all.equal(pt[,1:4], pt., tolerance = 1e-12, check.attributes = FALSE)) ``` r None `uniroot` One Dimensional Root (Zero) Finding ---------------------------------------------- ### Description The function `uniroot` searches the interval from `lower` to `upper` for a root (i.e., zero) of the function `f` with respect to its first argument. Setting `extendInt` to a non-`"no"` string means searching for the correct `interval = c(lower,upper)` if `sign(f(x))` does not satisfy the requirements at the interval end points; see the ‘Details’ section. ### Usage ``` uniroot(f, interval, ..., lower = min(interval), upper = max(interval), f.lower = f(lower, ...), f.upper = f(upper, ...), extendInt = c("no", "yes", "downX", "upX"), check.conv = FALSE, tol = .Machine$double.eps^0.25, maxiter = 1000, trace = 0) ``` ### Arguments | | | | --- | --- | | `f` | the function for which the root is sought. | | `interval` | a vector containing the end-points of the interval to be searched for the root. | | `...` | additional named or unnamed arguments to be passed to `f` | | `lower, upper` | the lower and upper end points of the interval to be searched. | | `f.lower, f.upper` | the same as `f(lower)` and `f(upper)`, respectively. Passing these values from the caller where they are often known is more economical as soon as `f()` contains non-trivial computations. | | `extendInt` | character string specifying if the interval `c(lower,upper)` should be extended or directly produce an error when `f()` does not have differing signs at the endpoints. The default, `"no"`, keeps the search interval and hence produces an error. Can be abbreviated. | | `check.conv` | logical indicating whether a convergence warning of the underlying `<uniroot>` should be caught as an error and if non-convergence in `maxiter` iterations should be an error instead of a warning. | | `tol` | the desired accuracy (convergence tolerance). | | `maxiter` | the maximum number of iterations. | | `trace` | integer number; if positive, tracing information is produced. Higher values give more details. | ### Details Note that arguments after `...` must be matched exactly. Either `interval` or both `lower` and `upper` must be specified: the upper endpoint must be strictly larger than the lower endpoint. The function values at the endpoints must be of opposite signs (or zero), for `extendInt="no"`, the default.
Otherwise, if `extendInt="yes"`, the interval is extended on both sides, in search of a sign change, i.e., until the search interval *[l,u]* satisfies *f(l) \* f(u) <= 0*. If it is *known how* *f* changes sign at the root *x0*, that is, if the function is increasing or decreasing there, `extendInt` can (and typically should) be specified as `"upX"` (for “upward crossing”) or `"downX"`, respectively. Equivalently, define *S:= +/- 1*, to require *S = sign(f(x0 + eps))* at the solution. In that case, the search interval *[l,u]* possibly is extended to be such that *S \* f(l) <= 0* and *S \* f(u) >= 0*. `uniroot()` uses Fortran subroutine ‘"zeroin"’ (from Netlib) based on algorithms given in the reference below. They assume a continuous function (which then is known to have at least one root in the interval). Convergence is declared either if `f(x) == 0` or the change in `x` for one step of the algorithm is less than `tol` (plus an allowance for representation error in `x`). If the algorithm does not converge in `maxiter` steps, a warning is printed and the current approximation is returned. `f` will be called as `f(x, ...)` for a numeric value of x. The argument passed to `f` has special semantics and used to be shared between calls. The function should not copy it. ### Value A list with at least four components: `root` and `f.root` give the location of the root and the value of the function evaluated at that point. `iter` and `estim.prec` give the number of iterations used and an approximate estimated precision for `root`. (If the root occurs at one of the endpoints, the estimated precision is `NA`.) Further components may be added in future: component `init.it` was added in **R** 3.1.0. ### Source Based on ‘zeroin.c’ in <https://www.netlib.org/c/brent.shar>. ### References Brent, R. (1973) *Algorithms for Minimization without Derivatives.* Englewood Cliffs, NJ: Prentice-Hall. ### See Also `[polyroot](../../base/html/polyroot)` for all complex roots of a polynomial; `<optimize>`, `<nlm>`. ### Examples ``` require(utils) # for str ## some platforms hit zero exactly on the first step: ## if so the estimated precision is 2/3. f <- function (x, a) x - a str(xmin <- uniroot(f, c(0, 1), tol = 0.0001, a = 1/3)) ## handheld calculator example: fixed point of cos(.): uniroot(function(x) cos(x) - x, lower = -pi, upper = pi, tol = 1e-9)$root str(uniroot(function(x) x*(x^2-1) + .5, lower = -2, upper = 2, tol = 0.0001)) str(uniroot(function(x) x*(x^2-1) + .5, lower = -2, upper = 2, tol = 1e-10)) ## Find the smallest value x for which exp(x) > 0 (numerically): r <- uniroot(function(x) 1e80*exp(x) - 1e-300, c(-1000, 0), tol = 1e-15) str(r, digits.d = 15) # around -745, depending on the platform. exp(r$root) # = 0, but not for r$root * 0.999... minexp <- r$root * (1 - 10*.Machine$double.eps) exp(minexp) # typically denormalized ##--- uniroot() with new interval extension + checking features: -------------- f1 <- function(x) (121 - x^2)/(x^2+1) f2 <- function(x) exp(-x)*(x - 12) try(uniroot(f1, c(0,10))) try(uniroot(f2, c(0, 2))) ##--> error: f() .. 
end points not of opposite sign ## whereas 'extendInt="yes"' simply first enlarges the search interval: u1 <- uniroot(f1, c(0,10),extendInt="yes", trace=1) u2 <- uniroot(f2, c(0,2), extendInt="yes", trace=2) stopifnot(all.equal(u1$root, 11, tolerance = 1e-5), all.equal(u2$root, 12, tolerance = 6e-6)) ## The *danger* of interval extension: ## No way to find a zero of a positive function, but ## numerically, f(-|M|) becomes zero : u3 <- uniroot(exp, c(0,2), extendInt="yes", trace=TRUE) ## Nonsense example (must give an error): tools::assertCondition( uniroot(function(x) 1, 0:1, extendInt="yes"), "error", verbose=TRUE) ## Convergence checking : sinc <- function(x) ifelse(x == 0, 1, sin(x)/x) curve(sinc, -6,18); abline(h=0,v=0, lty=3, col=adjustcolor("gray", 0.8)) uniroot(sinc, c(0,5), extendInt="yes", maxiter=4) #-> "just" a warning ## now with check.conv=TRUE, must signal a convergence error : uniroot(sinc, c(0,5), extendInt="yes", maxiter=4, check.conv=TRUE) ### Weibull cumulative hazard (example origin, Ravi Varadhan): cumhaz <- function(t, a, b) b * (t/b)^a froot <- function(x, u, a, b) cumhaz(x, a, b) - u n <- 1000 u <- -log(runif(n)) a <- 1/2 b <- 1 ## Find failure times ru <- sapply(u, function(x) uniroot(froot, u=x, a=a, b=b, interval= c(1.e-14, 1e04), extendInt="yes")$root) ru2 <- sapply(u, function(x) uniroot(froot, u=x, a=a, b=b, interval= c(0.01, 10), extendInt="yes")$root) stopifnot(all.equal(ru, ru2, tolerance = 6e-6)) r1 <- uniroot(froot, u= 0.99, a=a, b=b, interval= c(0.01, 10), extendInt="up") stopifnot(all.equal(0.99, cumhaz(r1$root, a=a, b=b))) ## An error if 'extendInt' assumes "wrong zero-crossing direction": uniroot(froot, u= 0.99, a=a, b=b, interval= c(0.1, 10), extendInt="down") ``` r None `NegBinomial` The Negative Binomial Distribution ------------------------------------------------- ### Description Density, distribution function, quantile function and random generation for the negative binomial distribution with parameters `size` and `prob`. ### Usage ``` dnbinom(x, size, prob, mu, log = FALSE) pnbinom(q, size, prob, mu, lower.tail = TRUE, log.p = FALSE) qnbinom(p, size, prob, mu, lower.tail = TRUE, log.p = FALSE) rnbinom(n, size, prob, mu) ``` ### Arguments | | | | --- | --- | | `x` | vector of (non-negative integer) quantiles. | | `q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `size` | target for number of successful trials, or dispersion parameter (the shape parameter of the gamma mixing distribution). Must be strictly positive, need not be integer. | | `prob` | probability of success in each trial. `0 < prob <= 1`. | | `mu` | alternative parametrization via mean: see ‘Details’. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The negative binomial distribution with `size` *= n* and `prob` *= p* has density *Γ(x+n)/(Γ(n) x!) p^n (1-p)^x* for *x = 0, 1, 2, …*, *n > 0* and *0 < p ≤ 1*. This represents the number of failures which occur in a sequence of Bernoulli trials before a target number of successes is reached. The mean is *μ = n(1-p)/p* and variance *n(1-p)/p^2*. A negative binomial distribution can also arise as a mixture of Poisson distributions with mean distributed as a gamma distribution (see `[pgamma](gammadist)`) with scale parameter `(1 - prob)/prob` and shape parameter `size`.
(This definition allows non-integer values of `size`.) An alternative parametrization (often used in ecology) is by the *mean* `mu` (see above), and `size`, the *dispersion parameter*, where `prob` = `size/(size+mu)`. The variance is `mu + mu^2/size` in this parametrization. If an element of `x` is not integer, the result of `dnbinom` is zero, with a warning. The case `size == 0` is the distribution concentrated at zero. This is the limiting distribution for `size` approaching zero, even if `mu` rather than `prob` is held constant. Notice though, that the mean of the limit distribution is 0, whatever the value of `mu`. The quantile is defined as the smallest value *x* such that *F(x) ≥ p*, where *F* is the distribution function. ### Value `dnbinom` gives the density, `pnbinom` gives the distribution function, `qnbinom` gives the quantile function, and `rnbinom` generates random deviates. Invalid `size` or `prob` will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rnbinom`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. `rnbinom` returns a vector of type [integer](../../base/html/integer) unless generated values exceed the maximum representable integer when `[double](../../base/html/double)` values are returned since R version 4.0.0. ### Source `dnbinom` computes via binomial probabilities, using code contributed by Catherine Loader (see `[dbinom](family)`). `pnbinom` uses `[pbeta](beta)`. `qnbinom` uses the Cornish–Fisher Expansion to include a skewness correction to a normal approximation, followed by a search. `rnbinom` uses the derivation as a gamma mixture of Poissons, see Devroye, L. (1986) *Non-Uniform Random Variate Generation.* Springer-Verlag, New York. Page 480. ### See Also [Distributions](distributions) for standard distributions, including `[dbinom](family)` for the binomial, `[dpois](family)` for the Poisson and `[dgeom](geometric)` for the geometric distribution, which is a special case of the negative binomial. ### Examples ``` require(graphics) x <- 0:11 dnbinom(x, size = 1, prob = 1/2) * 2^(1 + x) # == 1 126 / dnbinom(0:8, size = 2, prob = 1/2) #- theoretically integer ## Cumulative ('p') = Sum of discrete prob.s ('d'); Relative error : summary(1 - cumsum(dnbinom(x, size = 2, prob = 1/2)) / pnbinom(x, size = 2, prob = 1/2)) x <- 0:15 size <- (1:20)/4 persp(x, size, dnb <- outer(x, size, function(x,s) dnbinom(x, s, prob = 0.4)), xlab = "x", ylab = "s", zlab = "density", theta = 150) title(tit <- "negative binomial density(x,s, pr = 0.4) vs. x & s") image (x, size, log10(dnb), main = paste("log [", tit, "]")) contour(x, size, log10(dnb), add = TRUE) ## Alternative parametrization x1 <- rnbinom(500, mu = 4, size = 1) x2 <- rnbinom(500, mu = 4, size = 10) x3 <- rnbinom(500, mu = 4, size = 100) h1 <- hist(x1, breaks = 20, plot = FALSE) h2 <- hist(x2, breaks = h1$breaks, plot = FALSE) h3 <- hist(x3, breaks = h1$breaks, plot = FALSE) barplot(rbind(h1$counts, h2$counts, h3$counts), beside = TRUE, col = c("red","blue","cyan"), names.arg = round(h1$breaks[-length(h1$breaks)])) ```
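The gamma mixture mentioned under ‘Details’ and ‘Source’ can be checked by simulation (a sketch; agreement is only up to Monte Carlo error):

```
set.seed(1)
n <- 1e5; size <- 3; prob <- 0.4
lam <- rgamma(n, shape = size, scale = (1 - prob)/prob)
x <- rpois(n, lam)                        # should be ~ nbinom(size, prob)
cbind(empirical   = tabulate(x + 1, nbins = 8)/n,
      theoretical = dnbinom(0:7, size, prob))
```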
r None `simulate` Simulate Responses ------------------------------ ### Description Simulate one or more responses from the distribution corresponding to a fitted model object. ### Usage ``` simulate(object, nsim = 1, seed = NULL, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object representing a fitted model. | | `nsim` | number of response vectors to simulate. Defaults to `1`. | | `seed` | an object specifying if and how the random number generator should be initialized (‘seeded’). For the "lm" method, either `NULL` or an integer that will be used in a call to `set.seed` before simulating the response vectors. If set, the value is saved as the `"seed"` attribute of the returned value. The default, `NULL` will not change the random generator state, and return `[.Random.seed](../../base/html/random)` as the `"seed"` attribute, see ‘Value’. | | `...` | additional optional arguments. | ### Details This is a generic function. Consult the individual modeling functions for details on how to use this function. Package stats has a method for `"lm"` objects which is used for `<lm>` and `<glm>` fits. There is a method for fits from `glm.nb` in package [MASS](https://CRAN.R-project.org/package=MASS), and hence the case of negative binomial families is not covered by the `"lm"` method. The methods for linear models fitted by `lm` or `glm(family = "gaussian")` assume that any weights which have been supplied are inversely proportional to the error variance. For other GLMs the (optional) `simulate` component of the `<family>` object is used—there is no appropriate simulation method for ‘quasi’ models as they are specified only up to two moments. For binomial and Poisson GLMs the dispersion is fixed at one. Integer prior weights *w\_i* can be interpreted as meaning that observation *i* is an average of *w\_i* observations, which is natural for binomials specified as proportions but less so for a Poisson, for which prior weights are ignored with a warning. For a gamma GLM the shape parameter is estimated by maximum likelihood (using function `[gamma.shape](../../mass/html/gamma.shape.glm)` in package [MASS](https://CRAN.R-project.org/package=MASS)). The interpretation of weights is as multipliers to a basic shape parameter, since dispersion is inversely proportional to shape. For an inverse gaussian GLM the model assumed is *IG(μ\_i, λ w\_i)* (see <https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution>) where *λ* is estimated by the inverse of the dispersion estimate for the fit. The variance is *μ\_i^3/(λ w\_i)* and hence inversely proportional to the prior weights. The simulation is done by function `[rinvGauss](../../suppdists/html/invgauss)` from the [SuppDists](https://CRAN.R-project.org/package=SuppDists) package, which must be installed. ### Value Typically, a list of length `nsim` of simulated responses. Where appropriate the result can be a data frame (which is a special type of list). For the `"lm"` method, the result is a data frame with an attribute `"seed"`. If argument `seed` is `NULL`, the attribute is the value of `[.Random.seed](../../base/html/random)` before the simulation was started; otherwise it is the value of the argument with a `"kind"` attribute with value `as.list([RNGkind](../../base/html/random)())`. ### See Also `[RNG](../../base/html/random)` about random number generation in **R**, `<fitted.values>` and `<residuals>` for related methods; `<glm>`, `<lm>` for model fitting. 
There are further examples in the ‘simulate.R’ tests file in the sources for package stats. ### Examples ``` x <- 1:5 mod1 <- lm(c(1:3, 7, 6) ~ x) S1 <- simulate(mod1, nsim = 4) ## repeat the simulation: .Random.seed <- attr(S1, "seed") identical(S1, simulate(mod1, nsim = 4)) S2 <- simulate(mod1, nsim = 200, seed = 101) rowMeans(S2) # should be about the same as fitted(mod1) ## repeat identically: (sseed <- attr(S2, "seed")) # seed; RNGkind as attribute stopifnot(identical(S2, simulate(mod1, nsim = 200, seed = sseed))) ## To be sure about the proper RNGkind, e.g., after RNGversion("2.7.0") ## first set the RNG kind, then simulate do.call(RNGkind, attr(sseed, "kind")) identical(S2, simulate(mod1, nsim = 200, seed = sseed)) ## Binomial GLM examples yb1 <- matrix(c(4, 4, 5, 7, 8, 6, 6, 5, 3, 2), ncol = 2) modb1 <- glm(yb1 ~ x, family = binomial) S3 <- simulate(modb1, nsim = 4) # each column of S3 is a two-column matrix. x2 <- sort(runif(100)) yb2 <- rbinom(100, prob = plogis(2*(x2-1)), size = 1) yb2 <- factor(1 + yb2, labels = c("failure", "success")) modb2 <- glm(yb2 ~ x2, family = binomial) S4 <- simulate(modb2, nsim = 4) # each column of S4 is a factor ``` r None `extractAIC` Extract AIC from a Fitted Model --------------------------------------------- ### Description Computes the (generalized) Akaike **A**n **I**nformation **C**riterion for a fitted parametric model. ### Usage ``` extractAIC(fit, scale, k = 2, ...) ``` ### Arguments | | | | --- | --- | | `fit` | fitted model, usually the result of a fitter like `<lm>`. | | `scale` | optional numeric specifying the scale parameter of the model, see `scale` in `<step>`. Currently only used in the `"lm"` method, where `scale` specifies the estimate of the error variance, and `scale = 0` indicates that it is to be estimated by maximum likelihood. | | `k` | numeric specifying the ‘weight’ of the *equivalent degrees of freedom* (*=:* `edf`) part in the AIC formula. | | `...` | further arguments (currently unused in base **R**). | ### Details This is a generic function, with methods in base **R** for classes `"aov"`, `"glm"` and `"lm"` as well as for `"negbin"` (package [MASS](https://CRAN.R-project.org/package=MASS)) and `"coxph"` and `"survreg"` (package [survival](https://CRAN.R-project.org/package=survival)). The criterion used is *AIC = - 2\*log L + k \* edf,* where *L* is the likelihood and `edf` the equivalent degrees of freedom (i.e., the number of free parameters for usual parametric models) of `fit`. For linear models with unknown scale (i.e., for `<lm>` and `<aov>`), *-2 log L* is computed from the *deviance* and uses a different additive constant to `[logLik](loglik)` and hence `[AIC](aic)`. If *RSS* denotes the (weighted) residual sum of squares then `extractAIC` uses for *-2 log L* the formulae *RSS/s - n* (corresponding to Mallows' *Cp*) in the case of known scale *s* and *n log (RSS/n)* for unknown scale. `[AIC](aic)` only handles unknown scale and uses the formula *n\*log(RSS/n) + n + n\*log 2pi - sum(log w)* where *w* are the weights. Further `AIC` counts the scale estimation as a parameter in the `edf` and `extractAIC` does not. For `glm` fits the family's `aic()` function is used to compute the AIC: see the note under `logLik` about the assumptions this makes. `k = 2` corresponds to the traditional AIC, using `k = log(n)` provides the BIC (Bayesian IC) instead. Note that the methods for this function may differ in their assumptions from those of methods for `[AIC](aic)` (usually *via* a method for `[logLik](loglik)`). 
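For an unweighted `"lm"` fit with unknown scale, the formulae above imply that `AIC` exceeds the second element of `extractAIC` by the constant *n\*log(2 pi) + n + 2* (the final 2 because `AIC` counts the scale estimate as a parameter). A quick check using the builtin `cars` data:

```
fit <- lm(dist ~ speed, data = cars)
n <- nobs(fit)
AIC(fit) - extractAIC(fit)[2]   # additive constant between the two criteria
n * log(2 * pi) + n + 2         # should match
```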
We have already mentioned the case of `"lm"` models with estimated scale, and there are similar issues in the `"glm"` and `"negbin"` methods where the dispersion parameter may or may not be taken as ‘free’. This is immaterial as `extractAIC` is only used to compare models of the same class (where only differences in AIC values are considered). ### Value A numeric vector of length 2, with first and second elements giving | | | | --- | --- | | `edf` | the ‘**e**quivalent **d**egrees of **f**reedom’ for the fitted model `fit`. | | `AIC` | the (generalized) Akaike Information Criterion for `fit`. | ### Note This function is used in `<add1>`, `[drop1](add1)` and `<step>` and the similar functions in package [MASS](https://CRAN.R-project.org/package=MASS) from which it was adopted. ### Author(s) B. D. Ripley ### References Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* New York: Springer (4th ed). ### See Also `[AIC](aic)`, `<deviance>`, `<add1>`, `<step>` ### Examples ``` utils::example(glm) extractAIC(glm.D93) #>> 5 15.129 ``` r None `predict.HoltWinters` Prediction Function for Fitted Holt-Winters Models ------------------------------------------------------------------------- ### Description Computes predictions and prediction intervals for models fitted by the Holt-Winters method. ### Usage ``` ## S3 method for class 'HoltWinters' predict(object, n.ahead = 1, prediction.interval = FALSE, level = 0.95, ...) ``` ### Arguments | | | | --- | --- | | `object` | An object of class `HoltWinters`. | | `n.ahead` | Number of future periods to predict. | | `prediction.interval` | logical. If `TRUE`, the lower and upper bounds of the corresponding prediction intervals are computed. | | `level` | Confidence level for the prediction interval. | | `...` | arguments passed to or from other methods. | ### Value A time series of the predicted values. If prediction intervals are requested, a multiple time series is returned with columns `fit`, `lwr` and `upr` for the predicted values and the lower and upper bounds respectively. ### Author(s) David Meyer [[email protected]](mailto:[email protected]) ### References C. C. Holt (1957) Forecasting seasonals and trends by exponentially weighted moving averages, *ONR Research Memorandum, Carnegie Institute of Technology* **52**. P. R. Winters (1960). Forecasting sales by exponentially weighted moving averages. *Management Science*, **6**, 324–342. doi: [10.1287/mnsc.6.3.324](https://doi.org/10.1287/mnsc.6.3.324). ### See Also `[HoltWinters](holtwinters)` ### Examples ``` require(graphics) m <- HoltWinters(co2) p <- predict(m, 50, prediction.interval = TRUE) plot(m, p) ``` r None `Distributions` Distributions in the stats package --------------------------------------------------- ### Description Density, cumulative distribution function, quantile function and random variate generation for many standard probability distributions are available in the stats package. ### Details The functions for the density/mass function, cumulative distribution function, quantile function and random variate generation are named in the form `dxxx`, `pxxx`, `qxxx` and `rxxx` respectively. For the beta distribution see `[dbeta](beta)`. For the binomial (including Bernoulli) distribution see `[dbinom](family)`. For the Cauchy distribution see `[dcauchy](cauchy)`. For the chi-squared distribution see `[dchisq](chisquare)`. For the exponential distribution see `[dexp](exponential)`. For the F distribution see `[df](fdist)`.
For the gamma distribution see `[dgamma](gammadist)`. For the geometric distribution see `[dgeom](geometric)`. (This is also a special case of the negative binomial.) For the hypergeometric distribution see `[dhyper](hypergeometric)`. For the log-normal distribution see `[dlnorm](lognormal)`. For the multinomial distribution see `[dmultinom](multinom)`. For the negative binomial distribution see `[dnbinom](negbinomial)`. For the normal distribution see `[dnorm](normal)`. For the Poisson distribution see `[dpois](family)`. For the Student's t distribution see `[dt](tdist)`. For the uniform distribution see `[dunif](uniform)`. For the Weibull distribution see `[dweibull](weibull)`. For less common distributions of test statistics see `[pbirthday](birthday)`, `[dsignrank](signrank)`, `[ptukey](tukey)` and `[dwilcox](wilcoxon)` (and see the ‘See Also’ section of `<cor.test>`). ### See Also `[RNG](../../base/html/random)` about random number generation in **R**. The CRAN task view on distributions, <https://CRAN.R-project.org/view=Distributions>, mentioning several CRAN packages for additional distributions. r None `poisson.test` Exact Poisson tests ----------------------------------- ### Description Performs an exact test of a simple null hypothesis about the rate parameter in a Poisson distribution, or for the ratio between two rate parameters. ### Usage ``` poisson.test(x, T = 1, r = 1, alternative = c("two.sided", "less", "greater"), conf.level = 0.95) ``` ### Arguments | | | | --- | --- | | `x` | number of events. A vector of length one or two. | | `T` | time base for event count. A vector of length one or two. | | `r` | hypothesized rate or rate ratio | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"`, `"greater"` or `"less"`. You can specify just the initial letter. | | `conf.level` | confidence level for the returned confidence interval. | ### Details Confidence intervals are computed similarly to those of `<binom.test>` in the one-sample case, and using `<binom.test>` in the two-sample case. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the number of events (in the first sample if there are two). | | `parameter` | the corresponding expected count | | `p.value` | the p-value of the test. | | `conf.int` | a confidence interval for the rate or rate ratio. | | `estimate` | the estimated rate or rate ratio. | | `null.value` | the rate or rate ratio under the null, `r`. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | the character string `"Exact Poisson test"` or `"Comparison of Poisson rates"` as appropriate. | | `data.name` | a character string giving the names of the data. | ### Note The rate parameter in Poisson data is often given based on a “time on test” or similar quantity (person-years, population size, or expected number of cases from mortality tables). This is the role of the `T` argument. The one-sample case is effectively the binomial test with a very large `n`. The two-sample case is converted to a binomial test by conditioning on the total event count, and the rate ratio is directly related to the odds in that binomial distribution.
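That conversion is easy to check numerically: with a null rate ratio of 1, the two-sample p-value should agree with the corresponding binomial test (the counts and time bases below are made up):

```
x  <- c(11, 21)                      # event counts
tb <- c(800, 3011)                   # time bases
pt <- poisson.test(x, tb)            # two-sample comparison, r = 1
bt <- binom.test(x[1], sum(x), p = tb[1]/sum(tb))
all.equal(pt$p.value, bt$p.value)    # should be TRUE
```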
### See Also `<binom.test>` ### Examples ``` ### These are paraphrased from data sets in the ISwR package ## SMR, Welsh Nickel workers poisson.test(137, 24.19893) ## eba1977, compare Fredericia to other three cities for ages 55-59 poisson.test(c(11, 6+8+7), c(800, 1083+1050+878)) ``` r None `offset` Include an Offset in a Model Formula ---------------------------------------------- ### Description An offset is a term to be added to a linear predictor, such as in a generalised linear model, with known coefficient 1 rather than an estimated coefficient. ### Usage ``` offset(object) ``` ### Arguments | | | | --- | --- | | `object` | An offset to be included in a model frame | ### Details There can be more than one offset in a model formula, but `-` is not supported for `offset` terms (and is equivalent to `+`). ### Value The input value. ### See Also `[model.offset](model.extract)`, `<model.frame>`. For examples see `<glm>` and `[Insurance](../../mass/html/insurance)` in package [MASS](https://CRAN.R-project.org/package=MASS). r None `df.residual` Residual Degrees-of-Freedom ------------------------------------------ ### Description Returns the residual degrees-of-freedom extracted from a fitted model object. ### Usage ``` df.residual(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which the degrees-of-freedom are desired. | | `...` | additional optional arguments. | ### Details This is a generic function which can be used to extract residual degrees-of-freedom for fitted models. Consult the individual modeling functions for details on how to use this function. The default method just extracts the `df.residual` component. ### Value The value of the residual degrees-of-freedom extracted from the object `object`. ### See Also `<deviance>`, `<glm>`, `<lm>`. r None `plot.HoltWinters` Plot function for HoltWinters objects --------------------------------------------------------- ### Description Produces a chart of the original time series along with the fitted values. Optionally, predicted values (and their confidence bounds) can also be plotted. ### Usage ``` ## S3 method for class 'HoltWinters' plot(x, predicted.values = NA, intervals = TRUE, separator = TRUE, col = 1, col.predicted = 2, col.intervals = 4, col.separator = 1, lty = 1, lty.predicted = 1, lty.intervals = 1, lty.separator = 3, ylab = "Observed / Fitted", main = "Holt-Winters filtering", ylim = NULL, ...) ``` ### Arguments | | | | --- | --- | | `x` | Object of class `"HoltWinters"` | | `predicted.values` | Predicted values as returned by `predict.HoltWinters` | | `intervals` | If `TRUE`, the prediction intervals are plotted (default). | | `separator` | If `TRUE`, a separating line between fitted and predicted values is plotted (default). | | `col, lty` | Color/line type of original data (default: black solid). | | `col.predicted, lty.predicted` | Color/line type of fitted and predicted values (default: red solid). | | `col.intervals, lty.intervals` | Color/line type of prediction intervals (default: blue solid). | | `col.separator, lty.separator` | Color/line type of observed/predicted values separator (default: black dashed). | | `ylab` | Label of the y-axis. | | `main` | Main title. | | `ylim` | Limits of the y-axis. If `NULL`, the range is chosen such that the plot contains the original series, the fitted values, and the predicted values if any. | | `...` | Other graphics parameters. | ### Author(s) David Meyer [[email protected]](mailto:[email protected]) ### References C. C.
Holt (1957) Forecasting trends and seasonals by exponentially weighted moving averages, *ONR Research Memorandum, Carnegie Institute of Technology* **52**. P. R. Winters (1960). Forecasting sales by exponentially weighted moving averages. *Management Science*, **6**, 324–342. doi: [10.1287/mnsc.6.3.324](https://doi.org/10.1287/mnsc.6.3.324). ### See Also `[HoltWinters](holtwinters)`, `[predict.HoltWinters](predict.holtwinters)` r None `lm.summaries` Accessing Linear Model Fits ------------------------------------------- ### Description All these functions are `[methods](../../utils/html/methods)` for class `"lm"` objects. ### Usage ``` ## S3 method for class 'lm' family(object, ...) ## S3 method for class 'lm' formula(x, ...) ## S3 method for class 'lm' residuals(object, type = c("working", "response", "deviance", "pearson", "partial"), ...) ## S3 method for class 'lm' labels(object, ...) ``` ### Arguments | | | | --- | --- | | `object, x` | an object inheriting from class `lm`, usually the result of a call to `<lm>` or `<aov>`. | | `...` | further arguments passed to or from other methods. | | `type` | the type of residuals which should be returned. Can be abbreviated. | ### Details The generic accessor functions `coef`, `effects`, `fitted` and `residuals` can be used to extract various useful features of the value returned by `lm`. The working and response residuals are ‘observed - fitted’. The deviance and Pearson residuals are weighted residuals, scaled by the square root of the weights used in fitting. The partial residuals are a matrix with each column formed by omitting a term from the model. In all these, zero weight cases are never omitted (as opposed to the standardized `[rstudent](influence.measures)` residuals, and the `<weighted.residuals>`). How `residuals` treats cases with missing values in the original fit is determined by the `na.action` argument of that fit. If `na.action = na.omit` omitted cases will not appear in the residuals, whereas if `na.action = na.exclude` they will appear, with residual value `NA`. See also `[naresid](nafns)`. The `"lm"` method for generic `[labels](../../base/html/labels)` returns the term labels for estimable terms, that is the names of the terms with at least one estimable coefficient. ### References Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also The model fitting function `<lm>`, `<anova.lm>`. `<coef>`, `<deviance>`, `<df.residual>`, `<effects>`, `[fitted](fitted.values)`, `<glm>` for **generalized** linear models, `[influence](lm.influence)` (etc on that page) for regression diagnostics, `<weighted.residuals>`, `<residuals>`, `[residuals.glm](glm.summaries)`, `<summary.lm>`, `<weights>`. <influence.measures> for deletion diagnostics, including standardized (`[rstandard](influence.measures)`) and studentized (`[rstudent](influence.measures)`) residuals. ### Examples ``` ##-- Continuing the lm(.) example: coef(lm.D90) # the bare coefficients ## The 2 basic regression diagnostic plots [plot.lm(.) is preferred] plot(resid(lm.D90), fitted(lm.D90)) # Tukey-Anscombe's abline(h = 0, lty = 2, col = "gray") qqnorm(residuals(lm.D90)) ```
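As a complement to the residual-type descriptions above, a small sketch: for an unweighted `lm` fit there are no weights to rescale by, so the `"working"`, `"response"`, `"deviance"` and `"pearson"` residuals all coincide.

```
x <- 1:5
fit <- lm(c(1:3, 7, 6) ~ x)
r <- sapply(c("working", "response", "deviance", "pearson"),
            function(t) residuals(fit, type = t))
stopifnot(all.equal(r[, 1], r[, 2]), all.equal(r[, 1], r[, 3]),
          all.equal(r[, 1], r[, 4]))
```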
r None `glm.summaries` Accessing Generalized Linear Model Fits -------------------------------------------------------- ### Description These functions are all `[methods](../../utils/html/methods)` for class `glm` or `summary.glm` objects. ### Usage ``` ## S3 method for class 'glm' family(object, ...) ## S3 method for class 'glm' residuals(object, type = c("deviance", "pearson", "working", "response", "partial"), ...) ``` ### Arguments | | | | --- | --- | | `object` | an object of class `glm`, typically the result of a call to `<glm>`. | | `type` | the type of residuals which should be returned. The alternatives are: `"deviance"` (default), `"pearson"`, `"working"`, `"response"`, and `"partial"`. Can be abbreviated. | | `...` | further arguments passed to or from other methods. | ### Details The references define the types of residuals: Davison & Snell is a good reference for the usages of each. The partial residuals are a matrix of working residuals, with each column formed by omitting a term from the model. How `residuals` treats cases with missing values in the original fit is determined by the `na.action` argument of that fit. If `na.action = na.omit` omitted cases will not appear in the residuals, whereas if `na.action = na.exclude` they will appear, with residual value `NA`. See also `[naresid](nafns)`. For fits done with `y = FALSE` the response values are computed from other components. ### References Davison, A. C. and Snell, E. J. (1991) *Residuals and diagnostics.* In: Statistical Theory and Modelling. In Honour of Sir David Cox, FRS, eds. Hinkley, D. V., Reid, N. and Snell, E. J., Chapman & Hall. Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. McCullagh, P. and Nelder, J. A. (1989) *Generalized Linear Models.* London: Chapman and Hall. ### See Also `<glm>` for computing `glm.obj`, `<anova.glm>`; the corresponding *generic* functions, `<summary.glm>`, `<coef>`, `<deviance>`, `<df.residual>`, `<effects>`, `[fitted](fitted.values)`, `<residuals>`. <influence.measures> for deletion diagnostics, including standardized (`[rstandard](influence.measures)`) and studentized (`[rstudent](influence.measures)`) residuals. r None `arima.sim` Simulate from an ARIMA Model ----------------------------------------- ### Description Simulate from an ARIMA model. ### Usage ``` arima.sim(model, n, rand.gen = rnorm, innov = rand.gen(n, ...), n.start = NA, start.innov = rand.gen(n.start, ...), ...) ``` ### Arguments | | | | --- | --- | | `model` | A list with component `ar` and/or `ma` giving the AR and MA coefficients respectively. Optionally a component `order` can be used. An empty list gives an ARIMA(0, 0, 0) model, that is white noise. | | `n` | length of output series, before un-differencing. A strictly positive integer. | | `rand.gen` | optional: a function to generate the innovations. | | `innov` | an optional time series of innovations. If not provided, `rand.gen` is used. | | `n.start` | length of ‘burn-in’ period. If `NA`, the default, a reasonable value is computed. | | `start.innov` | an optional time series of innovations to be used for the burn-in period. If supplied there must be at least `n.start` values (and `n.start` is by default computed inside the function). | | `...` | additional arguments for `rand.gen`. Most usefully, the standard deviation of the innovations generated by `rnorm` can be specified by `sd`.
| ### Details See `<arima>` for the precise definition of an ARIMA model. The ARMA model is checked for stationarity. ARIMA models are specified via the `order` component of `model`, in the same way as for `<arima>`. Other aspects of the `order` component are ignored, but inconsistent specifications of the MA and AR orders are detected. The un-differencing assumes previous values of zero, and to remind the user of this, those values are returned. Random inputs for the ‘burn-in’ period are generated by calling `rand.gen`. ### Value A time-series object of class `"ts"`. ### See Also `<arima>` ### Examples ``` require(graphics) arima.sim(n = 63, list(ar = c(0.8897, -0.4858), ma = c(-0.2279, 0.2488)), sd = sqrt(0.1796)) # mildly long-tailed arima.sim(n = 63, list(ar = c(0.8897, -0.4858), ma = c(-0.2279, 0.2488)), rand.gen = function(n, ...) sqrt(0.1796) * rt(n, df = 5)) # An ARIMA simulation ts.sim <- arima.sim(list(order = c(1,1,0), ar = 0.7), n = 200) ts.plot(ts.sim) ``` r None `terms` Model Terms -------------------- ### Description The function `terms` is a generic function which can be used to extract *terms* objects from various kinds of **R** data objects. ### Usage ``` terms(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | object used to select a method to dispatch. | | `...` | further arguments passed to or from other methods. | ### Details There are methods for classes `"aovlist"`, `"terms"` and `"formula"` (see `<terms.formula>`): the default method just extracts the `terms` component of the object, or failing that a `"terms"` attribute (as used by `<model.frame>`). There are `[print](../../base/html/print)` and `[labels](../../base/html/labels)` methods for class `"terms"`: the latter prints the term labels (see `<terms.object>`). ### Value An object of class `c("terms", "formula")` which contains the *terms* representation of a symbolic model. See `<terms.object>` for its structure. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical models.* Chapter 2 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<terms.object>`, `<terms.formula>`, `<lm>`, `<glm>`, `<formula>`. r None `ARMAacf` Compute Theoretical ACF for an ARMA Process ------------------------------------------------------ ### Description Compute the theoretical autocorrelation function or partial autocorrelation function for an ARMA process. ### Usage ``` ARMAacf(ar = numeric(), ma = numeric(), lag.max = r, pacf = FALSE) ``` ### Arguments | | | | --- | --- | | `ar` | numeric vector of AR coefficients | | `ma` | numeric vector of MA coefficients | | `lag.max` | integer. Maximum lag required. Defaults to `max(p, q+1)`, where `p, q` are the numbers of AR and MA terms respectively. | | `pacf` | logical. Should the partial autocorrelations be returned? | ### Details The methods used follow Brockwell & Davis (1991, section 3.3). Their equations (3.3.8) are solved for the autocovariances at lags *0, …, max(p, q+1)*, and the remaining autocorrelations are given by a recursive filter. ### Value A vector of (partial) autocorrelations, named by the lags. ### References Brockwell, P. J. and Davis, R. A. (1991) *Time Series: Theory and Methods*, Second Edition. Springer. ### See Also `<arima>`, `[ARMAtoMA](armatoma)`, `[acf2AR](acf2ar)` for inverting part of `ARMAacf`; further `<filter>`.
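As a quick plausibility check, the theoretical ACF can be compared with the empirical ACF of a long simulated series; a sketch (the AR and MA coefficients are illustrative):

```
set.seed(42)
x <- arima.sim(list(ar = 0.6, ma = 0.3), n = 1e5)
rbind(theoretical = ARMAacf(ar = 0.6, ma = 0.3, lag.max = 5),
      empirical = drop(acf(x, lag.max = 5, plot = FALSE)$acf))
```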
### Examples ``` ARMAacf(c(1.0, -0.25), 1.0, lag.max = 10) ## Example from Brockwell & Davis (1991, pp.92-4) ## answer: 2^(-n) * (32/3 + 8 * n) /(32/3) n <- 1:10 a.n <- 2^(-n) * (32/3 + 8 * n) /(32/3) (A.n <- ARMAacf(c(1.0, -0.25), 1.0, lag.max = 10)) stopifnot(all.equal(unname(A.n), c(1, a.n))) ARMAacf(c(1.0, -0.25), 1.0, lag.max = 10, pacf = TRUE) zapsmall(ARMAacf(c(1.0, -0.25), lag.max = 10, pacf = TRUE)) ## Cov-Matrix of length-7 sub-sample of AR(1) example: toeplitz(ARMAacf(0.8, lag.max = 7)) ``` r None `rWishart` Random Wishart Distributed Matrices ----------------------------------------------- ### Description Generate `n` random matrices, distributed according to the Wishart distribution with parameters `Sigma` and `df`, *W\_p(Sigma, df)*. ### Usage ``` rWishart(n, df, Sigma) ``` ### Arguments | | | | --- | --- | | `n` | integer sample size. | | `df` | numeric parameter, “degrees of freedom”. | | `Sigma` | positive definite (*p \* p*) “scale” matrix, the matrix parameter of the distribution. | ### Details If *X1,...,Xm, Xi in R^p* is a sample of *m* independent multivariate Gaussians with mean (vector) 0, and covariance matrix *Σ*, the distribution of *M = X'X* is *W\_p(Σ, m)*. Consequently, the expectation of *M* is *E[M] = m \* Sigma.* Further, if `Sigma` is scalar (*p = 1*), the Wishart distribution is a scaled chi-squared (*chi^2*) distribution with `df` degrees of freedom, *W\_1(sigma^2, m) = sigma^2 chi[m]^2*. The component wise variance is *Var(M[i,j]) = m\*(S[i,j]^2 + S[i,i] \* S[j,j]), where S=Sigma.* ### Value a numeric `[array](../../base/html/array)`, say `R`, of dimension *p \* p \* n*, where each `R[,,i]` is a positive definite matrix, a realization of the Wishart distribution *W\_p(Sigma, df)*. ### Author(s) Douglas Bates ### References Mardia, K. V., J. T. Kent, and J. M. Bibby (1979) *Multivariate Analysis*, London: Academic Press. ### See Also `[cov](cor)`, `[rnorm](normal)`, `[rchisq](chisquare)`. ### Examples ``` ## Artificial S <- toeplitz((10:1)/10) set.seed(11) R <- rWishart(1000, 20, S) dim(R) # 10 10 1000 mR <- apply(R, 1:2, mean) # ~= E[ Wish(S, 20) ] = 20 * S stopifnot(all.equal(mR, 20*S, tolerance = .009)) ## See Details, the variance is Va <- 20*(S^2 + tcrossprod(diag(S))) vR <- apply(R, 1:2, var) stopifnot(all.equal(vR, Va, tolerance = 1/16)) ``` r None `ftable.formula` Formula Notation for Flat Contingency Tables -------------------------------------------------------------- ### Description Produce or manipulate a flat contingency table using formula notation. ### Usage ``` ## S3 method for class 'formula' ftable(formula, data = NULL, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a formula object with both left and right hand sides specifying the column and row variables of the flat table. | | `data` | a data frame, list or environment (or similar: see `<model.frame>`) containing the variables to be cross-tabulated, or a contingency table (see below). | | `subset` | an optional vector specifying a subset of observations to be used. Ignored if `data` is a contingency table. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Ignored if `data` is a contingency table. | | `...` | further arguments to the default ftable method may also be passed as arguments, see `[ftable.default](ftable)`. | ### Details This is a method of the generic function `<ftable>`. 
The left and right hand side of `formula` specify the column and row variables, respectively, of the flat contingency table to be created. Only the `+` operator is allowed for combining the variables. A `.` may be used once in the formula to indicate inclusion of all the remaining variables. If `data` is an object of class `"table"` or an array with more than 2 dimensions, it is taken as a contingency table, and hence all entries should be nonnegative. Otherwise, if it is not a flat contingency table (i.e., an object of class `"ftable"`), it should be a data frame or matrix, list or environment containing the variables to be cross-tabulated. In this case, `na.action` is applied to the data to handle missing values, and, after possibly selecting a subset of the data as specified by the `subset` argument, a contingency table is computed from the variables. The contingency table is then collapsed to a flat table, according to the row and column variables specified by `formula`. ### Value A flat contingency table which contains the counts of each combination of the levels of the variables, collapsed into a matrix for suitably displaying the counts. ### See Also `<ftable>`, `[ftable.default](ftable)`; `[table](../../base/html/table)`. ### Examples ``` Titanic x <- ftable(Survived ~ ., data = Titanic) x ftable(Sex ~ Class + Age, data = x) ``` r None `hclust` Hierarchical Clustering --------------------------------- ### Description Hierarchical cluster analysis on a set of dissimilarities and methods for analyzing it. ### Usage ``` hclust(d, method = "complete", members = NULL) ## S3 method for class 'hclust' plot(x, labels = NULL, hang = 0.1, check = TRUE, axes = TRUE, frame.plot = FALSE, ann = TRUE, main = "Cluster Dendrogram", sub = NULL, xlab = NULL, ylab = "Height", ...) ``` ### Arguments | | | | --- | --- | | `d` | a dissimilarity structure as produced by `dist`. | | `method` | the agglomeration method to be used. This should be (an unambiguous abbreviation of) one of `"ward.D"`, `"ward.D2"`, `"single"`, `"complete"`, `"average"` (= UPGMA), `"mcquitty"` (= WPGMA), `"median"` (= WPGMC) or `"centroid"` (= UPGMC). | | `members` | `NULL` or a vector with length size of `d`. See the ‘Details’ section. | | `x` | an object of the type produced by `hclust`. | | `hang` | The fraction of the plot height by which labels should hang below the rest of the plot. A negative value will cause the labels to hang down from 0. | | `check` | logical indicating if the `x` object should be checked for validity. This check is not necessary when `x` is known to be valid such as when it is the direct result of `hclust()`. The default is `check=TRUE`, as invalid inputs may crash **R** due to memory violation in the internal C plotting code. | | `labels` | A character vector of labels for the leaves of the tree. By default the row names or row numbers of the original data are used. If `labels = FALSE` no labels at all are plotted. | | `axes, frame.plot, ann` | logical flags as in `[plot.default](../../graphics/html/plot.default)`. | | `main, sub, xlab, ylab` | character strings for `[title](../../graphics/html/title)`. `sub` and `xlab` have a non-NULL default when there's a `tree$call`. | | `...` | Further graphical arguments. E.g., `cex` controls the size of the labels (if plotted) in the same way as `[text](../../graphics/html/text)`. | ### Details This function performs a hierarchical cluster analysis using a set of dissimilarities for the *n* objects being clustered. 
Initially, each object is assigned to its own cluster and then the algorithm proceeds iteratively, at each stage joining the two most similar clusters, continuing until there is just a single cluster. At each stage distances between clusters are recomputed by the Lance–Williams dissimilarity update formula according to the particular clustering method being used. A number of different clustering methods are provided. *Ward's* minimum variance method aims at finding compact, spherical clusters. The *complete linkage* method finds similar clusters. The *single linkage* method (which is closely related to the minimal spanning tree) adopts a ‘friends of friends’ clustering strategy. The other methods can be regarded as aiming for clusters with characteristics somewhere between the single and complete link methods. Note however, that methods `"median"` and `"centroid"` are *not* leading to a *monotone distance* measure, or equivalently the resulting dendrograms can have so called *inversions* or *reversals* which are hard to interpret, but note the trichotomies in Legendre and Legendre (2012). Two different algorithms are found in the literature for Ward clustering. The one used by option `"ward.D"` (equivalent to the only Ward option `"ward"` in **R** versions *<=* 3.0.3) *does not* implement Ward's (1963) clustering criterion, whereas option `"ward.D2"` implements that criterion (Murtagh and Legendre 2014). With the latter, the dissimilarities are *squared* before cluster updating. Note that `[agnes](../../cluster/html/agnes)(*, method="ward")` corresponds to `hclust(*, "ward.D2")`. If `members != NULL`, then `d` is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons and `members` gives the number of observations per cluster. This way the hierarchical cluster algorithm can be ‘started in the middle of the dendrogram’, e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without `hclust` itself) only for a limited number of distance/linkage combinations, the simplest one being *squared* Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means. In hierarchical cluster displays, a decision is needed at each merge to specify which subtree should go on the left and which on the right. Since, for *n* observations there are *n-1* merges, there are *2^{(n-1)}* possible orderings for the leaves in a cluster tree, or dendrogram. The algorithm used in `hclust` is to order the subtree so that the tighter cluster is on the left (the last, i.e., most recent, merge of the left subtree is at a lower value than the last merge of the right subtree). Single observations are the tightest clusters possible, and merges involving two observations place them in order by their observation sequence number. ### Value An object of class **hclust** which describes the tree produced by the clustering process. The object is a list with components: | | | | --- | --- | | `merge` | an *n-1* by 2 matrix. Row *i* of `merge` describes the merging of clusters at step *i* of the clustering. If an element *j* in the row is negative, then observation *-j* was merged at this stage. If *j* is positive then the merge was with the cluster formed at the (earlier) stage *j* of the algorithm. 
Thus negative entries in `merge` indicate agglomerations of singletons, and positive entries indicate agglomerations of non-singletons. | | `height` | a set of *n-1* real values (non-decreasing for ultrametric trees). The clustering *height*: that is, the value of the criterion associated with the clustering `method` for the particular agglomeration. | | `order` | a vector giving the permutation of the original observations suitable for plotting, in the sense that a cluster plot using this ordering and matrix `merge` will not have crossings of the branches. | | `labels` | labels for each of the objects being clustered. | | `call` | the call which produced the result. | | `method` | the cluster method that has been used. | | `dist.method` | the distance that has been used to create `d` (only returned if the distance object has a `"method"` attribute). | There are `[print](../../base/html/print)`, `[plot](../../graphics/html/plot.default)` and `identify` (see `<identify.hclust>`) methods and the `<rect.hclust>()` function for `hclust` objects. ### Note Method `"centroid"` is typically meant to be used with *squared* Euclidean distances. ### Author(s) The `hclust` function is based on Fortran code contributed to STATLIB by F. Murtagh. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. (S version.) Everitt, B. (1974). *Cluster Analysis*. London: Heinemann Educ. Books. Hartigan, J.A. (1975). *Clustering Algorithms*. New York: Wiley. Sneath, P. H. A. and R. R. Sokal (1973). *Numerical Taxonomy*. San Francisco: Freeman. Anderberg, M. R. (1973). *Cluster Analysis for Applications*. Academic Press: New York. Gordon, A. D. (1999). *Classification*. Second Edition. London: Chapman and Hall / CRC Murtagh, F. (1985). “Multidimensional Clustering Algorithms”, in *COMPSTAT Lectures 4*. Wuerzburg: Physica-Verlag (for algorithmic details of algorithms used). McQuitty, L.L. (1966). Similarity Analysis by Reciprocal Pairs for Discrete and Continuous Data. *Educational and Psychological Measurement*, **26**, 825–831. doi: [10.1177/001316446602600402](https://doi.org/10.1177/001316446602600402). Legendre, P. and L. Legendre (2012). *Numerical Ecology*, 3rd English ed. Amsterdam: Elsevier Science BV. Murtagh, Fionn and Legendre, Pierre (2014). Ward's hierarchical agglomerative clustering method: which algorithms implement Ward's criterion? *Journal of Classification*, **31**, 274–295. doi: [10.1007/s00357-014-9161-z](https://doi.org/10.1007/s00357-014-9161-z). ### See Also `<identify.hclust>`, `<rect.hclust>`, `<cutree>`, `<dendrogram>`, `<kmeans>`. For the Lance–Williams formula and methods that apply it generally, see `[agnes](../../cluster/html/agnes)` from package [cluster](https://CRAN.R-project.org/package=cluster). ### Examples ``` require(graphics) ### Example 1: Violent crime rates by US state hc <- hclust(dist(USArrests), "ave") plot(hc) plot(hc, hang = -1) ## Do the same with centroid clustering and *squared* Euclidean distance, ## cut the tree into ten clusters and reconstruct the upper part of the ## tree from the cluster centers. 
hc <- hclust(dist(USArrests)^2, "cen") memb <- cutree(hc, k = 10) cent <- NULL for(k in 1:10){ cent <- rbind(cent, colMeans(USArrests[memb == k, , drop = FALSE])) } hc1 <- hclust(dist(cent)^2, method = "cen", members = table(memb)) opar <- par(mfrow = c(1, 2)) plot(hc, labels = FALSE, hang = -1, main = "Original Tree") plot(hc1, labels = FALSE, hang = -1, main = "Re-start from 10 clusters") par(opar) ### Example 2: Straight-line distances among 10 US cities ## Compare the results of algorithms "ward.D" and "ward.D2" mds2 <- -cmdscale(UScitiesD) plot(mds2, type="n", axes=FALSE, ann=FALSE) text(mds2, labels=rownames(mds2), xpd = NA) hcity.D <- hclust(UScitiesD, "ward.D") # "wrong" hcity.D2 <- hclust(UScitiesD, "ward.D2") opar <- par(mfrow = c(1, 2)) plot(hcity.D, hang=-1) plot(hcity.D2, hang=-1) par(opar) ```
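To make the `merge` and `height` components described under ‘Value’ concrete, a short sketch continuing with the `USArrests` fit:

```
hc <- hclust(dist(USArrests), "ave")
head(hc$merge)  # negative entries: singletons; positive: clusters from earlier steps
head(hc$height) # the criterion value at each agglomeration
```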
r None `qqnorm` Quantile-Quantile Plots --------------------------------- ### Description `qqnorm` is a generic function the default method of which produces a normal QQ plot of the values in `y`. `qqline` adds a line to a “theoretical”, by default normal, quantile-quantile plot which passes through the `probs` quantiles, by default the first and third quartiles. `qqplot` produces a QQ plot of two datasets. Graphical parameters may be given as arguments to `qqnorm`, `qqplot` and `qqline`. ### Usage ``` qqnorm(y, ...) ## Default S3 method: qqnorm(y, ylim, main = "Normal Q-Q Plot", xlab = "Theoretical Quantiles", ylab = "Sample Quantiles", plot.it = TRUE, datax = FALSE, ...) qqline(y, datax = FALSE, distribution = qnorm, probs = c(0.25, 0.75), qtype = 7, ...) qqplot(x, y, plot.it = TRUE, xlab = deparse1(substitute(x)), ylab = deparse1(substitute(y)), ...) ``` ### Arguments | | | | --- | --- | | `x` | The first sample for `qqplot`. | | `y` | The second or only data sample. | | `xlab, ylab, main` | plot labels. The `xlab` and `ylab` refer to the y and x axes respectively if `datax = TRUE`. | | `plot.it` | logical. Should the result be plotted? | | `datax` | logical. Should data values be on the x-axis? | | `distribution` | quantile function for reference theoretical distribution. | | `probs` | numeric vector of length two, representing probabilities. Corresponding quantile pairs define the line drawn. | | `qtype` | the `type` of quantile computation used in `<quantile>`. | | `ylim, ...` | graphical parameters. | ### Value For `qqnorm` and `qqplot`, a list with components | | | | --- | --- | | `x` | The x coordinates of the points that were/would be plotted | | `y` | The original `y` vector, i.e., the corresponding y coordinates *including `[NA](../../base/html/na)`s*. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<ppoints>`, used by `qqnorm` to generate approximations to expected order statistics for a normal distribution. ### Examples ``` require(graphics) y <- rt(200, df = 5) qqnorm(y); qqline(y, col = 2) qqplot(y, rt(300, df = 5)) qqnorm(precip, ylab = "Precipitation [in/yr] for 70 US cities") ## "QQ-Chisquare" : -------------------------- y <- rchisq(500, df = 3) ## Q-Q plot for Chi^2 data against true theoretical distribution: qqplot(qchisq(ppoints(500), df = 3), y, main = expression("Q-Q plot for" ~~ {chi^2}[nu == 3])) qqline(y, distribution = function(p) qchisq(p, df = 3), probs = c(0.1, 0.6), col = 2) mtext("qqline(*, dist = qchisq(., df=3), prob = c(0.1, 0.6))") ## (Note that the above uses ppoints() with a = 1/2, giving the ## probability points for quantile type 5: so theoretically, using ## qqline(qtype = 5) might be preferable.) ``` r None `SSmicmen` Self-Starting Nls Michaelis-Menten Model ---------------------------------------------------- ### Description This `selfStart` model evaluates the Michaelis-Menten model and its gradient. It has an `initial` attribute that will evaluate initial estimates of the parameters `Vm` and `K` ### Usage ``` SSmicmen(input, Vm, K) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `Vm` | a numeric parameter representing the maximum value of the response. | | `K` | a numeric parameter representing the `input` value at which half the maximum response is attained. In the field of enzyme kinetics this is called the Michaelis parameter. | ### Value a numeric vector of the same length as `input`. 
It is the value of the expression `Vm*input/(K+input)`. If both the arguments `Vm` and `K` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` PurTrt <- Puromycin[ Puromycin$state == "treated", ] SSmicmen(PurTrt$conc, 200, 0.05) # response only local({ Vm <- 200; K <- 0.05 SSmicmen(PurTrt$conc, Vm, K) # response _and_ gradient }) print(getInitial(rate ~ SSmicmen(conc, Vm, K), data = PurTrt), digits = 3) ## Initial values are in fact the converged values fm1 <- nls(rate ~ SSmicmen(conc, Vm, K), data = PurTrt) summary(fm1) ## Alternative call using the subset argument fm2 <- nls(rate ~ SSmicmen(conc, Vm, K), data = Puromycin, subset = state == "treated") summary(fm2) # The same indeed: stopifnot(all.equal(coef(summary(fm1)), coef(summary(fm2)))) ## Visualize the SSmicmen() Michaelis-Menten model parametrization : xx <- seq(0, 5, length.out = 101) yy <- 5 * xx/(1+xx) stopifnot(all.equal(yy, SSmicmen(xx, Vm = 5, K = 1))) require(graphics) op <- par(mar = c(0, 0, 3.5, 0)) plot(xx, yy, type = "l", lwd = 2, ylim = c(-1/4,6), xlim = c(-1, 5), ann = FALSE, axes = FALSE, main = "Parameters in the SSmicmen model") mtext(quote(list(phi[1] == "Vm", phi[2] == "K"))) usr <- par("usr") arrows(usr[1], 0, usr[2], 0, length = 0.1, angle = 25) arrows(0, usr[3], 0, usr[4], length = 0.1, angle = 25) text(usr[2] - 0.2, 0.1, "x", adj = c(1, 0)) text( -0.1, usr[4], "y", adj = c(1, 1)) abline(h = 5, lty = 3) arrows(-0.8, c(2.1, 2.9), -0.8, c(0, 5 ), length = 0.1, angle = 25) text( -0.8, 2.5, quote(phi[1])) segments(1, 0, 1, 2.7, lty = 2, lwd = 0.75) text(1, 2.7, quote(phi[2])) par(op) ``` r None `predict` Model Predictions ---------------------------- ### Description `predict` is a generic function for predictions from the results of various model fitting functions. The function invokes particular *methods* which depend on the `[class](../../base/html/class)` of the first argument. ### Usage ``` predict (object, ...) ``` ### Arguments | | | | --- | --- | | `object` | a model object for which prediction is desired. | | `...` | additional arguments affecting the predictions produced. | ### Details Most prediction methods which are similar to those for linear models have an argument `newdata` specifying the first place to look for explanatory variables to be used for prediction. Some considerable attempts are made to match up the columns in `newdata` to those used for fitting, for example that they are of comparable types and that any factors have the same level set in the same order (or can be transformed to be so). Time series prediction methods in package stats have an argument `n.ahead` specifying how many time steps ahead to predict. Many methods have a logical argument `se.fit` saying if standard errors are to be returned. ### Value The form of the value returned by `predict` depends on the class of its argument. See the documentation of the particular methods for details of what is produced by that method. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `<predict.glm>`, `<predict.lm>`, `<predict.loess>`, `<predict.nls>`, `[predict.poly](poly)`, `[predict.princomp](princomp)`, `<predict.smooth.spline>`. [SafePrediction](makepredictcall) for prediction from (univariable) polynomial and spline fits.
For time-series prediction, `[predict.ar](ar)`, `[predict.Arima](predict.arima)`, `[predict.arima0](arima0)`, `[predict.HoltWinters](predict.holtwinters)`, `[predict.StructTS](structts)`. ### Examples ``` require(utils) ## All the "predict" methods found ## NB most of the methods in the standard packages are hidden. ## Output will depend on what namespaces are (or have been) loaded. ## IGNORE_RDIFF_BEGIN for(fn in methods("predict")) try({ f <- eval(substitute(getAnywhere(fn)$objs[[1]], list(fn = fn))) cat(fn, ":\n\t", deparse(args(f)), "\n") }, silent = TRUE) ## IGNORE_RDIFF_END ``` r None `diffinv` Discrete Integration: Inverse of Differencing -------------------------------------------------------- ### Description Computes the inverse function of the lagged differences function `[diff](../../base/html/diff)`. ### Usage ``` diffinv(x, ...) ## Default S3 method: diffinv(x, lag = 1, differences = 1, xi, ...) ## S3 method for class 'ts' diffinv(x, lag = 1, differences = 1, xi, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector, matrix, or time series. | | `lag` | a scalar lag parameter. | | `differences` | an integer representing the order of the difference. | | `xi` | a numeric vector, matrix, or time series containing the initial values for the integrals. If missing, zeros are used. | | `...` | arguments passed to or from other methods. | ### Details `diffinv` is a generic function with methods for class `"ts"` and `default` for vectors and matrices. Missing values are not handled. ### Value A numeric vector, matrix, or time series (the latter for the `"ts"` method) representing the discrete integral of `x`. ### Author(s) A. Trapletti ### See Also `[diff](../../base/html/diff)` ### Examples ``` s <- 1:10 d <- diff(s) diffinv(d, xi = 1) ``` r None `SSasymp` Self-Starting Nls Asymptotic Regression Model -------------------------------------------------------- ### Description This `selfStart` model evaluates the asymptotic regression function and its gradient. It has an `initial` attribute that will evaluate initial estimates of the parameters `Asym`, `R0`, and `lrc` for a given set of data. Note that `[SSweibull](ssweibull)()` generalizes this asymptotic model with an extra parameter. ### Usage ``` SSasymp(input, Asym, R0, lrc) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `Asym` | a numeric parameter representing the horizontal asymptote on the right side (very large values of `input`). | | `R0` | a numeric parameter representing the response when `input` is zero. | | `lrc` | a numeric parameter representing the natural logarithm of the rate constant. | ### Value a numeric vector of the same length as `input`. It is the value of the expression `Asym+(R0-Asym)*exp(-exp(lrc)*input)`. If all of the arguments `Asym`, `R0`, and `lrc` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. 
### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` Lob.329 <- Loblolly[ Loblolly$Seed == "329", ] SSasymp( Lob.329$age, 100, -8.5, -3.2 ) # response only local({ Asym <- 100 ; resp0 <- -8.5 ; lrc <- -3.2 SSasymp( Lob.329$age, Asym, resp0, lrc) # response _and_ gradient }) getInitial(height ~ SSasymp( age, Asym, resp0, lrc), data = Lob.329) ## Initial values are in fact the converged values fm1 <- nls(height ~ SSasymp( age, Asym, resp0, lrc), data = Lob.329) summary(fm1) ## Visualize the SSasymp() model parametrization : xx <- seq(-.3, 5, length.out = 101) ## Asym + (R0-Asym) * exp(-exp(lrc)* x) : yy <- 5 - 4 * exp(-xx / exp(3/4)) stopifnot( all.equal(yy, SSasymp(xx, Asym = 5, R0 = 1, lrc = -3/4)) ) require(graphics) op <- par(mar = c(0, .2, 4.1, 0)) plot(xx, yy, type = "l", axes = FALSE, ylim = c(0,5.2), xlim = c(-.3, 5), xlab = "", ylab = "", lwd = 2, main = quote("Parameters in the SSasymp model " ~ {f[phi](x) == phi[1] + (phi[2]-phi[1])*~e^{-e^{phi[3]}*~x}})) mtext(quote(list(phi[1] == "Asym", phi[2] == "R0", phi[3] == "lrc"))) usr <- par("usr") arrows(usr[1], 0, usr[2], 0, length = 0.1, angle = 25) arrows(0, usr[3], 0, usr[4], length = 0.1, angle = 25) text(usr[2] - 0.2, 0.1, "x", adj = c(1, 0)) text( -0.1, usr[4], "y", adj = c(1, 1)) abline(h = 5, lty = 3) arrows(c(0.35, 0.65), 1, c(0 , 1 ), 1, length = 0.08, angle = 25); text(0.5, 1, quote(1)) y0 <- 1 + 4*exp(-3/4) ; t.5 <- log(2) / exp(-3/4) ; AR2 <- 3 # (Asym + R0)/2 segments(c(1, 1), c( 1, y0), c(1, 0), c(y0, 1), lty = 2, lwd = 0.75) text(1.1, 1/2+y0/2, quote((phi[1]-phi[2])*e^phi[3]), adj = c(0,.5)) axis(2, at = c(1,AR2,5), labels= expression(phi[2], frac(phi[1]+phi[2],2), phi[1]), pos=0, las=1) arrows(c(.6,t.5-.6), AR2, c(0, t.5 ), AR2, length = 0.08, angle = 25) text( t.5/2, AR2, quote(t[0.5])) text( t.5 +.4, AR2, quote({f(t[0.5]) == frac(phi[1]+phi[2],2)}~{} %=>% {}~~ {t[0.5] == frac(log(2), e^{phi[3]})}), adj = c(0, 0.5)) par(op) ``` r None `Lognormal` The Log Normal Distribution ---------------------------------------- ### Description Density, distribution function, quantile function and random generation for the log normal distribution whose logarithm has mean equal to `meanlog` and standard deviation equal to `sdlog`. ### Usage ``` dlnorm(x, meanlog = 0, sdlog = 1, log = FALSE) plnorm(q, meanlog = 0, sdlog = 1, lower.tail = TRUE, log.p = FALSE) qlnorm(p, meanlog = 0, sdlog = 1, lower.tail = TRUE, log.p = FALSE) rlnorm(n, meanlog = 0, sdlog = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `meanlog, sdlog` | mean and standard deviation of the distribution on the log scale with default values of `0` and `1` respectively. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The log normal distribution has density *f(x) = 1/(√(2 π) σ x) e^-((log x - μ)^2 / (2 σ^2))* where *μ* and *σ* are the mean and standard deviation of the logarithm. The mean is *E(X) = exp(μ + 1/2 σ^2)*, the median is *med(X) = exp(μ)*, and the variance *Var(X) = exp(2\*μ + σ^2)\*(exp(σ^2) - 1)* and hence the coefficient of variation is *sqrt(exp(σ^2) - 1)* which is approximately *σ* when that is small (e.g., *σ < 1/2*). 
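The moment formulas above are easy to check by simulation; a minimal sketch (the parameter values are illustrative):

```
set.seed(1)
x <- rlnorm(1e6, meanlog = 0.5, sdlog = 0.25)
c(mean(x), exp(0.5 + 0.25^2/2))   # E(X) = exp(mu + sigma^2/2)
c(median(x), exp(0.5))            # med(X) = exp(mu)
c(var(x), exp(2*0.5 + 0.25^2) * (exp(0.25^2) - 1)) # Var(X)
```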
### Value `dlnorm` gives the density, `plnorm` gives the distribution function, `qlnorm` gives the quantile function, and `rlnorm` generates random deviates. The length of the result is determined by `n` for `rlnorm`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note The cumulative hazard *H(t) = - log(1 - F(t))* is `-plnorm(t, meanlog, sdlog, lower.tail = FALSE, log.p = TRUE)`. ### Source `dlnorm` is calculated from the definition (in ‘Details’). `[pqr]lnorm` are based on the relationship to the normal. Consequently, they model a single point mass at `exp(meanlog)` for the boundary case `sdlog = 0`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 1, chapter 14. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions, including `[dnorm](normal)` for the normal distribution. ### Examples ``` dlnorm(1) == dnorm(0) ``` r None `Uniform` The Uniform Distribution ----------------------------------- ### Description These functions provide information about the uniform distribution on the interval from `min` to `max`. `dunif` gives the density, `punif` gives the distribution function, `qunif` gives the quantile function and `runif` generates random deviates. ### Usage ``` dunif(x, min = 0, max = 1, log = FALSE) punif(q, min = 0, max = 1, lower.tail = TRUE, log.p = FALSE) qunif(p, min = 0, max = 1, lower.tail = TRUE, log.p = FALSE) runif(n, min = 0, max = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `min, max` | lower and upper limits of the distribution. Must be finite. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details If `min` or `max` are not specified they assume the default values of `0` and `1` respectively. The uniform distribution has density *f(x) = 1/(max-min)* for *min ≤ x ≤ max*. For the case of *u := min == max*, the limit case of *X == u* is assumed, although there is no density in that case and `dunif` will return `NaN` (the error condition). `runif` will not generate either of the extreme values unless `max = min` or `max-min` is small compared to `min`, and in particular not for the default arguments. ### Value `dunif` gives the density, `punif` gives the distribution function, `qunif` gives the quantile function, and `runif` generates random deviates. The length of the result is determined by `n` for `runif`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note The characteristics of output from pseudo-random number generators (such as precision and periodicity) vary widely. See `[.Random.seed](../../base/html/random)` for more information on **R**'s random number generation algorithms. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole.
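The boundary case `min == max` described in ‘Details’ can be seen directly; a small sketch of the expected behaviour:

```
dunif(0.5, min = 0.5, max = 0.5)              # NaN: no density in the limit case
punif(c(0.4, 0.5, 0.6), min = 0.5, max = 0.5) # 0 1 1: point mass at 0.5
runif(3, min = 0.5, max = 0.5)                # all equal to 0.5
```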
### See Also `[RNG](../../base/html/random)` about random number generation in **R**. [Distributions](distributions) for other standard distributions. ### Examples ``` u <- runif(20) ## The following relations always hold : punif(u) == u dunif(u) == 1 var(runif(10000)) #- ~ = 1/12 = .08333 ``` r None `scatter.smooth` Scatter Plot with Smooth Curve Fitted by Loess ---------------------------------------------------------------- ### Description Plot and add a smooth curve computed by `loess` to a scatter plot. ### Usage ``` scatter.smooth(x, y = NULL, span = 2/3, degree = 1, family = c("symmetric", "gaussian"), xlab = NULL, ylab = NULL, ylim = range(y, pred$y, na.rm = TRUE), evaluation = 50, ..., lpars = list()) loess.smooth(x, y, span = 2/3, degree = 1, family = c("symmetric", "gaussian"), evaluation = 50, ...) ``` ### Arguments | | | | --- | --- | | `x, y` | the `x` and `y` arguments provide the x and y coordinates for the plot. Any reasonable way of defining the coordinates is acceptable. See the function `[xy.coords](../../grdevices/html/xy.coords)` for details. | | `span` | smoothness parameter for `loess`. | | `degree` | degree of local polynomial used. | | `family` | if `"gaussian"` fitting is by least-squares, and if `family = "symmetric"` a re-descending M estimator is used. Can be abbreviated. | | `xlab` | label for x axis. | | `ylab` | label for y axis. | | `ylim` | the y limits of the plot. | | `evaluation` | number of points at which to evaluate the smooth curve. | | `...` | For `scatter.smooth()`, graphical parameters, passed to `plot()` only. For `loess.smooth`, control parameters passed to `<loess.control>`. | | `lpars` | a `[list](../../base/html/list)` of arguments to be passed to `[lines](../../graphics/html/lines)()`. | ### Details `loess.smooth` is an auxiliary function which evaluates the `loess` smooth at `evaluation` equally spaced points covering the range of `x`. ### Value For `scatter.smooth`, none. For `loess.smooth`, a list with two components, `x` (the grid of evaluation points) and `y` (the smoothed values at the grid points). ### See Also `<loess>`; `[smoothScatter](../../graphics/html/smoothscatter)` for scatter plots with smoothed *density* color representation. ### Examples ``` require(graphics) with(cars, scatter.smooth(speed, dist)) ## or with dotted thick smoothed line results : with(cars, scatter.smooth(speed, dist, lpars = list(col = "red", lwd = 3, lty = 3))) ```
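Since `loess.smooth` returns the evaluation grid `x` and the fitted values `y`, the smooth can also be drawn manually; a short sketch:

```
sm <- with(cars, loess.smooth(speed, dist))
plot(dist ~ speed, data = cars)
lines(sm$x, sm$y, col = "blue")
```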
r None `bartlett.test` Bartlett Test of Homogeneity of Variances ---------------------------------------------------------- ### Description Performs Bartlett's test of the null that the variances in each of the groups (samples) are the same. ### Usage ``` bartlett.test(x, ...) ## Default S3 method: bartlett.test(x, g, ...) ## S3 method for class 'formula' bartlett.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of data values, or a list of numeric data vectors representing the respective samples, or fitted linear model objects (inheriting from class `"lm"`). | | `g` | a vector or factor object giving the group for the corresponding elements of `x`. Ignored if `x` is a list. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` gives the data values and `rhs` the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details If `x` is a list, its elements are taken as the samples or fitted linear models to be compared for homogeneity of variances. In this case, the elements must either all be numeric data vectors or fitted linear model objects, `g` is ignored, and one can simply use `bartlett.test(x)` to perform the test. If the samples are not yet contained in a list, use `bartlett.test(list(x, ...))`. Otherwise, `x` must be a numeric data vector, and `g` must be a vector or factor object of the same length as `x` giving the group for the corresponding elements of `x`. ### Value A list of class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | Bartlett's K-squared test statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | the character string `"Bartlett test of homogeneity of variances"`. | | `data.name` | a character string giving the names of the data. | ### References Bartlett, M. S. (1937). Properties of sufficiency and statistical tests. *Proceedings of the Royal Society of London Series A* **160**, 268–282. doi: [10.1098/rspa.1937.0109](https://doi.org/10.1098/rspa.1937.0109). ### See Also `<var.test>` for the special case of comparing variances in two samples from normal distributions; `<fligner.test>` for a rank-based (nonparametric) *k*-sample test for homogeneity of variances; `<ansari.test>` and `<mood.test>` for two rank based two-sample tests for difference in scale. ### Examples ``` require(graphics) plot(count ~ spray, data = InsectSprays) bartlett.test(InsectSprays$count, InsectSprays$spray) bartlett.test(count ~ spray, data = InsectSprays) ``` r None `eff.aovlist` Compute Efficiencies of Multistratum Analysis of Variance ------------------------------------------------------------------------ ### Description Computes the efficiencies of fixed-effect terms in an analysis of variance model with multiple strata. ### Usage ``` eff.aovlist(aovlist) ``` ### Arguments | | | | --- | --- | | `aovlist` | The result of a call to `aov` with an `Error` term. 
| ### Details Fixed-effect terms in an analysis of variance model with multiple strata may be estimable in more than one stratum, in which case there is less than complete information in each. The efficiency for a term is the fraction of the maximum possible precision (inverse variance) obtainable by estimating in just that stratum. Under the assumption of balance, this is the same for all contrasts involving that term. This function is used to pick strata in which to estimate terms in `[model.tables.aovlist](model.tables)` and `[se.contrast.aovlist](se.contrast)`. In many cases terms will only occur in one stratum, when all the efficiencies will be one: this is detected and no further calculations are done. The calculation used requires orthogonal contrasts for each term, and will throw an error if non-orthogonal contrasts (e.g., treatment contrasts or an unbalanced design) are detected. ### Value A matrix giving for each non-pure-error stratum (row) the efficiencies for each fixed-effect term in the model. ### References Heiberger, R. M. (1989) *Computation for the Analysis of Designed Experiments*. Wiley. ### See Also `<aov>`, `[model.tables.aovlist](model.tables)`, `[se.contrast.aovlist](se.contrast)` ### Examples ``` ## An example from Yates (1932), ## a 2^3 design in 2 blocks replicated 4 times Block <- gl(8, 4) A <- factor(c(0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1, 0,1,0,1,0,1,0,1,0,1,0,1)) B <- factor(c(0,0,1,1,0,0,1,1,0,1,0,1,1,0,1,0,0,0,1,1, 0,0,1,1,0,0,1,1,0,0,1,1)) C <- factor(c(0,1,1,0,1,0,0,1,0,0,1,1,0,0,1,1,0,1,0,1, 1,0,1,0,0,0,1,1,1,1,0,0)) Yield <- c(101, 373, 398, 291, 312, 106, 265, 450, 106, 306, 324, 449, 272, 89, 407, 338, 87, 324, 279, 471, 323, 128, 423, 334, 131, 103, 445, 437, 324, 361, 302, 272) aovdat <- data.frame(Block, A, B, C, Yield) old <- getOption("contrasts") options(contrasts = c("contr.helmert", "contr.poly")) ## IGNORE_RDIFF_BEGIN (fit <- aov(Yield ~ A*B*C + Error(Block), data = aovdat)) ## IGNORE_RDIFF_END eff.aovlist(fit) options(contrasts = old) ``` r None `preplot` Pre-computations for a Plotting Object ------------------------------------------------- ### Description Compute an object to be used for plots relating to the given model object. ### Usage ``` preplot(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fitted model object. | | `...` | additional arguments for specific methods. | ### Details Only the generic function is currently provided in base **R**, but some add-on packages have methods. Principally here for S compatibility. ### Value An object set up to make a plot that describes `object`. r None `stlmethods` Methods for STL Objects ------------------------------------- ### Description Methods for objects of class `stl`, typically the result of `<stl>`. The `plot` method does a multiple figure plot with some flexibility. There are also (non-visible) `print` and `summary` methods. ### Usage ``` ## S3 method for class 'stl' plot(x, labels = colnames(X), set.pars = list(mar = c(0, 6, 0, 6), oma = c(6, 0, 4, 0), tck = -0.01, mfrow = c(nplot, 1)), main = NULL, range.bars = TRUE, ..., col.range = "light gray") ``` ### Arguments | | | | --- | --- | | `x` | `<stl>` object. | | `labels` | character of length 4 giving the names of the component time-series. | | `set.pars` | settings for `[par](../../graphics/html/par)(.)` when setting up the plot. | | `main` | plot main title. | | `range.bars` | logical indicating if each plot should have a bar at its right side which are of equal heights in user coordinates. 
| | `...` | further arguments passed to or from other methods. | | `col.range` | colour to be used for the range bars, if plotted. Note this appears after `...` and so cannot be abbreviated. | ### See Also `<plot.ts>` and `<stl>`, particularly for examples. r None `NLSstRtAsymptote` Horizontal Asymptote on the Right Side ---------------------------------------------------------- ### Description Provide an initial guess at the horizontal asymptote on the right side (i.e., large values of `x`) of the graph of `y` versus `x` from the `xy` object. Primarily used within `initial` functions for self-starting nonlinear regression models. ### Usage ``` NLSstRtAsymptote(xy) ``` ### Arguments | | | | --- | --- | | `xy` | a `sortedXyData` object | ### Value A single numeric value estimating the horizontal asymptote for large `x`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `[sortedXyData](sortedxydata)`, `[NLSstClosestX](nlsstclosestx)`, `[NLSstRtAsymptote](nlsstrtasymptote)`, `[selfStart](selfstart)` ### Examples ``` DNase.2 <- DNase[ DNase$Run == "2", ] DN.srt <- sortedXyData( expression(log(conc)), expression(density), DNase.2 ) NLSstRtAsymptote( DN.srt ) ``` r None `power.anova.test` Power Calculations for Balanced One-Way Analysis of Variance Tests -------------------------------------------------------------------------------------- ### Description Compute power of test or determine parameters to obtain target power. ### Usage ``` power.anova.test(groups = NULL, n = NULL, between.var = NULL, within.var = NULL, sig.level = 0.05, power = NULL) ``` ### Arguments | | | | --- | --- | | `groups` | Number of groups | | `n` | Number of observations (per group) | | `between.var` | Between group variance | | `within.var` | Within group variance | | `sig.level` | Significance level (Type I error probability) | | `power` | Power of test (1 minus Type II error probability) | ### Details Exactly one of the parameters `groups`, `n`, `between.var`, `power`, `within.var`, and `sig.level` must be passed as NULL, and that parameter is determined from the others. Notice that `sig.level` has non-NULL default so NULL must be explicitly passed if you want it computed. ### Value Object of class `"power.htest"`, a list of the arguments (including the computed one) augmented with `method` and `note` elements. ### Note `uniroot` is used to solve power equation for unknowns, so you may see errors from it, notably about inability to bracket the root when invalid arguments are given. ### Author(s) Claus Ekstrøm ### See Also `<anova>`, `<lm>`, `<uniroot>` ### Examples ``` power.anova.test(groups = 4, n = 5, between.var = 1, within.var = 3) # Power = 0.3535594 power.anova.test(groups = 4, between.var = 1, within.var = 3, power = .80) # n = 11.92613 ## Assume we have prior knowledge of the group means: groupmeans <- c(120, 130, 140, 150) power.anova.test(groups = length(groupmeans), between.var = var(groupmeans), within.var = 500, power = .90) # n = 15.18834 ``` r None `spec.ar` Estimate Spectral Density of a Time Series from AR Fit ----------------------------------------------------------------- ### Description Fits an AR model to `x` (or uses the existing fit) and computes (and by default plots) the spectral density of the fitted model. ### Usage ``` spec.ar(x, n.freq, order = NULL, plot = TRUE, na.action = na.fail, method = "yule-walker", ...) ``` ### Arguments | | | | --- | --- | | `x` | A univariate (not yet:or multivariate) time series or the result of a fit by `<ar>`. 
| | `n.freq` | The number of points at which to plot. | | `order` | The order of the AR model to be fitted. If omitted, the order is chosen by AIC. | | `plot` | Plot the periodogram? | | `na.action` | `NA` action function. | | `method` | `method` for `<ar>` fit. | | `...` | Graphical arguments passed to `<plot.spec>`. | ### Value An object of class `"spec"`. The result is returned invisibly if `plot` is true. ### Warning Some authors, for example Thomson (1990), warn strongly that AR spectra can be misleading. ### Note The multivariate case is not yet implemented. ### References Thomson, D.J. (1990). Time series analysis of Holocene climate data. *Philosophical Transactions of the Royal Society of London Series A*, **330**, 601–616. doi: [10.1098/rsta.1990.0041](https://doi.org/10.1098/rsta.1990.0041). <https://www.jstor.org/stable/53609>. Venables, W.N. and Ripley, B.D. (2002) *Modern Applied Statistics with S.* Fourth edition. Springer. (Especially page 402.) ### See Also `<ar>`, `<spectrum>`. ### Examples ``` require(graphics) spec.ar(lh) spec.ar(ldeaths) spec.ar(ldeaths, method = "burg") spec.ar(log(lynx)) spec.ar(log(lynx), method = "burg", add = TRUE, col = "purple") spec.ar(log(lynx), method = "mle", add = TRUE, col = "forest green") spec.ar(log(lynx), method = "ols", add = TRUE, col = "blue") ``` r None `coef` Extract Model Coefficients ---------------------------------- ### Description `coef` is a generic function which extracts model coefficients from objects returned by modeling functions. `coefficients` is an *alias* for it. ### Usage ``` coef(object, ...) coefficients(object, ...) ## Default S3 method: coef(object, complete = TRUE, ...) ## S3 method for class 'aov' coef(object, complete = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which the extraction of model coefficients is meaningful. | | `complete` | for the default (used for `lm`, etc) and `aov` methods: logical indicating if the full coefficient vector should be returned also in case of an over-determined system where some coefficients will be set to `[NA](../../base/html/na)`, see also `<alias>`. Note that the default *differs* for `<lm>()` and `<aov>()` results. | | `...` | other arguments. | ### Details All object classes which are returned by model fitting functions should provide a `coef` method or use the default one. (Note that the method is for `coef` and not `coefficients`.) The `"aov"` method does not report aliased coefficients (see `<alias>`) by default where `complete = FALSE`. The `complete` argument also exists for compatibility with `<vcov>` methods, and `coef` and `aov` methods for other classes should typically also keep the `complete = *` behavior in sync. By that, with `p <- length(coef(obj, complete = TF))`, `dim(vcov(obj, complete = TF)) == c(p,p)` will be fulfilled for both `complete` settings and the default. ### Value Coefficients extracted from the model object `object`. For standard model fitting classes this will be a named numeric vector. For `"maov"` objects (produced by `<aov>`) it will be a matrix. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `<fitted.values>` and `<residuals>` for related methods; `<glm>`, `<lm>` for model fitting.
### Examples ``` x <- 1:5; coef(lm(c(1:3, 7, 6) ~ x)) ``` r None `confint` Confidence Intervals for Model Parameters ---------------------------------------------------- ### Description Computes confidence intervals for one or more parameters in a fitted model. There is a default and a method for objects inheriting from class `"<lm>"`. ### Usage ``` confint(object, parm, level = 0.95, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fitted model object. | | `parm` | a specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered. | | `level` | the confidence level required. | | `...` | additional argument(s) for methods. | ### Details `confint` is a generic function. The default method assumes normality, and needs suitable `<coef>` and `<vcov>` methods to be available. The default method can be called directly for comparison with other methods. For objects of class `"lm"` the direct formulae based on *t* values are used. There are stub methods in package stats for classes `"glm"` and `"nls"` which call those in package [MASS](https://CRAN.R-project.org/package=MASS) (if installed): if the [MASS](https://CRAN.R-project.org/package=MASS) namespace has been loaded, its methods will be used directly. (Those methods are based on profile likelihood.) ### Value A matrix (or vector) with columns giving lower and upper confidence limits for each parameter. These will be labelled as (1-level)/2 and 1 - (1-level)/2 in % (by default 2.5% and 97.5%). ### See Also `[confint.glm](../../mass/html/confint)` and `[confint.nls](../../mass/html/confint)` in package [MASS](https://CRAN.R-project.org/package=MASS). ### Examples ``` fit <- lm(100/mpg ~ disp + hp + wt + am, data = mtcars) confint(fit) confint(fit, "wt") ## from example(glm) counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3, 1, 9); treatment <- gl(3, 3) glm.D93 <- glm(counts ~ outcome + treatment, family = poisson()) confint(glm.D93) # needs MASS to be installed confint.default(glm.D93) # based on asymptotic normality ``` r None `ks.test` Kolmogorov-Smirnov Tests ----------------------------------- ### Description Perform a one- or two-sample Kolmogorov-Smirnov test. ### Usage ``` ks.test(x, y, ..., alternative = c("two.sided", "less", "greater"), exact = NULL) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of data values. | | `y` | either a numeric vector of data values, or a character string naming a cumulative distribution function or an actual cumulative distribution function such as `pnorm`. Only continuous CDFs are valid. | | `...` | parameters of the distribution specified (as a character string) by `y`. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"` (default), `"less"`, or `"greater"`. You can specify just the initial letter of the value, but the argument name must be given in full. See ‘Details’ for the meanings of the possible values. | | `exact` | `NULL` or a logical indicating whether an exact p-value should be computed. See ‘Details’ for the meaning of `NULL`. Not available in the two-sample case for a one-sided test or if ties are present. | ### Details If `y` is numeric, a two-sample test of the null hypothesis that `x` and `y` were drawn from the same *continuous* distribution is performed. Alternatively, `y` can be a character string naming a continuous (cumulative) distribution function, or such a function. 
In this case, a one-sample test is carried out of the null that the distribution function which generated `x` is distribution `y` with parameters specified by `...`. The presence of ties always generates a warning, since continuous distributions do not generate them. If the ties arose from rounding the tests may be approximately valid, but even modest amounts of rounding can have a significant effect on the calculated statistic. Missing values are silently omitted from `x` and (in the two-sample case) `y`. The possible values `"two.sided"`, `"less"` and `"greater"` of `alternative` specify the null hypothesis that the true distribution function of `x` is equal to, not less than or not greater than the hypothesized distribution function (one-sample case) or the distribution function of `y` (two-sample case), respectively. This is a comparison of cumulative distribution functions, and the test statistic is the maximum difference in value, with the statistic in the `"greater"` alternative being *D^+ = max[F\_x(u) - F\_y(u)]*. Thus in the two-sample case `alternative = "greater"` includes distributions for which `x` is stochastically *smaller* than `y` (the CDF of `x` lies above and hence to the left of that for `y`), in contrast to `<t.test>` or `<wilcox.test>`. Exact p-values are not available for the two-sample case if one-sided or in the presence of ties. If `exact = NULL` (the default), an exact p-value is computed if the sample size is less than 100 in the one-sample case *and there are no ties*, and if the product of the sample sizes is less than 10000 in the two-sample case. Otherwise, asymptotic distributions are used whose approximations may be inaccurate in small samples. In the one-sample two-sided case, exact p-values are obtained as described in Marsaglia, Tsang & Wang (2003) (but not using the optional approximation in the right tail, so this can be slow for small p-values). The formula of Birnbaum & Tingey (1951) is used for the one-sample one-sided case. If a single-sample test is used, the parameters specified in `...` must be pre-specified and not estimated from the data. There is some more refined distribution theory for the KS test with estimated parameters (see Durbin, 1973), but that is not implemented in `ks.test`. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic. | | `p.value` | the p-value of the test. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | a character string indicating what type of test was performed. | | `data.name` | a character string giving the name(s) of the data. | ### Source The two-sided one-sample distribution comes *via* Marsaglia, Tsang and Wang (2003). ### References Z. W. Birnbaum and Fred H. Tingey (1951). One-sided confidence contours for probability distribution functions. *The Annals of Mathematical Statistics*, **22**/4, 592–596. doi: [10.1214/aoms/1177729550](https://doi.org/10.1214/aoms/1177729550). William J. Conover (1971). *Practical Nonparametric Statistics*. New York: John Wiley & Sons. Pages 295–301 (one-sample Kolmogorov test), 309–314 (two-sample Smirnov test). Durbin, J. (1973). *Distribution theory for tests based on the sample distribution function*. SIAM. George Marsaglia, Wai Wan Tsang and Jingbo Wang (2003). Evaluating Kolmogorov's distribution. *Journal of Statistical Software*, **8**/18. doi: [10.18637/jss.v008.i18](https://doi.org/10.18637/jss.v008.i18). 
### See Also `<shapiro.test>` which performs the Shapiro-Wilk test for normality. ### Examples ``` require(graphics) x <- rnorm(50) y <- runif(30) # Do x and y come from the same distribution? ks.test(x, y) # Does x come from a shifted gamma distribution with shape 3 and rate 2? ks.test(x+2, "pgamma", 3, 2) # two-sided, exact ks.test(x+2, "pgamma", 3, 2, exact = FALSE) ks.test(x+2, "pgamma", 3, 2, alternative = "gr") # test if x is stochastically larger than x2 x2 <- rnorm(50, -1) plot(ecdf(x), xlim = range(c(x, x2))) plot(ecdf(x2), add = TRUE, lty = "dashed") t.test(x, x2, alternative = "g") wilcox.test(x, x2, alternative = "g") ks.test(x, x2, alternative = "l") ```
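As a supplementary sketch (not part of the original page), the `exact = NULL` policy described under 'Details' can be observed directly in the one-sample case: with fewer than 100 untied observations the default p-value is exact, and forcing `exact = FALSE` switches to the asymptotic approximation. The data here are illustrative only.

```
## n = 30 < 100 and no ties: the default computes an exact p-value
set.seed(1)
z <- rnorm(30)
p.exact <- ks.test(z, "pnorm")$p.value
p.asymp <- ks.test(z, "pnorm", exact = FALSE)$p.value
## the two p-values agree closely but are not identical
c(exact = p.exact, asymptotic = p.asymp)
```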
r None `complete.cases` Find Complete Cases ------------------------------------- ### Description Return a logical vector indicating which cases are complete, i.e., have no missing values. ### Usage ``` complete.cases(...) ``` ### Arguments | | | | --- | --- | | `...` | a sequence of vectors, matrices and data frames. | ### Value A logical vector specifying which observations/rows have no missing values across the entire sequence. ### Note A current limitation of this function is that it uses low level functions to determine lengths and missingness, ignoring the class. This will lead to spurious errors when some columns have classes with `[length](../../base/html/length)` or `[is.na](../../base/html/na)` methods, for example `"[POSIXlt](../../base/html/datetimeclasses)"`, as described in [PR#16648](https://bugs.R-project.org/bugzilla3/show_bug.cgi?id=16648). ### See Also `[is.na](../../base/html/na)`, `[na.omit](na.fail)`, `<na.fail>`. ### Examples ``` x <- airquality[, -1] # x is a regression design matrix y <- airquality[, 1] # y is the corresponding response stopifnot(complete.cases(y) != is.na(y)) ok <- complete.cases(x, y) sum(!ok) # how many are not "ok" ? x <- x[ok,] y <- y[ok] ``` r None `constrOptim` Linearly Constrained Optimization ------------------------------------------------ ### Description Minimise a function subject to linear inequality constraints using an adaptive barrier algorithm. ### Usage ``` constrOptim(theta, f, grad, ui, ci, mu = 1e-04, control = list(), method = if(is.null(grad)) "Nelder-Mead" else "BFGS", outer.iterations = 100, outer.eps = 1e-05, ..., hessian = FALSE) ``` ### Arguments | | | | --- | --- | | `theta` | numeric (vector) starting value (of length *p*): must be in the feasible region. | | `f` | function to minimise (see below). | | `grad` | gradient of `f` (a `[function](../../base/html/function)` as well), or `NULL` (see below). | | `ui` | constraint matrix (*k x p*), see below. | | `ci` | constraint vector of length *k* (see below). | | `mu` | (Small) tuning parameter. | | `control, method, hessian` | passed to `<optim>`. | | `outer.iterations` | iterations of the barrier algorithm. | | `outer.eps` | non-negative number; the relative convergence tolerance of the barrier algorithm. | | `...` | Other named arguments to be passed to `f` and `grad`: needs to be passed through `<optim>` so should not match its argument names. | ### Details The feasible region is defined by `ui %*% theta - ci >= 0`. The starting value must be in the interior of the feasible region, but the minimum may be on the boundary. A logarithmic barrier is added to enforce the constraints and then `<optim>` is called. The barrier function is chosen so that the objective function should decrease at each outer iteration. Minima in the interior of the feasible region are typically found quite quickly, but a substantial number of outer iterations may be needed for a minimum on the boundary. The tuning parameter `mu` multiplies the barrier term. Its precise value is often relatively unimportant. As `mu` increases the augmented objective function becomes closer to the original objective function but also less smooth near the boundary of the feasible region. Any `optim` method that permits infinite values for the objective function may be used (currently all but "L-BFGS-B"). The objective function `f` takes as first argument the vector of parameters over which minimisation is to take place. It should return a scalar result. 
Optional arguments `...` will be passed to `optim` and then (if not used by `optim`) to `f`. As with `optim`, the default is to minimise, but maximisation can be performed by setting `control$fnscale` to a negative value. The gradient function `grad` must be supplied except with `method = "Nelder-Mead"`. It should take arguments matching those of `f` and return a vector containing the gradient. ### Value As for `<optim>`, but with two extra components: `barrier.value` giving the value of the barrier function at the optimum and `outer.iterations` gives the number of outer iterations (calls to `optim`). The `counts` component contains the *sum* of all `<optim>()$counts`. ### References K. Lange *Numerical Analysis for Statisticians.* Springer 2001, p185ff ### See Also `<optim>`, especially `method = "L-BFGS-B"` which does box-constrained optimisation. ### Examples ``` ## from optim fr <- function(x) { ## Rosenbrock Banana function x1 <- x[1] x2 <- x[2] 100 * (x2 - x1 * x1)^2 + (1 - x1)^2 } grr <- function(x) { ## Gradient of 'fr' x1 <- x[1] x2 <- x[2] c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1), 200 * (x2 - x1 * x1)) } optim(c(-1.2,1), fr, grr) #Box-constraint, optimum on the boundary constrOptim(c(-1.2,0.9), fr, grr, ui = rbind(c(-1,0), c(0,-1)), ci = c(-1,-1)) # x <= 0.9, y - x > 0.1 constrOptim(c(.5,0), fr, grr, ui = rbind(c(-1,0), c(1,-1)), ci = c(-0.9,0.1)) ## Solves linear and quadratic programming problems ## but needs a feasible starting value # # from example(solve.QP) in 'quadprog' # no derivative fQP <- function(b) {-sum(c(0,5,0)*b)+0.5*sum(b*b)} Amat <- matrix(c(-4,-3,0,2,1,0,0,-2,1), 3, 3) bvec <- c(-8, 2, 0) constrOptim(c(2,-1,-1), fQP, NULL, ui = t(Amat), ci = bvec) # derivative gQP <- function(b) {-c(0, 5, 0) + b} constrOptim(c(2,-1,-1), fQP, gQP, ui = t(Amat), ci = bvec) ## Now with maximisation instead of minimisation hQP <- function(b) {sum(c(0,5,0)*b)-0.5*sum(b*b)} constrOptim(c(2,-1,-1), hQP, NULL, ui = t(Amat), ci = bvec, control = list(fnscale = -1)) ``` r None `glm` Fitting Generalized Linear Models ---------------------------------------- ### Description `glm` is used to fit generalized linear models, specified by giving a symbolic description of the linear predictor and a description of the error distribution. ### Usage ``` glm(formula, family = gaussian, data, weights, subset, na.action, start = NULL, etastart, mustart, offset, control = list(...), model = TRUE, method = "glm.fit", x = FALSE, y = TRUE, singular.ok = TRUE, contrasts = NULL, ...) glm.fit(x, y, weights = rep.int(1, nobs), start = NULL, etastart = NULL, mustart = NULL, offset = rep.int(0, nobs), family = gaussian(), control = list(), intercept = TRUE, singular.ok = TRUE) ## S3 method for class 'glm' weights(object, type = c("prior", "working"), ...) ``` ### Arguments | | | | --- | --- | | `formula` | an object of class `"<formula>"` (or one that can be coerced to that class): a symbolic description of the model to be fitted. The details of model specification are given under ‘Details’. | | `family` | a description of the error distribution and link function to be used in the model. For `glm` this can be a character string naming a family function, a family function or the result of a call to a family function. For `glm.fit` only the third option is supported. (See `<family>` for details of family functions.) | | `data` | an optional data frame, list or environment (or object coercible by `[as.data.frame](../../base/html/as.data.frame)` to a data frame) containing the variables in the model. 
If not found in `data`, the variables are taken from `environment(formula)`, typically the environment from which `glm` is called. | | `weights` | an optional vector of ‘prior weights’ to be used in the fitting process. Should be `NULL` or a numeric vector. | | `subset` | an optional vector specifying a subset of observations to be used in the fitting process. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is set by the `na.action` setting of `[options](../../base/html/options)`, and is `<na.fail>` if that is unset. The ‘factory-fresh’ default is `[na.omit](na.fail)`. Another possible value is `NULL`, no action. Value `[na.exclude](na.fail)` can be useful. | | `start` | starting values for the parameters in the linear predictor. | | `etastart` | starting values for the linear predictor. | | `mustart` | starting values for the vector of means. | | `offset` | this can be used to specify an *a priori* known component to be included in the linear predictor during fitting. This should be `NULL` or a numeric vector of length equal to the number of cases. One or more `<offset>` terms can be included in the formula instead or as well, and if more than one is specified their sum is used. See `[model.offset](model.extract)`. | | `control` | a list of parameters for controlling the fitting process. For `glm.fit` this is passed to `<glm.control>`. | | `model` | a logical value indicating whether *model frame* should be included as a component of the returned value. | | `method` | the method to be used in fitting the model. The default method `"glm.fit"` uses iteratively reweighted least squares (IWLS): the alternative `"model.frame"` returns the model frame and does no fitting. User-supplied fitting functions can be supplied either as a function or a character string naming a function, with a function which takes the same arguments as `glm.fit`. If specified as a character string it is looked up from within the stats namespace. | | `x, y` | For `glm`: logical values indicating whether the response vector and model matrix used in the fitting process should be returned as components of the returned value. For `glm.fit`: `x` is a design matrix of dimension `n * p`, and `y` is a vector of observations of length `n`. | | `singular.ok` | logical; if `FALSE` a singular fit is an error. | | `contrasts` | an optional list. See the `contrasts.arg` of `model.matrix.default`. | | `intercept` | logical. Should an intercept be included in the *null* model? | | `object` | an object inheriting from class `"glm"`. | | `type` | character, partial matching allowed. Type of weights to extract from the fitted model object. Can be abbreviated. | | `...` | For `glm`: arguments to be used to form the default `control` argument if it is not supplied directly. For `weights`: further arguments passed to or from other methods. | ### Details A typical predictor has the form `response ~ terms` where `response` is the (numeric) response vector and `terms` is a series of terms which specifies a linear predictor for `response`. For `binomial` and `quasibinomial` families the response can also be specified as a `[factor](../../base/html/factor)` (when the first level denotes failure and all others success) or as a two-column matrix with the columns giving the numbers of successes and failures. A terms specification of the form `first + second` indicates all the terms in `first` together with all the terms in `second` with any duplicates removed. 
A specification of the form `first:second` indicates the set of terms obtained by taking the interactions of all terms in `first` with all terms in `second`. The specification `first*second` indicates the *cross* of `first` and `second`. This is the same as `first + second + first:second`. The terms in the formula will be re-ordered so that main effects come first, followed by the interactions, all second-order, all third-order and so on: to avoid this pass a `terms` object as the formula. Non-`NULL` `weights` can be used to indicate that different observations have different dispersions (with the values in `weights` being inversely proportional to the dispersions); or equivalently, when the elements of `weights` are positive integers *w\_i*, that each response *y\_i* is the mean of *w\_i* unit-weight observations. For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes: they would rarely be used for a Poisson GLM. `glm.fit` is the workhorse function: it is not normally called directly but can be more efficient where the response vector, design matrix and family have already been calculated. If more than one of `etastart`, `start` and `mustart` is specified, the first in the list will be used. It is often advisable to supply starting values for a `[quasi](family)` family, and also for families with unusual links such as `gaussian("log")`. All of `weights`, `subset`, `offset`, `etastart` and `mustart` are evaluated in the same way as variables in `formula`, that is first in `data` and then in the environment of `formula`. For the background to warning messages about ‘fitted probabilities numerically 0 or 1 occurred’ for binomial GLMs, see Venables & Ripley (2002, pp. 197–8). ### Value `glm` returns an object of class inheriting from `"glm"` which inherits from the class `"lm"`. See later in this section. If a non-standard `method` is used, the object will also inherit from the class (if any) returned by that function. The function `[summary](../../base/html/summary)` (i.e., `<summary.glm>`) can be used to obtain or print a summary of the results and the function `<anova>` (i.e., `<anova.glm>`) to produce an analysis of variance table. The generic accessor functions `[coefficients](coef)`, `effects`, `fitted.values` and `residuals` can be used to extract various useful features of the value returned by `glm`. `weights` extracts a vector of weights, one for each case in the fit (after subsetting and `na.action`). An object of class `"glm"` is a list containing at least the following components: | | | | --- | --- | | `coefficients` | a named vector of coefficients | | `residuals` | the *working* residuals, that is the residuals in the final iteration of the IWLS fit. Since cases with zero weights are omitted, their working residuals are `NA`. | | `fitted.values` | the fitted mean values, obtained by transforming the linear predictors by the inverse of the link function. | | `rank` | the numeric rank of the fitted linear model. | | `family` | the `<family>` object used. | | `linear.predictors` | the linear fit on link scale. | | `deviance` | up to a constant, minus twice the maximized log-likelihood. Where sensible, the constant is chosen so that a saturated model has deviance zero. | | `aic` | A version of Akaike's *An Information Criterion*, minus twice the maximized log-likelihood plus twice the number of parameters, computed via the `aic` component of the family. 
For binomial and Poisson families the dispersion is fixed at one and the number of parameters is the number of coefficients. For gaussian, Gamma and inverse gaussian families the dispersion is estimated from the residual deviance, and the number of parameters is the number of coefficients plus one. For a gaussian family the MLE of the dispersion is used so this is a valid value of AIC, but for Gamma and inverse gaussian families it is not. For families fitted by quasi-likelihood the value is `NA`. | | `null.deviance` | The deviance for the null model, comparable with `deviance`. The null model will include the offset, and an intercept if there is one in the model. Note that this will be incorrect if the link function depends on the data other than through the fitted mean: specify a zero offset to force a correct calculation. | | `iter` | the number of iterations of IWLS used. | | `weights` | the *working* weights, that is the weights in the final iteration of the IWLS fit. | | `prior.weights` | the weights initially supplied, a vector of `1`s if none were. | | `df.residual` | the residual degrees of freedom. | | `df.null` | the residual degrees of freedom for the null model. | | `y` | if requested (the default) the `y` vector used. (It is a vector even for a binomial model.) | | `x` | if requested, the model matrix. | | `model` | if requested (the default), the model frame. | | `converged` | logical. Was the IWLS algorithm judged to have converged? | | `boundary` | logical. Is the fitted value on the boundary of the attainable values? | | `call` | the matched call. | | `formula` | the formula supplied. | | `terms` | the `<terms>` object used. | | `data` | the `data` argument. | | `offset` | the offset vector used. | | `control` | the value of the `control` argument used. | | `method` | the name of the fitter function used (when provided as a `[character](../../base/html/character)` string to `glm()`) or the fitter `[function](../../base/html/function)` (when provided as that). | | `contrasts` | (where relevant) the contrasts used. | | `xlevels` | (where relevant) a record of the levels of the factors used in fitting. | | `na.action` | (where relevant) information returned by `<model.frame>` on the special handling of `NA`s. | In addition, non-empty fits will have components `qr`, `R` and `effects` relating to the final weighted linear fit. Objects of class `"glm"` are normally of class `c("glm", "lm")`, that is inherit from class `"lm"`, and well-designed methods for class `"lm"` will be applied to the weighted linear model at the final iteration of IWLS. However, care is needed, as extractor functions for class `"glm"` such as `<residuals>` and `weights` do **not** just pick out the component of the fit with the same name. If a `[binomial](family)` `glm` model was specified by giving a two-column response, the weights returned by `prior.weights` are the total numbers of cases (factored by the supplied case weights) and the component `y` of the result is the proportion of successes. ### Fitting functions The argument `method` serves two purposes. One is to allow the model frame to be recreated with no fitting. The other is to allow the default fitting function `glm.fit` to be replaced by a function which takes the same arguments and uses a different fitting algorithm. If `glm.fit` is supplied as a character string it is used to search for a function of that name, starting in the stats namespace.
The class of the object returned by the fitter (if any) will be prepended to the class returned by `glm`. ### Author(s) The original **R** implementation of `glm` was written by Simon Davies working for Ross Ihaka at the University of Auckland, but has since been extensively re-written by members of the R Core team. The design was inspired by the S function of the same name described in Hastie & Pregibon (1992). ### References Dobson, A. J. (1990) *An Introduction to Generalized Linear Models.* London: Chapman and Hall. Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. McCullagh P. and Nelder, J. A. (1989) *Generalized Linear Models.* London: Chapman and Hall. Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* New York: Springer. ### See Also `<anova.glm>`, `<summary.glm>`, etc. for `glm` methods, and the generic functions `<anova>`, `[summary](../../base/html/summary)`, `<effects>`, `<fitted.values>`, and `<residuals>`. `<lm>` for non-generalized *linear* models (which SAS calls GLMs, for ‘general’ linear models). `<loglin>` and `[loglm](../../mass/html/loglm)` (package [MASS](https://CRAN.R-project.org/package=MASS)) for fitting log-linear models (which binomial and Poisson GLMs are) to contingency tables. `bigglm` in package [biglm](https://CRAN.R-project.org/package=biglm) for an alternative way to fit GLMs to large datasets (especially those with many cases). `[esoph](../../datasets/html/esoph)`, `[infert](../../datasets/html/infert)` and `<predict.glm>` have examples of fitting binomial glms. ### Examples ``` ## Dobson (1990) Page 93: Randomized Controlled Trial : counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3,1,9) treatment <- gl(3,3) data.frame(treatment, outcome, counts) # showing data glm.D93 <- glm(counts ~ outcome + treatment, family = poisson()) anova(glm.D93) summary(glm.D93) ## Computing AIC [in many ways]: (A0 <- AIC(glm.D93)) (ll <- logLik(glm.D93)) A1 <- -2*c(ll) + 2*attr(ll, "df") A2 <- glm.D93$family$aic(counts, mu=fitted(glm.D93), wt=1) + 2 * length(coef(glm.D93)) stopifnot(exprs = { all.equal(A0, A1) all.equal(A1, A2) all.equal(A1, glm.D93$aic) }) ## an example with offsets from Venables & Ripley (2002, p.189) utils::data(anorexia, package = "MASS") anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian, data = anorexia) summary(anorex.1) # A Gamma example, from McCullagh & Nelder (1989, pp. 300-2) clotting <- data.frame( u = c(5,10,15,20,30,40,60,80,100), lot1 = c(118,58,42,35,27,25,21,19,18), lot2 = c(69,35,26,21,18,16,13,12,12)) summary(glm(lot1 ~ log(u), data = clotting, family = Gamma)) summary(glm(lot2 ~ log(u), data = clotting, family = Gamma)) ## Aliased ("S"ingular) -> 1 NA coefficient (fS <- glm(lot2 ~ log(u) + log(u^2), data = clotting, family = Gamma)) tools::assertError(update(fS, singular.ok=FALSE), verbose=interactive()) ## -> .. "singular fit encountered" ## Not run: ## for an example of the use of a terms object as a formula demo(glm.vr) ## End(Not run) ```
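As a minimal sketch (not from the original page) of the two-column binomial response described under 'Details': the columns give the numbers of successes and failures, and the prior weights then become the total numbers of trials. The dose-response data below are invented for illustration.

```
## two-column response: cbind(successes, failures)
dose <- c(1, 2, 3, 4)
succ <- c(3, 8, 14, 18)
fail <- 20 - succ            # 20 trials per dose
fit <- glm(cbind(succ, fail) ~ dose, family = binomial)
## prior weights are the total numbers of trials (here 20 per dose),
## and the fitted values are proportions of successes
fit$prior.weights
fitted(fit)
```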
r None `start` Encode the Terminal Times of Time Series ------------------------------------------------- ### Description Extract and encode the times the first and last observations were taken. Provided only for compatibility with S version 2. ### Usage ``` start(x, ...) end(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | a univariate or multivariate time-series, or a vector or matrix. | | `...` | extra arguments for future methods. | ### Details These are generic functions, which will use the `<tsp>` attribute of `x` if it exists. Their default methods decode the start time from the original time units, so that for a monthly series `1995.5` is represented as `c(1995, 7)`. For a series of frequency `f`, time `n+i/f` is presented as `c(n, i+1)` (even for `i = 0` and `f = 1`). ### Warning The representation used by `start` and `end` has no meaning unless the frequency is supplied. ### See Also `<ts>`, `<time>`, `<tsp>`. r None `order.dendrogram` Ordering or Labels of the Leaves in a Dendrogram -------------------------------------------------------------------- ### Description These functions return the order (index) or the `"label"` attribute for the leaves in a dendrogram. These indices can then be used to access the appropriate components of any additional data. ### Usage ``` order.dendrogram(x) ## S3 method for class 'dendrogram' labels(object, ...) ``` ### Arguments | | | | --- | --- | | `x, object` | a dendrogram (see `[as.dendrogram](dendrogram)`). | | `...` | additional arguments | ### Details The indices or labels for the leaves in left to right order are retrieved. ### Value A vector with length equal to the number of leaves in the dendrogram is returned. From `r <- order.dendrogram()`, each element is the index into the original data (from which the dendrogram was computed). ### Author(s) R. Gentleman (`order.dendrogram`) and Martin Maechler (`labels.dendrogram`). ### See Also `[reorder](reorder.factor)`, `<dendrogram>`. ### Examples ``` set.seed(123) x <- rnorm(10) hc <- hclust(dist(x)) hc$order dd <- as.dendrogram(hc) order.dendrogram(dd) ## the same : stopifnot(hc$order == order.dendrogram(dd)) d2 <- as.dendrogram(hclust(dist(USArrests))) labels(d2) ## in this case the same as stopifnot(identical(labels(d2), rownames(USArrests)[order.dendrogram(d2)])) ``` r None `predict.smooth.spline` Predict from Smoothing Spline Fit ---------------------------------------------------------- ### Description Predict a smoothing spline fit at new points, return the derivative if desired. The predicted fit is linear beyond the original data. ### Usage ``` ## S3 method for class 'smooth.spline' predict(object, x, deriv = 0, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fit from `smooth.spline`. | | `x` | the new values of x. | | `deriv` | integer; the order of the derivative required. | | `...` | further arguments passed to or from other methods. | ### Value A list with components | | | | --- | --- | | `x` | The input `x`. | | `y` | The fitted values or derivatives at `x`.
| ### See Also `<smooth.spline>` ### Examples ``` require(graphics) attach(cars) cars.spl <- smooth.spline(speed, dist, df = 6.4) ## "Proof" that the derivatives are okay, by comparing with approximation diff.quot <- function(x, y) { ## Difference quotient (central differences where available) n <- length(x); i1 <- 1:2; i2 <- (n-1):n c(diff(y[i1]) / diff(x[i1]), (y[-i1] - y[-i2]) / (x[-i1] - x[-i2]), diff(y[i2]) / diff(x[i2])) } xx <- unique(sort(c(seq(0, 30, by = .2), kn <- unique(speed)))) i.kn <- match(kn, xx) # indices of knots within xx op <- par(mfrow = c(2,2)) plot(speed, dist, xlim = range(xx), main = "Smooth.spline & derivatives") lines(pp <- predict(cars.spl, xx), col = "red") points(kn, pp$y[i.kn], pch = 3, col = "dark red") mtext("s(x)", col = "red") for(d in 1:3){ n <- length(pp$x) plot(pp$x, diff.quot(pp$x,pp$y), type = "l", xlab = "x", ylab = "", col = "blue", col.main = "red", main = paste0("s" ,paste(rep("'", d), collapse = ""), "(x)")) mtext("Difference quotient approx.(last)", col = "blue") lines(pp <- predict(cars.spl, xx, deriv = d), col = "red") points(kn, pp$y[i.kn], pch = 3, col = "dark red") abline(h = 0, lty = 3, col = "gray") } detach(); par(op) ``` r None `mauchly.test` Mauchly's Test of Sphericity -------------------------------------------- ### Description Tests whether a Wishart-distributed covariance matrix (or transformation thereof) is proportional to a given matrix. ### Usage ``` mauchly.test(object, ...) ## S3 method for class 'mlm' mauchly.test(object, ...) ## S3 method for class 'SSD' mauchly.test(object, Sigma = diag(nrow = p), T = Thin.row(proj(M) - proj(X)), M = diag(nrow = p), X = ~0, idata = data.frame(index = seq_len(p)), ...) ``` ### Arguments | | | | --- | --- | | `object` | object of class `SSD` or `mlm`. | | `Sigma` | matrix to be proportional to. | | `T` | transformation matrix. By default computed from `M` and `X`. | | `M` | formula or matrix describing the outer projection (see below). | | `X` | formula or matrix describing the inner projection (see below). | | `idata` | data frame describing intra-block design. | | `...` | arguments to be passed to or from other methods. | ### Details Mauchly's test tests whether a covariance matrix can be assumed to be proportional to a given matrix. This is a generic function with methods for classes `"mlm"` and `"[SSD](ssd)"`. The basic method is for objects of class `SSD`; the method for `mlm` objects just extracts the SSD matrix and invokes the corresponding method with the same options and arguments. The `T` argument is used to transform the observations prior to testing. This typically involves transformation to intra-block differences, but more complicated within-block designs can be encountered, making more elaborate transformations necessary. A matrix `T` can be given directly or specified as the difference between two projections onto the spaces spanned by `M` and `X`, which in turn can be given as matrices or as model formulas with respect to `idata` (the tests will be invariant to parametrization of the quotient space `M/X`). The common use of this test is in repeated measurements designs, with `X = ~1`. This is almost, but not quite the same as testing for compound symmetry in the untransformed covariance matrix.
Notice that the defaults involve `p`, which is calculated internally as the dimension of the SSD matrix, and a couple of hidden functions in the stats namespace, namely `proj` which calculates projection matrices from design matrices or model formulas and `Thin.row` which removes linearly dependent rows from a matrix until it has full row rank. ### Value An object of class `"htest"` ### Note The p-value differs slightly from that of SAS because a second order term is included in the asymptotic approximation in **R**. ### References T. W. Anderson (1958). *An Introduction to Multivariate Statistical Analysis.* Wiley. ### See Also `[SSD](ssd)`, `<anova.mlm>`, `[rWishart](rwishart)` ### Examples ``` utils::example(SSD) # Brings in the mlmfit and reacttime objects ### traditional test of intrasubj. contrasts mauchly.test(mlmfit, X = ~1) ### tests using intra-subject 3x2 design idata <- data.frame(deg = gl(3, 1, 6, labels = c(0,4,8)), noise = gl(2, 3, 6, labels = c("A","P"))) mauchly.test(mlmfit, X = ~ deg + noise, idata = idata) mauchly.test(mlmfit, M = ~ deg + noise, X = ~ noise, idata = idata) ``` r None `dummy.coef` Extract Coefficients in Original Coding ----------------------------------------------------- ### Description This extracts coefficients in terms of the original levels of the factors rather than the coded variables. ### Usage ``` dummy.coef(object, ...) ## S3 method for class 'lm' dummy.coef(object, use.na = FALSE, ...) ## S3 method for class 'aovlist' dummy.coef(object, use.na = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | a linear model fit. | | `use.na` | logical flag for coefficients in a singular model. If `use.na` is true, undetermined coefficients will be missing; if false they will get one possible value. | | `...` | arguments passed to or from other methods. | ### Details A fitted linear model has coefficients for the contrasts of the factor terms, usually one less in number than the number of levels. This function re-expresses the coefficients in the original coding; as the coefficients will have been fitted in the reduced basis, any implied constraints (e.g., zero sum for `contr.helmert` or `contr.sum`) will be respected. There will be little point in using `dummy.coef` for `contr.treatment` contrasts, as the missing coefficients are by definition zero. The method used has some limitations, and will give incomplete results for terms such as `poly(x, 2)`. However, it is adequate for its main purpose, `aov` models. ### Value A list giving for each term the values of the coefficients. For a multistratum `aov` model, such a list for each stratum. ### Warning This function is intended for human inspection of the output: it should not be used for calculations. Use coded variables for all calculations. The results differ from S for singular values, where S can be incorrect. ### See Also `<aov>`, `<model.tables>` ### Examples ``` options(contrasts = c("contr.helmert", "contr.poly")) ## From Venables and Ripley (2002) p.165. npk.aov <- aov(yield ~ block + N*P*K, npk) dummy.coef(npk.aov) npk.aovE <- aov(yield ~ N*P*K + Error(block), npk) dummy.coef(npk.aovE) ``` r None `interaction.plot` Two-way Interaction Plot -------------------------------------------- ### Description Plots the mean (or other summary) of the response for two-way combinations of factors, thereby illustrating possible interactions.
### Usage ``` interaction.plot(x.factor, trace.factor, response, fun = mean, type = c("l", "p", "b", "o", "c"), legend = TRUE, trace.label = deparse1(substitute(trace.factor)), fixed = FALSE, xlab = deparse1(substitute(x.factor)), ylab = ylabel, ylim = range(cells, na.rm = TRUE), lty = nc:1, col = 1, pch = c(1:9, 0, letters), xpd = NULL, leg.bg = par("bg"), leg.bty = "n", xtick = FALSE, xaxt = par("xaxt"), axes = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `x.factor` | a factor whose levels will form the x axis. | | `trace.factor` | another factor whose levels will form the traces. | | `response` | a numeric variable giving the response | | `fun` | the function to compute the summary. Should return a single real value. | | `type` | the type of plot (see `[plot.default](../../graphics/html/plot.default)`): lines or points or both. | | `legend` | logical. Should a legend be included? | | `trace.label` | overall label for the legend. | | `fixed` | logical. Should the legend be in the order of the levels of `trace.factor` or in the order of the traces at their right-hand ends? | | `xlab,ylab` | the x and y label of the plot each with a sensible default. | | `ylim` | numeric of length 2 giving the y limits for the plot. | | `lty` | line type for the lines drawn, with sensible default. | | `col` | the color to be used for plotting. | | `pch` | a vector of plotting symbols or characters, with sensible default. | | `xpd` | determines clipping behaviour for the `[legend](../../graphics/html/legend)` used, see `[par](../../graphics/html/par)(xpd)`. Per default, the legend is *not* clipped at the figure border. | | `leg.bg, leg.bty` | arguments passed to `[legend](../../graphics/html/legend)()`. | | `xtick` | logical. Should tick marks be used on the x axis? | | `xaxt, axes, ...` | graphics parameters to be passed to the plotting routines. | ### Details By default the levels of `x.factor` are plotted on the x axis in their given order, with extra space left at the right for the legend (if specified). If `x.factor` is an ordered factor and the levels are numeric, these numeric values are used for the x axis. The response and hence its summary can contain missing values. If so, the missing values and the line segments joining them are omitted from the plot (and this can be somewhat disconcerting). The graphics parameters `xlab`, `ylab`, `ylim`, `lty`, `col` and `pch` are given suitable defaults (and `xlim` and `xaxs` are set and cannot be overridden). The defaults are to cycle through the line types, use the foreground colour, and to use the symbols 1:9, 0, and the lowercase letters to plot the traces. ### Note Some of the argument names and the precise behaviour are chosen for S-compatibility. ### References Chambers, J. M., Freeny, A. and Heiberger, R. M. (1992) *Analysis of variance; designed experiments.* Chapter 5 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
### Examples ``` require(graphics) with(ToothGrowth, { interaction.plot(dose, supp, len, fixed = TRUE) dose <- ordered(dose) interaction.plot(dose, supp, len, fixed = TRUE, col = 2:3, leg.bty = "o") interaction.plot(dose, supp, len, fixed = TRUE, col = 2:3, type = "p") }) with(OrchardSprays, { interaction.plot(treatment, rowpos, decrease) interaction.plot(rowpos, treatment, decrease, cex.axis = 0.8) ## order the rows by their mean effect rowpos <- factor(rowpos, levels = sort.list(tapply(decrease, rowpos, mean))) interaction.plot(rowpos, treatment, decrease, col = 2:9, lty = 1) }) ``` r None `dist` Distance Matrix Computation ----------------------------------- ### Description This function computes and returns the distance matrix computed by using the specified distance measure to compute the distances between the rows of a data matrix. ### Usage ``` dist(x, method = "euclidean", diag = FALSE, upper = FALSE, p = 2) as.dist(m, diag = FALSE, upper = FALSE) ## Default S3 method: as.dist(m, diag = FALSE, upper = FALSE) ## S3 method for class 'dist' print(x, diag = NULL, upper = NULL, digits = getOption("digits"), justify = "none", right = TRUE, ...) ## S3 method for class 'dist' as.matrix(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric matrix, data frame or `"dist"` object. | | `method` | the distance measure to be used. This must be one of `"euclidean"`, `"maximum"`, `"manhattan"`, `"canberra"`, `"binary"` or `"minkowski"`. Any unambiguous substring can be given. | | `diag` | logical value indicating whether the diagonal of the distance matrix should be printed by `print.dist`. | | `upper` | logical value indicating whether the upper triangle of the distance matrix should be printed by `print.dist`. | | `p` | The power of the Minkowski distance. | | `m` | An object with distance information to be converted to a `"dist"` object. For the default method, a `"dist"` object, or a matrix (of distances) or an object which can be coerced to such a matrix using `[as.matrix](../../base/html/matrix)()`. (Only the lower triangle of the matrix is used, the rest is ignored). | | `digits, justify` | passed to `[format](../../base/html/format)` inside of `print()`. | | `right, ...` | further arguments, passed to other methods. | ### Details Available distance measures are (written for two vectors *x* and *y*): `euclidean`: Usual distance between the two vectors (2 norm aka *L\_2*), *sqrt(sum((x\_i - y\_i)^2))*. `maximum`: Maximum distance between two components of *x* and *y* (supremum norm) `manhattan`: Absolute distance between the two vectors (1 norm aka *L\_1*). `canberra`: *sum(|x\_i - y\_i| / (|x\_i| + |y\_i|))*. Terms with zero numerator and denominator are omitted from the sum and treated as if the values were missing. This is intended for non-negative values (e.g., counts), in which case the denominator can be written in various equivalent ways; Originally, **R** used *x\_i + y\_i*, then from 1998 to 2017, *|x\_i + y\_i|*, and then the correct *|x\_i| + |y\_i|*. `binary`: (aka *asymmetric binary*): The vectors are regarded as binary bits, so non-zero elements are ‘on’ and zero elements are ‘off’. The distance is the *proportion* of bits in which only one is on amongst those in which at least one is on. `minkowski`: The *p* norm, the *p*th root of the sum of the *p*th powers of the differences of the components. Missing values are allowed, and are excluded from all computations involving the rows within which they occur. 
Further, when `Inf` values are involved, all pairs of values are excluded when their contribution to the distance gave `NaN` or `NA`. If some columns are excluded in calculating a Euclidean, Manhattan, Canberra or Minkowski distance, the sum is scaled up proportionally to the number of columns used. If all pairs are excluded when calculating a particular distance, the value is `NA`. The `"dist"` method of `as.matrix()` and `as.dist()` can be used for conversion between objects of class `"dist"` and conventional distance matrices. `as.dist()` is a generic function. Its default method handles objects inheriting from class `"dist"`, or coercible to matrices using `[as.matrix](../../base/html/matrix)()`. Support for classes representing distances (also known as dissimilarities) can be added by providing an `[as.matrix](../../base/html/matrix)()` or, more directly, an `as.dist` method for such a class. ### Value `dist` returns an object of class `"dist"`. The lower triangle of the distance matrix stored by columns in a vector, say `do`. If `n` is the number of observations, i.e., `n <- attr(do, "Size")`, then for *i < j ≤ n*, the dissimilarity between (row) i and j is `do[n*(i-1) - i*(i-1)/2 + j-i]`. The length of the vector is *n\*(n-1)/2*, i.e., of order *n^2*. The object has the following attributes (besides `"class"` equal to `"dist"`): | | | | --- | --- | | `Size` | integer, the number of observations in the dataset. | | `Labels` | optionally, contains the labels, if any, of the observations of the dataset. | | `Diag, Upper` | logicals corresponding to the arguments `diag` and `upper` above, specifying how the object should be printed. | | `call` | optionally, the `[call](../../base/html/call)` used to create the object. | | `method` | optionally, the distance method used; resulting from `<dist>()`, the (`[match.arg](../../base/html/match.arg)()`ed) `method` argument. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979) *Multivariate Analysis.* Academic Press. Borg, I. and Groenen, P. (1997) *Modern Multidimensional Scaling. Theory and Applications.* Springer. ### See Also `[daisy](../../cluster/html/daisy)` in the [cluster](https://CRAN.R-project.org/package=cluster) package with more possibilities in the case of *mixed* (continuous / categorical) variables. `<hclust>`. ### Examples ``` require(graphics) x <- matrix(rnorm(100), nrow = 5) dist(x) dist(x, diag = TRUE) dist(x, upper = TRUE) m <- as.matrix(dist(x)) d <- as.dist(m) stopifnot(d == dist(x)) ## Use correlations between variables "as distance" dd <- as.dist((1 - cor(USJudgeRatings))/2) round(1000 * dd) # (prints more nicely) plot(hclust(dd)) # to see a dendrogram of clustered variables ## example of binary and canberra distances. 
x <- c(0, 0, 1, 1, 1, 1) y <- c(1, 0, 1, 1, 0, 1) dist(rbind(x, y), method = "binary") ## answer 0.4 = 2/5 dist(rbind(x, y), method = "canberra") ## answer 2 * (6/5) ## To find the names labels(eurodist) ## Examples involving "Inf" : ## 1) x[6] <- Inf (m2 <- rbind(x, y)) dist(m2, method = "binary") # warning, answer 0.5 = 2/4 ## These all give "Inf": stopifnot(Inf == dist(m2, method = "euclidean"), Inf == dist(m2, method = "maximum"), Inf == dist(m2, method = "manhattan")) ## "Inf" is same as very large number: x1 <- x; x1[6] <- 1e100 stopifnot(dist(cbind(x, y), method = "canberra") == print(dist(cbind(x1, y), method = "canberra"))) ## 2) y[6] <- Inf #-> 6-th pair is excluded dist(rbind(x, y), method = "binary" ) # warning; 0.5 dist(rbind(x, y), method = "canberra" ) # 3 dist(rbind(x, y), method = "maximum") # 1 dist(rbind(x, y), method = "manhattan") # 2.4 ```
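The vector-storage formula given under 'Value' can be checked numerically. The following small sketch (not part of the original examples) verifies that element `n*(i-1) - i*(i-1)/2 + j-i` of the `"dist"` vector equals the (i, j) entry of the full distance matrix.

```
## dissimilarity between rows i and j (i < j) sits at position
## n*(i-1) - i*(i-1)/2 + j-i of the "dist" vector
x <- matrix(rnorm(25), nrow = 5)
d <- dist(x)
n <- attr(d, "Size")
i <- 2; j <- 4  # any pair with i < j <= n
stopifnot(all.equal(d[n*(i-1) - i*(i-1)/2 + j-i],
                    as.matrix(d)[i, j]))
```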
r None `residuals` Extract Model Residuals ------------------------------------ ### Description `residuals` is a generic function which extracts model residuals from objects returned by modeling functions. The abbreviated form `resid` is an alias for `residuals`. It is intended to encourage users to access object components through an accessor function rather than by directly referencing an object slot. All object classes which are returned by model fitting functions should provide a `residuals` method. (Note that the method is for residuals and not resid.) Methods can make use of `[naresid](nafns)` methods to compensate for the omission of missing values. The default, `<nls>` and `<smooth.spline>` methods do. ### Usage ``` residuals(object, ...) resid(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which the extraction of model residuals is meaningful. | | `...` | other arguments. | ### Value Residuals extracted from the object `object`. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `[coefficients](coef)`, `<fitted.values>`, `<glm>`, `<lm>`. <influence.measures> for standardized (`[rstandard](influence.measures)`) and studentized (`[rstudent](influence.measures)`) residuals. r None `time` Sampling Times of Time Series ------------------------------------- ### Description `time` creates the vector of times at which a time series was sampled. `cycle` gives the positions in the cycle of each observation. `frequency` returns the number of samples per unit time and `deltat` the time interval between observations (see `<ts>`). ### Usage ``` time(x, ...) ## Default S3 method: time(x, offset = 0, ...) cycle(x, ...) frequency(x, ...) deltat(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | a univariate or multivariate time-series, or a vector or matrix. | | `offset` | can be used to indicate when sampling took place in the time unit. `0` (the default) indicates the start of the unit, `0.5` the middle and `1` the end of the interval. | | `...` | extra arguments for future methods. | ### Details These are all generic functions, which will use the `<tsp>` attribute of `x` if it exists. `time` and `cycle` have methods for class `<ts>` that coerce the result to that class. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<ts>`, `<start>`, `<tsp>`, `<window>`. `[date](../../base/html/date)` for clock time, `[system.time](../../base/html/system.time)` for CPU usage. ### Examples ``` require(graphics) cycle(presidents) # a simple series plot plot(as.vector(time(presidents)), as.vector(presidents), type = "l") ``` r None `convolve` Convolution of Sequences via FFT -------------------------------------------- ### Description Use the Fast Fourier Transform to compute the several kinds of convolutions of two sequences. ### Usage ``` convolve(x, y, conj = TRUE, type = c("circular", "open", "filter")) ``` ### Arguments | | | | --- | --- | | `x, y` | numeric sequences *of the same length* to be convolved. | | `conj` | logical; if `TRUE`, take the complex *conjugate* before back-transforming (default, and used for usual convolution). | | `type` | character; partially matched to `"circular"`, `"open"`, `"filter"`. For `"circular"`, the two sequences are treated as *circular*, i.e., periodic. 
For `"open"` and `"filter"`, the sequences are padded with `0`s (from left and right) first; `"filter"` returns the middle sub-vector of `"open"`, namely, the result of running a weighted mean of `x` with weights `y`. | ### Details The Fast Fourier Transform, `<fft>`, is used for efficiency. The input sequences `x` and `y` must have the same length if `circular` is true. Note that the usual definition of convolution of two sequences `x` and `y` is given by `convolve(x, rev(y), type = "o")`. ### Value If `r <- convolve(x, y, type = "open")` and `n <- length(x)`, `m <- length(y)`, then *r[k] = sum(i; x[k-m+i] \* y[i])* where the sum is over all valid indices *i*, for *k = 1, …, n+m-1*. If `type == "circular"`, *n = m* is required, and the above is true for *i , k = 1,…,n* when *x[j] := x[n+j]* for *j < 1*. ### References Brillinger, D. R. (1981) *Time Series: Data Analysis and Theory*, Second Edition. San Francisco: Holden-Day. ### See Also `<fft>`, `<nextn>`, and particularly `<filter>` (from the stats package) which may be more appropriate. ### Examples ``` require(graphics) x <- c(0,0,0,100,0,0,0) y <- c(0,0,1, 2 ,1,0,0)/4 zapsmall(convolve(x, y)) # *NOT* what you first thought. zapsmall(convolve(x, y[3:5], type = "f")) # rather x <- rnorm(50) y <- rnorm(50) # Circular convolution *has* this symmetry: all.equal(convolve(x, y, conj = FALSE), rev(convolve(rev(y),x))) n <- length(x <- -20:24) y <- (x-10)^2/1000 + rnorm(x)/8 Han <- function(y) # Hanning convolve(y, c(1,2,1)/4, type = "filter") plot(x, y, main = "Using convolve(.) for Hanning filters") lines(x[-c(1 , n) ], Han(y), col = "red") lines(x[-c(1:2, (n-1):n)], Han(Han(y)), lwd = 2, col = "dark blue") ``` r None `aggregate` Compute Summary Statistics of Data Subsets ------------------------------------------------------- ### Description Splits the data into subsets, computes summary statistics for each, and returns the result in a convenient form. ### Usage ``` aggregate(x, ...) ## Default S3 method: aggregate(x, ...) ## S3 method for class 'data.frame' aggregate(x, by, FUN, ..., simplify = TRUE, drop = TRUE) ## S3 method for class 'formula' aggregate(formula, data, FUN, ..., subset, na.action = na.omit) ## S3 method for class 'ts' aggregate(x, nfrequency = 1, FUN = sum, ndeltat = 1, ts.eps = getOption("ts.eps"), ...) ``` ### Arguments | | | | --- | --- | | `x` | an R object. | | `by` | a list of grouping elements, each as long as the variables in the data frame `x`. The elements are coerced to factors before use. | | `FUN` | a function to compute the summary statistics which can be applied to all data subsets. | | `simplify` | a logical indicating whether results should be simplified to a vector or matrix if possible. | | `drop` | a logical indicating whether to drop unused combinations of grouping values. The non-default case `drop=FALSE` has been amended for **R** 3.5.0 to drop unused combinations. | | `formula` | a <formula>, such as `y ~ x` or `cbind(y1, y2) ~ x1 + x2`, where the `y` variables are numeric data to be split into groups according to the grouping `x` variables (usually factors). | | `data` | a data frame (or list) from which the variables in formula should be taken. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA` values. The default is to ignore missing values in the given variables. | | `nfrequency` | new number of observations per unit of time; must be a divisor of the frequency of `x`. 
| | `ndeltat` | new fraction of the sampling period between successive observations; must be a divisor of the sampling interval of `x`. | | `ts.eps` | tolerance used to decide if `nfrequency` is a sub-multiple of the original frequency. | | `...` | further arguments passed to or used by methods. | ### Details `aggregate` is a generic function with methods for data frames and time series. The default method, `aggregate.default`, uses the time series method if `x` is a time series, and otherwise coerces `x` to a data frame and calls the data frame method. `aggregate.data.frame` is the data frame method. If `x` is not a data frame, it is coerced to one, which must have a non-zero number of rows. Then, each of the variables (columns) in `x` is split into subsets of cases (rows) of identical combinations of the components of `by`, and `FUN` is applied to each such subset with further arguments in `...` passed to it. The result is reformatted into a data frame containing the variables in `by` and `x`. The ones arising from `by` contain the unique combinations of grouping values used for determining the subsets, and the ones arising from `x` the corresponding summaries for the subset of the respective variables in `x`. If `simplify` is true, summaries are simplified to vectors or matrices if they have a common length of one or greater than one, respectively; otherwise, lists of summary results according to subsets are obtained. Rows with missing values in any of the `by` variables will be omitted from the result. (Note that versions of **R** prior to 2.11.0 required `FUN` to be a scalar function.) `aggregate.formula` is a standard formula interface to `aggregate.data.frame`. `aggregate.ts` is the time series method, and requires `FUN` to be a scalar function. If `x` is not a time series, it is coerced to one. Then, the variables in `x` are split into appropriate blocks of length `frequency(x) / nfrequency`, and `FUN` is applied to each such block, with further (named) arguments in `...` passed to it. The result returned is a time series with frequency `nfrequency` holding the aggregated values. Note that this makes most sense for a quarterly or yearly result when the original series covers a whole number of quarters or years: in particular aggregating a monthly series to quarters starting in February does not give a conventional quarterly series. `FUN` is passed to `[match.fun](../../base/html/match.fun)`, and hence it can be a function or a symbol or character string naming a function. ### Value For the time series method, a time series of class `"ts"` or class `c("mts", "ts")`. For the data frame method, a data frame with columns corresponding to the grouping variables in `by` followed by aggregated columns from `x`. If `by` has names, the non-empty names are used to label the columns in the results, with unnamed grouping variables being named `Group.i` for `by[[i]]`. ### Author(s) Kurt Hornik, with contributions by Arni Magnusson. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[apply](../../base/html/apply)`, `[lapply](../../base/html/lapply)`, `[tapply](../../base/html/tapply)`. ### Examples ``` ## Compute the averages for the variables in 'state.x77', grouped ## according to the region (Northeast, South, North Central, West) that ## each state belongs to. aggregate(state.x77, list(Region = state.region), mean) ## Compute the averages according to region and the occurrence of more ## than 130 days of frost.
aggregate(state.x77, list(Region = state.region, Cold = state.x77[,"Frost"] > 130), mean) ## (Note that no state in 'South' is THAT cold.) ## example with character variables and NAs testDF <- data.frame(v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9), v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99) ) by1 <- c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12) by2 <- c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA) aggregate(x = testDF, by = list(by1, by2), FUN = "mean") # and if you want to treat NAs as a group fby1 <- factor(by1, exclude = "") fby2 <- factor(by2, exclude = "") aggregate(x = testDF, by = list(fby1, fby2), FUN = "mean") ## Formulas, one ~ one, one ~ many, many ~ one, and many ~ many: aggregate(weight ~ feed, data = chickwts, mean) aggregate(breaks ~ wool + tension, data = warpbreaks, mean) aggregate(cbind(Ozone, Temp) ~ Month, data = airquality, mean) aggregate(cbind(ncases, ncontrols) ~ alcgp + tobgp, data = esoph, sum) ## Dot notation: aggregate(. ~ Species, data = iris, mean) aggregate(len ~ ., data = ToothGrowth, mean) ## Often followed by xtabs(): ag <- aggregate(len ~ ., data = ToothGrowth, mean) xtabs(len ~ ., data = ag) ## Compute the average annual approval ratings for American presidents. aggregate(presidents, nfrequency = 1, FUN = mean) ## Give the summer less weight. aggregate(presidents, nfrequency = 1, FUN = weighted.mean, w = c(1, 1, 0.5, 1)) ``` r None `mahalanobis` Mahalanobis Distance ----------------------------------- ### Description Returns the squared Mahalanobis distance of all rows in `x` and the vector *mu* = `center` with respect to *Sigma* = `cov`. This is (for vector `x`) defined as *D^2 = (x - μ)' Σ^-1 (x - μ)* ### Usage ``` mahalanobis(x, center, cov, inverted = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | vector or matrix of data with, say, *p* columns. | | `center` | mean vector of the distribution or second data vector of length *p* or recyclable to that length. If set to `[FALSE](../../base/html/logical)`, the centering step is skipped. | | `cov` | covariance matrix (*p x p*) of the distribution. | | `inverted` | logical. If `TRUE`, `cov` is supposed to contain the *inverse* of the covariance matrix. | | `...` | passed to `[solve](../../matrix/html/solve-methods)` for computing the inverse of the covariance matrix (if `inverted` is false). | ### See Also `[cov](cor)`, `[var](cor)` ### Examples ``` require(graphics) ma <- cbind(1:6, 1:3) (S <- var(ma)) mahalanobis(c(0, 0), 1:2, S) x <- matrix(rnorm(100*3), ncol = 3) stopifnot(mahalanobis(x, 0, diag(ncol(x))) == rowSums(x*x)) ##- Here, D^2 = usual squared Euclidean distances Sx <- cov(x) D2 <- mahalanobis(x, colMeans(x), Sx) plot(density(D2, bw = 0.5), main="Squared Mahalanobis distances, n=100, p=3") ; rug(D2) qqplot(qchisq(ppoints(100), df = 3), D2, main = expression("Q-Q plot of Mahalanobis" * ~D^2 * " vs. quantiles of" * ~ chi[3]^2)) abline(0, 1, col = 'gray') ``` r None `dendrogram` General Tree Structures ------------------------------------- ### Description Class `"dendrogram"` provides general functions for handling tree-like structures. It is intended as a replacement for similar functions in hierarchical clustering and classification/regression trees, such that all of these can use the same engine for plotting or cutting trees. ### Usage ``` as.dendrogram(object, ...) ## S3 method for class 'hclust' as.dendrogram(object, hang = -1, check = TRUE, ...) ## S3 method for class 'dendrogram' as.hclust(x, ...) 
## S3 method for class 'dendrogram' plot(x, type = c("rectangle", "triangle"), center = FALSE, edge.root = is.leaf(x) || !is.null(attr(x,"edgetext")), nodePar = NULL, edgePar = list(), leaflab = c("perpendicular", "textlike", "none"), dLeaf = NULL, xlab = "", ylab = "", xaxt = "n", yaxt = "s", horiz = FALSE, frame.plot = FALSE, xlim, ylim, ...) ## S3 method for class 'dendrogram' cut(x, h, ...) ## S3 method for class 'dendrogram' merge(x, y, ..., height, adjust = c("auto", "add.max", "none")) ## S3 method for class 'dendrogram' nobs(object, ...) ## S3 method for class 'dendrogram' print(x, digits, ...) ## S3 method for class 'dendrogram' rev(x) ## S3 method for class 'dendrogram' str(object, max.level = NA, digits.d = 3, give.attr = FALSE, wid = getOption("width"), nest.lev = 0, indent.str = "", last.str = getOption("str.dendrogram.last"), stem = "--", ...) is.leaf(object) ``` ### Arguments | | | | --- | --- | | `object` | any **R** object that can be made into one of class `"dendrogram"`. | | `x, y` | object(s) of class `"dendrogram"`. | | `hang` | numeric scalar indicating how the *height* of leaves should be computed from the heights of their parents; see `[plot.hclust](hclust)`. | | `check` | logical indicating if `object` should be checked for validity. This check is not necessary when `object` is known to be valid such as when it is the direct result of `hclust()`. The default is `check=TRUE`, e.g. for protecting against memory explosion with invalid inputs. | | `type` | type of plot. | | `center` | logical; if `TRUE`, nodes are plotted centered with respect to the leaves in the branch. Otherwise (default), plot them in the middle of all direct child nodes. | | `edge.root` | logical; if true, draw an edge to the root node. | | `nodePar` | a `list` of plotting parameters to use for the nodes (see `[points](../../graphics/html/points)`) or `NULL` by default which does not draw symbols at the nodes. The list may contain components named `pch`, `cex`, `col`, `xpd`, and/or `bg` each of which can have length two for specifying separate attributes for *inner* nodes and *leaves*. Note that the default of `pch` is `1:2`, so you may want to use `pch = NA` if you specify `nodePar`. | | `edgePar` | a `list` of plotting parameters to use for the edge `[segments](../../graphics/html/segments)` and labels (if there's an `edgetext`). The list may contain components named `col`, `lty` and `lwd` (for the segments), `p.col`, `p.lwd`, and `p.lty` (for the `[polygon](../../graphics/html/polygon)` around the text) and `t.col` for the text color. As with `nodePar`, each can have length two for differentiating leaves and inner nodes. | | `leaflab` | a string specifying how leaves are labeled. The default, `"perpendicular"`, writes text vertically; `"textlike"` writes text horizontally (in a rectangle), and `"none"` suppresses leaf labels. | | `dLeaf` | a number specifying the **d**istance in user coordinates between the tip of a leaf and its label. If `NULL` as per default, 3/4 of a letter width or height is used. | | `horiz` | logical indicating if the dendrogram should be drawn *horizontally* or not. | | `frame.plot` | logical indicating if a box around the plot should be drawn, see `[plot.default](../../graphics/html/plot.default)`. | | `h` | height at which the tree is cut. | | `height` | height at which the two dendrograms should be merged. If not specified (or `NULL`), the default is ten percent larger than the (larger of the) two component heights.
| | `adjust` | a string determining if the leaf values should be adjusted. The default, `"auto"`, checks if the (first) two dendrograms both start at `1`; if they do, `"add.max"` is chosen, which adds the maximum of the previous dendrogram leaf values to each leaf of the “next” dendrogram. Specifying `adjust` to another value skips the check and hence is a tad more efficient. | | `xlim, ylim` | optional x- and y-limits of the plot, passed to `[plot.default](../../graphics/html/plot.default)`. The defaults for these show the full dendrogram. | | `..., xlab, ylab, xaxt, yaxt` | graphical parameters, or arguments for other methods. | | `digits` | integer specifying the precision for printing, see `[print.default](../../base/html/print.default)`. | | `max.level, digits.d, give.attr, wid, nest.lev, indent.str` | arguments to `str`, see `[str.default](../../utils/html/str)()`. Note that `give.attr = FALSE` still shows `height` and `members` attributes for each node. | | `last.str, stem` | strings used for `str()` specifying how the last branch (at each level) should start and the *stem* to use for each dendrogram branch. In some environments, using `last.str = "'"` will provide much nicer looking output than the historical default ``last.str = "`"``. | ### Details The dendrogram is directly represented as a nested list where each component corresponds to a branch of the tree. Hence, the first branch of tree `z` is `z[[1]]`, the second branch of the corresponding subtree is `z[[1]][[2]]`, or shorter `z[[c(1,2)]]`, etc. Each node of the tree carries some information needed for efficient plotting or cutting as attributes, of which only `members`, `height` and `leaf` for leaves are compulsory: `members`: the total number of leaves in the branch. `height`: numeric non-negative height at which the node is plotted. `midpoint`: numeric horizontal distance of the node from the left border (the leftmost leaf) of the branch (unit 1 between all leaves); this is used for `plot(*, center = FALSE)`. `label`: character; the label of the node. `x.member`: for `cut()$upper`, the number of *former* members; more generally a substitute for the `members` component used for ‘horizontal’ (when `horiz = FALSE`, else ‘vertical’) alignment. `edgetext`: character; the label for the edge leading to the node. `nodePar`: a named list (of length-1 components) specifying node-specific attributes for `[points](../../graphics/html/points)` plotting, see the `nodePar` argument above. `edgePar`: a named list (of length-1 components) specifying attributes for `[segments](../../graphics/html/segments)` plotting of the edge leading to the node, and drawing of the `edgetext` if available, see the `edgePar` argument above. `leaf`: logical; if `TRUE`, the node is a leaf of the tree. `cut.dendrogram()` returns a list with components `$upper` and `$lower`: the first is a truncated version of the original tree, also of class `dendrogram`; the latter a list with the branches obtained from cutting the tree, each a `dendrogram`. There are `[[[](../../base/html/extract)`, `[print](../../base/html/print)`, and `[str](../../utils/html/str)` methods for `"dendrogram"` objects where the first one (extraction) ensures that selecting sub-branches keeps the class, i.e., returns a dendrogram even if only a leaf. On the other hand, `[[](../../base/html/extract)` (*single* bracket) extraction returns the underlying list structure.
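To make the nested-list representation and the `[[` vs. `[` distinction concrete, here is a minimal sketch (assuming the built-in `USArrests` data; the object name `dd` is purely illustrative):

```
dd <- as.dendrogram(hclust(dist(USArrests[1:5, ]), "ave"))
dd[[1]]                           # first branch: prints as a dendrogram
inherits(dd[[1]], "dendrogram")   # TRUE : '[[' keeps the class
inherits(dd[ 1 ], "dendrogram")   # FALSE: '[' exposes the plain list structure
attr(dd[[1]], "members")          # compulsory node attributes ...
attr(dd[[1]], "height")           # ... carried by every node
```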
Objects of class `"hclust"` can be converted to class `"dendrogram"` using method `as.dendrogram()`, and since R 2.13.0, there is also a `<as.hclust>()` method as an inverse. `rev.dendrogram` simply returns the dendrogram `x` with reversed nodes, see also `<reorder.dendrogram>`. The `[merge](../../base/html/merge)(x, y, ...)` method merges two or more dendrograms into a new one which has `x` and `y` (and optional further arguments) as branches. Note that before **R** 3.1.2, `adjust = "none"` was used implicitly, which is invalid when, e.g., the dendrograms are from `[as.dendrogram](dendrogram)(hclust(..))`. `<nobs>(object)` returns the total number of leaves (the `members` attribute, see above). `is.leaf(object)` returns logical indicating if `object` is a leaf (the most simple dendrogram). `plotNode()` and `plotNodeLimit()` are helper functions. ### Warning Some operations on dendrograms such as `merge()` make use of recursion. For deep trees it may be necessary to increase `[options](../../base/html/options)("expressions")`: if you do, you are likely to need to set the C stack size (`[Cstack\_info](../../base/html/cstack_info)()[["size"]]`) larger than the default where possible. ### Note `plot()`: When using `type = "triangle"`, `center = TRUE` often looks better. `str(d)`: If you really want to see the *internal* structure, use `str(unclass(d))` instead. ### See Also `<dendrapply>` for applying a function to *each* node. `<order.dendrogram>` and `<reorder.dendrogram>`; further, the `[labels](../../base/html/labels)` method. ### Examples ``` require(graphics); require(utils) hc <- hclust(dist(USArrests), "ave") (dend1 <- as.dendrogram(hc)) # "print()" method str(dend1) # "str()" method str(dend1, max.level = 2, last.str = "'") # only the first two sub-levels oo <- options(str.dendrogram.last = "\\") # yet another possibility str(dend1, max.level = 2) # only the first two sub-levels options(oo) # .. resetting them op <- par(mfrow = c(2,2), mar = c(5,2,1,4)) plot(dend1) ## "triangle" type and show inner nodes: plot(dend1, nodePar = list(pch = c(1,NA), cex = 0.8, lab.cex = 0.8), type = "t", center = TRUE) plot(dend1, edgePar = list(col = 1:2, lty = 2:3), dLeaf = 1, edge.root = TRUE) plot(dend1, nodePar = list(pch = 2:1, cex = .4*2:1, col = 2:3), horiz = TRUE) ## simple test for as.hclust() as the inverse of as.dendrogram(): stopifnot(identical(as.hclust(dend1)[1:4], hc[1:4])) dend2 <- cut(dend1, h = 70) ## leaves are wrong horizontally in R 4.0 and earlier: plot(dend2$upper) plot(dend2$upper, nodePar = list(pch = c(1,7), col = 2:1)) ## dend2$lower is *NOT* a dendrogram, but a list of .. : plot(dend2$lower[[3]], nodePar = list(col = 4), horiz = TRUE, type = "tr") ## "inner" and "leaf" edges in different type & color : plot(dend2$lower[[2]], nodePar = list(col = 1), # non empty list edgePar = list(lty = 1:2, col = 2:1), edge.root = TRUE) par(op) d3 <- dend2$lower[[2]][[2]][[1]] stopifnot(identical(d3, dend2$lower[[2]][[c(2,1)]])) str(d3, last.str = "'") ## to peek at the inner structure "if you must", use '[..]' indexing : str(d3[2][[1]]) ## or the full str(d3[]) ## merge() to join dendrograms: (d13 <- merge(dend2$lower[[1]], dend2$lower[[3]])) ## merge() all parts back (using default 'height' instead of original one): den.1 <- Reduce(merge, dend2$lower) ## or merge() all four parts at same height --> 4 branches (!) d. 
<- merge(dend2$lower[[1]], dend2$lower[[2]], dend2$lower[[3]], dend2$lower[[4]]) ## (with a warning) or the same using do.call : stopifnot(identical(d., do.call(merge, dend2$lower))) plot(d., main = "merge(d1, d2, d3, d4) |-> dendrogram with a 4-split") ## "Zoom" in to the first dendrogram : plot(dend1, xlim = c(1,20), ylim = c(1,50)) nP <- list(col = 3:2, cex = c(2.0, 0.75), pch = 21:22, bg = c("light blue", "pink"), lab.cex = 0.75, lab.col = "tomato") plot(d3, nodePar= nP, edgePar = list(col = "gray", lwd = 2), horiz = TRUE) addE <- function(n) { if(!is.leaf(n)) { attr(n, "edgePar") <- list(p.col = "plum") attr(n, "edgetext") <- paste(attr(n,"members"),"members") } n } d3e <- dendrapply(d3, addE) plot(d3e, nodePar = nP) plot(d3e, nodePar = nP, leaflab = "textlike") ```
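As a small follow-on sketch (reusing `dend1` from the examples above), the leaf count from `nobs()`, the root's `members` attribute, and an `is.leaf()`-based traversal should all agree:

```
nobs(dend1)                        # number of leaves (the 'members' attribute)
attr(dend1, "members")             # the same, read off the root node
nleaf <- 0
invisible(dendrapply(dend1, function(n) {
  if (is.leaf(n)) nleaf <<- nleaf + 1
  n
}))
stopifnot(nleaf == nobs(dend1))    # the same count, via is.leaf()
```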
r None `stat.anova` GLM Anova Statistics ---------------------------------- ### Description This is a utility function, used in `lm` and `glm` methods for `<anova>(..., test != NULL)` and should not be used by the average user. ### Usage ``` stat.anova(table, test = c("Rao","LRT", "Chisq", "F", "Cp"), scale, df.scale, n) ``` ### Arguments | | | | --- | --- | | `table` | numeric matrix as results from `<anova.glm>(..., test = NULL)`. | | `test` | a character string, partially matching one of `"Rao"`, `"LRT"`, `"Chisq"`, `"F"` or `"Cp"`. | | `scale` | a residual mean square or other scale estimate to be used as the denominator in an F test. | | `df.scale` | degrees of freedom corresponding to `scale`. | | `n` | number of observations. | ### Value A matrix which is the original `table`, augmented by a column of test statistics, depending on the `test` argument. ### References Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<anova.lm>`, `<anova.glm>`. ### Examples ``` ##-- Continued from '?glm': print(ag <- anova(glm.D93)) stat.anova(ag$table, test = "Cp", scale = sum(resid(glm.D93, "pearson")^2)/4, df.scale = 4, n = 9) ``` r None `family` Family Objects for Models ----------------------------------- ### Description Family objects provide a convenient way to specify the details of the models used by functions such as `<glm>`. See the documentation for `<glm>` for the details on how such model fitting takes place. ### Usage ``` family(object, ...) binomial(link = "logit") gaussian(link = "identity") Gamma(link = "inverse") inverse.gaussian(link = "1/mu^2") poisson(link = "log") quasi(link = "identity", variance = "constant") quasibinomial(link = "logit") quasipoisson(link = "log") ``` ### Arguments | | | | --- | --- | | `link` | a specification for the model link function. This can be a name/expression, a literal character string, a length-one character vector, or an object of class `"[link-glm](make.link)"` (such as generated by `<make.link>`) provided it is not specified *via* one of the standard names given next. The `gaussian` family accepts the links (as names) `identity`, `log` and `inverse`; the `binomial` family the links `logit`, `probit`, `cauchit`, (corresponding to logistic, normal and Cauchy CDFs respectively) `log` and `cloglog` (complementary log-log); the `Gamma` family the links `inverse`, `identity` and `log`; the `poisson` family the links `log`, `identity`, and `sqrt`; and the `inverse.gaussian` family the links `1/mu^2`, `inverse`, `identity` and `log`. The `quasi` family accepts the links `logit`, `probit`, `cloglog`, `identity`, `inverse`, `log`, `1/mu^2` and `sqrt`, and the function `<power>` can be used to create a power link function. | | `variance` | for all families other than `quasi`, the variance function is determined by the family. The `quasi` family will accept the literal character string (or unquoted as a name/expression) specifications `"constant"`, `"mu(1-mu)"`, `"mu"`, `"mu^2"` and `"mu^3"`, a length-one character vector taking one of those values, or a list containing components `varfun`, `validmu`, `dev.resids`, `initialize` and `name`. | | `object` | the function `family` accesses the `family` objects which are stored within objects created by modelling functions (e.g., `glm`). | | `...` | further arguments passed to methods. 
| ### Details `family` is a generic function with methods for classes `"glm"` and `"lm"` (the latter returning `gaussian()`). For the `binomial` and `quasibinomial` families the response can be specified in one of three ways: 1. As a factor: ‘success’ is interpreted as the factor not having the first level (and hence usually of having the second level). 2. As a numerical vector with values between `0` and `1`, interpreted as the proportion of successful cases (with the total number of cases given by the `weights`). 3. As a two-column integer matrix: the first column gives the number of successes and the second the number of failures. The `quasibinomial` and `quasipoisson` families differ from the `binomial` and `poisson` families only in that the dispersion parameter is not fixed at one, so they can model over-dispersion. For the binomial case see McCullagh and Nelder (1989, pp. 124–8). Although they show that there is (under some restrictions) a model with variance proportional to mean as in the quasi-binomial model, note that `glm` does not compute maximum-likelihood estimates in that model. The behaviour of S is closer to the quasi- variants. ### Value An object of class `"family"` (which has a concise print method). This is a list with elements | | | | --- | --- | | `family` | character: the family name. | | `link` | character: the link name. | | `linkfun` | function: the link. | | `linkinv` | function: the inverse of the link function. | | `variance` | function: the variance as a function of the mean. | | `dev.resids` | function giving the deviance for each observation as a function of `(y, mu, wt)`, used by the `[residuals](glm.summaries)` method when computing deviance residuals. | | `aic` | function giving the AIC value if appropriate (but `NA` for the quasi- families). More precisely, this function returns *-2 ll + 2 s*, where *ll* is the log-likelihood and *s* is the number of estimated scale parameters. Note that the penalty term for the location parameters (typically the “regression coefficients”) is added elsewhere, e.g., in `[glm.fit](glm)()`, or `[AIC](aic)()`, see the AIC example in `<glm>`. See `[logLik](loglik)` for the assumptions made about the dispersion parameter. | | `mu.eta` | function: derivative of the inverse-link function with respect to the linear predictor. If the inverse-link function is *mu = ginv(eta)* where *eta* is the value of the linear predictor, then this function returns *d(ginv(eta))/d(eta) = d(mu)/d(eta)*. | | `initialize` | expression. This needs to set up whatever data objects are needed for the family as well as `n` (needed for AIC in the binomial family) and `mustart` (see `<glm>`). | | `validmu` | logical function. Returns `TRUE` if a mean vector `mu` is within the domain of `variance`. | | `valideta` | logical function. Returns `TRUE` if a linear predictor `eta` is within the domain of `linkinv`. | | `simulate` | (optional) function `simulate(object, nsim)` to be called by the `"lm"` method of `<simulate>`. It will normally return a matrix with `nsim` columns and one row for each fitted value, but it can also return a list of length `nsim`. Clearly this will be missing for ‘quasi-’ families. | ### Note The `link` and `variance` arguments have rather awkward semantics for back-compatibility. The recommended way is to supply them as quoted character strings, but they can also be supplied unquoted (as names or expressions). 
Additionally, they can be supplied as a length-one character vector giving the name of one of the options, or as a list (for `link`, of class `"link-glm"`). The restrictions apply only to links given as names: when given as a character string all the links known to `<make.link>` are accepted. This is potentially ambiguous: supplying `link = logit` could mean the unquoted name of a link or the value of object `logit`. It is interpreted if possible as the name of an allowed link, then as an object. (You can force the interpretation to always be the value of an object via `logit[1]`.) ### Author(s) The design was inspired by S functions of the same names described in Hastie & Pregibon (1992) (except `quasibinomial` and `quasipoisson`). ### References McCullagh P. and Nelder, J. A. (1989) *Generalized Linear Models.* London: Chapman and Hall. Dobson, A. J. (1983) *An Introduction to Statistical Modelling.* London: Chapman and Hall. Cox, D. R. and Snell, E. J. (1981). *Applied Statistics; Principles and Examples.* London: Chapman and Hall. Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<glm>`, `<power>`, `<make.link>`. For binomial *coefficients*, `[choose](../../base/html/special)`; the binomial and negative binomial *distributions*, `[Binomial](family)`, and `[NegBinomial](negbinomial)`. ### Examples ``` require(utils) # for str nf <- gaussian() # Normal family nf str(nf) gf <- Gamma() gf str(gf) gf$linkinv gf$variance(-3:4) #- == (.)^2 ## Binomial with default 'logit' link: Check some properties visually: bi <- binomial() et <- seq(-10,10, by=1/8) plot(et, bi$mu.eta(et), type="l") ## show that mu.eta() is derivative of linkinv() : lines((et[-1]+et[-length(et)])/2, col=adjustcolor("red", 1/4), diff(bi$linkinv(et))/diff(et), type="l", lwd=4) ## which here is the logistic density: lines(et, dlogis(et), lwd=3, col=adjustcolor("blue", 1/4)) stopifnot(exprs = { all.equal(bi$ mu.eta(et), dlogis(et)) all.equal(bi$linkinv(et), plogis(et) -> m) all.equal(bi$linkfun(m ), qlogis(m)) # logit(.) == qlogis(.) ! }) ## Data from example(glm) : d.AD <- data.frame(treatment = gl(3,3), outcome = gl(3,1,9), counts = c(18,17,15, 20,10,20, 25,13,12)) glm.D93 <- glm(counts ~ outcome + treatment, d.AD, family = poisson()) ## Quasipoisson: compare with above / example(glm) : glm.qD93 <- glm(counts ~ outcome + treatment, d.AD, family = quasipoisson()) glm.qD93 anova (glm.qD93, test = "F") summary(glm.qD93) ## for Poisson results (same as from 'glm.D93' !) use anova (glm.qD93, dispersion = 1, test = "Chisq") summary(glm.qD93, dispersion = 1) ## Example of user-specified link, a logit model for p^days ## See Shaffer, T. 2004. Auk 121(2): 526-540. logexp <- function(days = 1) { linkfun <- function(mu) qlogis(mu^(1/days)) linkinv <- function(eta) plogis(eta)^days mu.eta <- function(eta) days * plogis(eta)^(days-1) * binomial()$mu.eta(eta) valideta <- function(eta) TRUE link <- paste0("logexp(", days, ")") structure(list(linkfun = linkfun, linkinv = linkinv, mu.eta = mu.eta, valideta = valideta, name = link), class = "link-glm") } (bil3 <- binomial(logexp(3))) ## in practice this would be used with a vector of 'days', in ## which case use an offset of 0 in the corresponding formula ## to get the null deviance right. 
## Binomial with identity link: often not a good idea, as both ## computationally and conceptually difficult: binomial(link = "identity") ## is exactly the same as binomial(link = make.link("identity")) ## tests of quasi x <- rnorm(100) y <- rpois(100, exp(1+x)) glm(y ~ x, family = quasi(variance = "mu", link = "log")) # which is the same as glm(y ~ x, family = poisson) glm(y ~ x, family = quasi(variance = "mu^2", link = "log")) ## Not run: glm(y ~ x, family = quasi(variance = "mu^3", link = "log")) # fails y <- rbinom(100, 1, plogis(x)) # need to set a starting value for the next fit glm(y ~ x, family = quasi(variance = "mu(1-mu)", link = "logit"), start = c(0,1)) ``` r None `logLik` Extract Log-Likelihood -------------------------------- ### Description This function is generic; method functions can be written to handle specific classes of objects. Classes which have methods for this function include: `"glm"`, `"lm"`, `"nls"` and `"Arima"`. Packages contain methods for other classes, such as `"fitdistr"`, `"negbin"` and `"polr"` in package [MASS](https://CRAN.R-project.org/package=MASS), `"multinom"` in package [nnet](https://CRAN.R-project.org/package=nnet) and `"gls"`, `"gnls"`, `"lme"` and others in package [nlme](https://CRAN.R-project.org/package=nlme). ### Usage ``` logLik(object, ...) ## S3 method for class 'lm' logLik(object, REML = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | any object from which a log-likelihood value, or a contribution to a log-likelihood value, can be extracted. | | `...` | some methods for this generic function require additional arguments. | | `REML` | an optional logical value. If `TRUE` the restricted log-likelihood is returned, else, if `FALSE`, the log-likelihood is returned. Defaults to `FALSE`. | ### Details `logLik` is most commonly used for a model fitted by maximum likelihood, and some uses, e.g. by `[AIC](aic)`, assume this. So care is needed where other fit criteria have been used, for example REML (the default for `"lme"`). For a `"glm"` fit the `<family>` does not have to specify how to calculate the log-likelihood, so this is based on using the family's `aic()` function to compute the AIC. For the `[gaussian](family)`, `[Gamma](family)` and `[inverse.gaussian](family)` families it is assumed that the dispersion of the GLM is estimated and has been counted as a parameter in the AIC value, and for all other families it is assumed that the dispersion is known. Note that this procedure does not give the maximized likelihood for `"glm"` fits from the Gamma and inverse gaussian families, as the estimate of dispersion used is not the MLE. For `"lm"` fits it is assumed that the scale has been estimated (by maximum likelihood or REML), and all the constants in the log-likelihood are included. That method is only applicable to single-response fits. ### Value Returns an object of class `logLik`. This is a number with at least one attribute, `"df"` (**d**egrees of **f**reedom), giving the number of (estimated) parameters in the model. There is a simple `print` method for `"logLik"` objects. There may be other attributes depending on the method used: see the appropriate documentation. One that is used by several methods is `"nobs"`, the number of observations used in estimation (after the restrictions if `REML = TRUE`). ### Author(s) José Pinheiro and Douglas Bates ### References For `logLik.lm`: Harville, D.A. (1974). Bayesian inference for variance components using only error contrasts. *Biometrika*, **61**, 383–385.
doi: [10.2307/2334370](https://doi.org/10.2307/2334370). ### See Also `[logLik.gls](../../nlme/html/loglik.lme)`, `[logLik.lme](../../nlme/html/loglik.lme)`, in package [nlme](https://CRAN.R-project.org/package=nlme), etc. `[AIC](aic)` ### Examples ``` x <- 1:5 lmx <- lm(x ~ 1) logLik(lmx) # using print.logLik() method utils::str(logLik(lmx)) ## lm method (fm1 <- lm(rating ~ ., data = attitude)) logLik(fm1) logLik(fm1, REML = TRUE) utils::data(Orthodont, package = "nlme") fm1 <- lm(distance ~ Sex * age, Orthodont) logLik(fm1) logLik(fm1, REML = TRUE) ``` r None `nextn` Find Highly Composite Numbers -------------------------------------- ### Description `nextn` returns the smallest integer, greater than or equal to `n`, which can be obtained as a product of powers of the values contained in `factors`. `nextn()` is intended to be used to find a suitable length to zero-pad the argument of `<fft>` so that the transform is computed quickly. The default value for `factors` ensures this. ### Usage ``` nextn(n, factors = c(2,3,5)) ``` ### Arguments | | | | --- | --- | | `n` | a vector of integer numbers (of type `"integer"` *or* `"double"`). | | `factors` | a vector of positive integer factors (at least *2* and preferably relatively prime, see the note). | ### Value a vector of the same `[length](../../base/html/length)` as `n`, of type `"integer"` when the values are small enough (determined before computing them) and `"double"` otherwise. ### Note If the factors in `factors` are *not* relatively prime, i.e., have themselves a common factor larger than one, the result may be wrong in the sense that it may not be the *smallest* integer. E.g., `nextn(91, c(2,6))` returns 128 instead of 96 as `nextn(91, c(2,3))` returns. When the resulting `N <- nextn(..)` is larger than `2^53`, a warning with the true 64-bit integer value is signalled, as integers above that range may not be representable in double precision. If you really need to deal with such large integers, it may be advisable to use package [gmp](https://CRAN.R-project.org/package=gmp). ### See Also `<convolve>`, `<fft>`. ### Examples ``` nextn(1001) # 1024 table(nextn(599:630)) n <- 1:100 ; plot(n, nextn(n) - n, type = "o", lwd=2, cex=1/2) ``` r None `predict.arima` Forecast from ARIMA fits ----------------------------------------- ### Description Forecast from models fitted by `<arima>`. ### Usage ``` ## S3 method for class 'Arima' predict(object, n.ahead = 1, newxreg = NULL, se.fit = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `object` | The result of an `arima` fit. | | `n.ahead` | The number of steps ahead for which prediction is required. | | `newxreg` | New values of `xreg` to be used for prediction. Must have at least `n.ahead` rows. | | `se.fit` | Logical: should standard errors of prediction be returned? | | `...` | arguments passed to or from other methods. | ### Details Finite-history prediction is used, via `[KalmanForecast](kalmanlike)`. This is only statistically efficient if the MA part of the fit is invertible, so `predict.Arima` will give a warning for non-invertible MA models. The standard errors of prediction exclude the uncertainty in the estimation of the ARMA model and the regression coefficients. According to Harvey (1993, pp. 58–9) the effect is small. ### Value A time series of predictions, or if `se.fit = TRUE`, a list with components `pred`, the predictions, and `se`, the estimated standard errors. Both components are time series. ### References Durbin, J. and Koopman, S. J. (2001).
*Time Series Analysis by State Space Methods*. Oxford University Press. Harvey, A. C. and McKenzie, C. R. (1982). Algorithm AS 182: An algorithm for finite sample prediction from ARIMA processes. *Applied Statistics*, **31**, 180–187. doi: [10.2307/2347987](https://doi.org/10.2307/2347987). Harvey, A. C. (1993). *Time Series Models*, 2nd Edition. Harvester Wheatsheaf. Sections 3.3 and 4.4. ### See Also `<arima>` ### Examples ``` od <- options(digits = 5) # avoid too much spurious accuracy predict(arima(lh, order = c(3,0,0)), n.ahead = 12) (fit <- arima(USAccDeaths, order = c(0,1,1), seasonal = list(order = c(0,1,1)))) predict(fit, n.ahead = 6) options(od) ``` r None `stats-package` The R Stats Package ------------------------------------ ### Description R statistical functions ### Details This package contains functions for statistical calculations and random number generation. For a complete list of functions, use `library(help = "stats")`. ### Author(s) R Core Team and contributors worldwide Maintainer: R Core Team [R-core@R-project.org](mailto:R-core@R-project.org) r None `pairwise.prop.test` Pairwise comparisons for proportions ---------------------------------------------------------- ### Description Calculate pairwise comparisons between pairs of proportions with correction for multiple testing. ### Usage ``` pairwise.prop.test(x, n, p.adjust.method = p.adjust.methods, ...) ``` ### Arguments | | | | --- | --- | | `x` | Vector of counts of successes or a matrix with 2 columns giving the counts of successes and failures, respectively. | | `n` | Vector of counts of trials; ignored if `x` is a matrix. | | `p.adjust.method` | Method for adjusting p values (see `<p.adjust>`). Can be abbreviated. | | `...` | Additional arguments to pass to `prop.test` | ### Value Object of class `"pairwise.htest"` ### See Also `<prop.test>`, `<p.adjust>` ### Examples ``` smokers <- c( 83, 90, 129, 70 ) patients <- c( 86, 93, 136, 82 ) pairwise.prop.test(smokers, patients) ``` r None `anova.lm` ANOVA for Linear Model Fits --------------------------------------- ### Description Compute an analysis of variance table for one or more linear model fits. ### Usage ``` ## S3 method for class 'lm' anova(object, ...) ## S3 method for class 'lmlist' anova(object, ..., scale = 0, test = "F") ``` ### Arguments | | | | --- | --- | | `object, ...` | objects of class `lm`, usually, a result of a call to `<lm>`. | | `test` | a character string specifying the test statistic to be used. Can be one of `"F"`, `"Chisq"` or `"Cp"`, with partial matching allowed, or `NULL` for no test. | | `scale` | numeric. An estimate of the noise variance *σ^2*. If zero this will be estimated from the largest model considered. | ### Details Specifying a single object gives a sequential analysis of variance table for that fit. That is, the reductions in the residual sum of squares as each term of the formula is added in turn are given as the rows of a table, plus the residual sum of squares. The table will contain F statistics (and P values) comparing the mean square for the row to the residual mean square. If more than one object is specified, the table has a row for the residual degrees of freedom and sum of squares for each model. For all but the first model, the change in degrees of freedom and sum of squares is also given. (This only makes statistical sense if the models are nested.) It is conventional to list the models from smallest to largest, but this is up to the user. Optionally the table can include test statistics.
Normally the F statistic is most appropriate, which compares the mean square for a row to the residual sum of squares for the largest model considered. If `scale` is specified chi-squared tests can be used. Mallows' *Cp* statistic is the residual sum of squares plus twice the estimate of *sigma^2* times the residual degrees of freedom. ### Value An object of class `"anova"` inheriting from class `"data.frame"`. ### Warning The comparison between two or more models will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values and **R**'s default of `na.action = na.omit` is used, and `anova.lmlist` will detect this with an error. ### References Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also The model fitting function `<lm>`, `<anova>`. `[drop1](add1)` for so-called ‘type II’ anova where each term is dropped one at a time respecting their hierarchy. ### Examples ``` ## sequential table fit <- lm(sr ~ ., data = LifeCycleSavings) anova(fit) ## same effect via separate models fit0 <- lm(sr ~ 1, data = LifeCycleSavings) fit1 <- update(fit0, . ~ . + pop15) fit2 <- update(fit1, . ~ . + pop75) fit3 <- update(fit2, . ~ . + dpi) fit4 <- update(fit3, . ~ . + ddpi) anova(fit0, fit1, fit2, fit3, fit4, test = "F") anova(fit4, fit2, fit0, test = "F") # unconventional order ```
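As a follow-on sketch to the Details above (reusing the fits from the examples; `sigma()` is the standard extractor for the residual standard deviation), supplying `scale` enables the chi-squared tests, and `test = "Cp"` gives Mallows' Cp:

```
## Chisq tests with the noise variance fixed at the full-model estimate:
anova(fit0, fit1, fit2, fit3, fit4, test = "Chisq", scale = sigma(fit4)^2)
## Mallows' Cp: with the default scale = 0, sigma^2 is estimated from
## the largest model considered (here fit4):
anova(fit0, fit1, fit2, fit3, fit4, test = "Cp")
```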
r None `toeplitz` Form Symmetric Toeplitz Matrix ------------------------------------------ ### Description Forms a symmetric Toeplitz matrix given its first row. ### Usage ``` toeplitz(x) ``` ### Arguments | | | | --- | --- | | `x` | the first row to form the Toeplitz matrix. | ### Value The Toeplitz matrix. ### Author(s) A. Trapletti ### Examples ``` x <- 1:5 toeplitz (x) ``` r None `dendrapply` Apply a Function to All Nodes of a Dendrogram ----------------------------------------------------------- ### Description Apply function `FUN` to each node of a `<dendrogram>` recursively. When `y <- dendrapply(x, fn)`, then `y` is a dendrogram of the same graph structure as `x` and for each node, `y.node[j] <- FUN( x.node[j], ...)` (where `y.node[j]` is an (invalid!) notation for the j-th node of y). ### Usage ``` dendrapply(X, FUN, ...) ``` ### Arguments | | | | --- | --- | | `X` | an object of class `"<dendrogram>"`. | | `FUN` | an **R** function to be applied to each dendrogram node, typically working on its `[attributes](../../base/html/attributes)` alone, returning an altered version of the same node. | | `...` | potential further arguments passed to `FUN`. | ### Value Usually a dendrogram of the same (graph) structure as `X`. For that, the function must be conceptually of the form `FUN <- function(X) { attributes(X) <- .....; X }`, i.e., returning the node with some attributes added or changed. ### Note The implementation is somewhat experimental and suggestions for enhancements (or nice examples of usage) are very welcome. The current implementation is *recursive* and inefficient for dendrograms with many non-leaves. See the ‘Warning’ in `<dendrogram>`. ### Author(s) Martin Maechler ### See Also `[as.dendrogram](dendrogram)`, `[lapply](../../base/html/lapply)` for applying a function to each component of a `list`, `[rapply](../../base/html/rapply)` for doing so to each non-list component of a nested list. ### Examples ``` require(graphics) ## a smallish simple dendrogram dhc <- as.dendrogram(hc <- hclust(dist(USArrests), "ave")) (dhc21 <- dhc[[2]][[1]]) ## too simple: dendrapply(dhc21, function(n) utils::str(attributes(n))) ## toy example to set colored leaf labels : local({ colLab <<- function(n) { if(is.leaf(n)) { a <- attributes(n) i <<- i+1 attr(n, "nodePar") <- c(a$nodePar, list(lab.col = mycols[i], lab.font = i%%3)) } n } mycols <- grDevices::rainbow(attr(dhc21,"members")) i <- 0 }) dL <- dendrapply(dhc21, colLab) op <- par(mfrow = 2:1) plot(dhc21) plot(dL) ## --> colored labels! par(op) ``` r None `lmfit` Fitter Functions for Linear Models ------------------------------------------- ### Description These are the basic computing engines called by `<lm>` used to fit linear models. These should usually *not* be used directly unless by experienced users. `.lm.fit()` is a bare-bones wrapper to the innermost QR-based C code, on which `[glm.fit](glm)` and `<lsfit>` are based as well, for even more experienced users. ### Usage ``` lm.fit (x, y, offset = NULL, method = "qr", tol = 1e-7, singular.ok = TRUE, ...) lm.wfit(x, y, w, offset = NULL, method = "qr", tol = 1e-7, singular.ok = TRUE, ...) .lm.fit(x, y, tol = 1e-7) ``` ### Arguments | | | | --- | --- | | `x` | design matrix of dimension `n * p`. | | `y` | vector of observations of length `n`, or a matrix with `n` rows. | | `w` | vector of weights (length `n`) to be used in the fitting process for the `wfit` functions. Weighted least squares is used with weights `w`, i.e., `sum(w * e^2)` is minimized. | | `offset` | (numeric of length `n`).
This can be used to specify an *a priori* known component to be included in the linear predictor during fitting. | | `method` | currently, only `method = "qr"` is supported. | | `tol` | tolerance for the `[qr](../../matrix/html/qr-methods)` decomposition. Default is 1e-7. | | `singular.ok` | logical. If `FALSE`, a singular model is an error. | | `...` | currently disregarded. | ### Value a `[list](../../base/html/list)` with components (for `lm.fit` and `lm.wfit`) | | | | --- | --- | | `coefficients` | `p` vector | | `residuals` | `n` vector or matrix | | `fitted.values` | `n` vector or matrix | | `effects` | `n` vector of orthogonal single-df effects. The first `rank` of them correspond to non-aliased coefficients, and are named accordingly. | | `weights` | `n` vector — *only* for the `*wfit*` functions. | | `rank` | integer, giving the rank | | `df.residual` | degrees of freedom of residuals | | `qr` | the QR decomposition, see `[qr](../../matrix/html/qr-methods)`. | Fits without any columns or non-zero weights do not have the `effects` and `qr` components. `.lm.fit()` returns a subset of the above, the `qr` part unwrapped, plus a logical component `pivoted` indicating if the underlying QR algorithm did pivot. ### See Also `<lm>` which you should use for linear least squares regression, unless you know better. ### Examples ``` require(utils) set.seed(129) n <- 7 ; p <- 2 X <- matrix(rnorm(n * p), n, p) # no intercept! y <- rnorm(n) w <- rnorm(n)^2 str(lmw <- lm.wfit(x = X, y = y, w = w)) str(lm. <- lm.fit (x = X, y = y)) if(require("microbenchmark")) { mb <- microbenchmark(lm(y~X), lm.fit(X,y), .lm.fit(X,y)) print(mb) boxplot(mb, notch=TRUE) } ``` r None `cov.wt` Weighted Covariance Matrices -------------------------------------- ### Description Returns a list containing estimates of the weighted covariance matrix and the mean of the data, and optionally of the (weighted) correlation matrix. ### Usage ``` cov.wt(x, wt = rep(1/nrow(x), nrow(x)), cor = FALSE, center = TRUE, method = c("unbiased", "ML")) ``` ### Arguments | | | | --- | --- | | `x` | a matrix or data frame. As usual, rows are observations and columns are variables. | | `wt` | a non-negative and non-zero vector of weights for each observation. Its length must equal the number of rows of `x`. | | `cor` | a logical indicating whether the estimated weighted correlation matrix will be returned as well. | | `center` | either a logical or a numeric vector specifying the centers to be used when computing covariances. If `TRUE`, the (weighted) mean of each variable is used, if `FALSE`, zero is used. If `center` is numeric, its length must equal the number of columns of `x`. | | `method` | string specifying how the result is scaled, see ‘Details’ below. Can be abbreviated. | ### Details By default (`method = "unbiased"`), the covariance matrix is divided by one minus the sum of squares of the weights, so if the weights are the default (*1/n*) the conventional unbiased estimate of the covariance matrix with divisor *(n - 1)* is obtained. This differs from the behaviour in S-PLUS which corresponds to `method = "ML"` and does not divide. ### Value A list containing the following named components: | | | | --- | --- | | `cov` | the estimated (weighted) covariance matrix | | `center` | an estimate for the center (mean) of the data. | | `n.obs` | the number of observations (rows) in `x`. | | `wt` | the weights used in the estimation. Only returned if given as an argument. | | `cor` | the estimated correlation matrix.
Only returned if `cor` is `TRUE`. | ### See Also `[cov](cor)` and `[var](cor)`. ### Examples ``` (xy <- cbind(x = 1:10, y = c(1:3, 8:5, 8:10))) w1 <- c(0,0,0,1,1,1,1,1,0,0) cov.wt(xy, wt = w1) # i.e. method = "unbiased" cov.wt(xy, wt = w1, method = "ML", cor = TRUE) ``` r None `sd` Standard Deviation ------------------------ ### Description This function computes the standard deviation of the values in `x`. If `na.rm` is `TRUE` then missing values are removed before computation proceeds. ### Usage ``` sd(x, na.rm = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector or an **R** object but not a `[factor](../../base/html/factor)` coercible to numeric by `as.double(x)`. | | `na.rm` | logical. Should missing values be removed? | ### Details Like `[var](cor)` this uses denominator *n - 1*. The standard deviation of a length-one or zero-length vector is `NA`. ### See Also `[var](cor)` for its square, and `<mad>`, the most robust alternative. ### Examples ``` sd(1:2) ^ 2 ``` r None `spec.taper` Taper a Time Series by a Cosine Bell -------------------------------------------------- ### Description Apply a cosine-bell taper to a time series. ### Usage ``` spec.taper(x, p = 0.1) ``` ### Arguments | | | | --- | --- | | `x` | A univariate or multivariate time series | | `p` | The proportion to be tapered at each end of the series, either a scalar (giving the proportion for all series) or a vector of the length of the number of series (giving the proportion for each series). | ### Details The cosine-bell taper is applied to the first and last `p[i]` observations of time series `x[, i]`. ### Value A new time series object. ### See Also `<spec.pgram>`, `<cpgram>` r None `mood.test` Mood Two-Sample Test of Scale ------------------------------------------ ### Description Performs Mood's two-sample test for a difference in scale parameters. ### Usage ``` mood.test(x, ...) ## Default S3 method: mood.test(x, y, alternative = c("two.sided", "less", "greater"), ...) ## S3 method for class 'formula' mood.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x, y` | numeric vectors of data values. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"` (default), `"greater"` or `"less"` all of which can be abbreviated. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` is a numeric variable giving the data values and `rhs` a factor with two levels giving the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details The underlying model is that the two samples are drawn from *f(x-l)* and *f((x-l)/s)/s*, respectively, where *l* is a common location parameter and *s* is a scale parameter. The null hypothesis is *s = 1*. There are more useful tests for this problem. In the case of ties, the formulation of Mielke (1967) is employed. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic. | | `p.value` | the p-value of the test. 
| | `alternative` | a character string describing the alternative hypothesis. You can specify just the initial letter. | | `method` | the character string `"Mood two-sample test of scale"`. | | `data.name` | a character string giving the names of the data. | ### References William J. Conover (1971), *Practical nonparametric statistics*. New York: John Wiley & Sons. Pages 234f. Paul W. Mielke, Jr. (1967). Note on some squared rank tests with existing ties. *Technometrics*, **9**/2, 312–314. doi: [10.2307/1266427](https://doi.org/10.2307/1266427). ### See Also `<fligner.test>` for a rank-based (nonparametric) k-sample test for homogeneity of variances; `<ansari.test>` for another rank-based two-sample test for a difference in scale parameters; `<var.test>` and `<bartlett.test>` for parametric tests for the homogeneity in variance. ### Examples ``` ## Same data as for the Ansari-Bradley test: ## Serum iron determination using Hyland control sera ramsay <- c(111, 107, 100, 99, 102, 106, 109, 108, 104, 99, 101, 96, 97, 102, 107, 113, 116, 113, 110, 98) jung.parekh <- c(107, 108, 106, 98, 105, 103, 110, 105, 104, 100, 96, 108, 103, 104, 114, 114, 113, 108, 106, 99) mood.test(ramsay, jung.parekh) ## Compare this to ansari.test(ramsay, jung.parekh) ``` r None `plot.spec` Plotting Spectral Densities ---------------------------------------- ### Description Plotting method for objects of class `"spec"`. For multivariate time series it plots the marginal spectra of the series or pairs plots of the coherency and phase of the cross-spectra. ### Usage ``` ## S3 method for class 'spec' plot(x, add = FALSE, ci = 0.95, log = c("yes", "dB", "no"), xlab = "frequency", ylab = NULL, type = "l", ci.col = "blue", ci.lty = 3, main = NULL, sub = NULL, plot.type = c("marginal", "coherency", "phase"), ...) plot.spec.phase(x, ci = 0.95, xlab = "frequency", ylab = "phase", ylim = c(-pi, pi), type = "l", main = NULL, ci.col = "blue", ci.lty = 3, ...) plot.spec.coherency(x, ci = 0.95, xlab = "frequency", ylab = "squared coherency", ylim = c(0, 1), type = "l", main = NULL, ci.col = "blue", ci.lty = 3, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `"spec"`. | | `add` | logical. If `TRUE`, add to already existing plot. Only valid for `plot.type = "marginal"`. | | `ci` | coverage probability for confidence interval. Plotting of the confidence bar/limits is omitted unless `ci` is strictly positive. | | `log` | If `"dB"`, plot on log10 (decibel) scale (as S-PLUS), otherwise use conventional log scale or linear scale. Logical values are also accepted. The default is `"yes"` unless `options(ts.S.compat = TRUE)` has been set, when it is `"dB"`. Only valid for `plot.type = "marginal"`. | | `xlab` | the x label of the plot. | | `ylab` | the y label of the plot. If missing a suitable label will be constructed. | | `type` | the type of plot to be drawn, defaults to lines. | | `ci.col` | colour for plotting confidence bar or confidence intervals for coherency and phase. | | `ci.lty` | line type for confidence intervals for coherency and phase. | | `main` | overall title for the plot. If missing, a suitable title is constructed. | | `sub` | a sub title for the plot. Only used for `plot.type = "marginal"`. If missing, a description of the smoothing is used. | | `plot.type` | For multivariate time series, the type of plot required. Only the first character is needed. | | `ylim, ...` | Graphical parameters. 
| ### See Also `<spectrum>` r None `isoreg` Isotonic / Monotone Regression ---------------------------------------- ### Description Compute the isotonic (monotonely increasing nonparametric) least squares regression which is piecewise constant. ### Usage ``` isoreg(x, y = NULL) ``` ### Arguments | | | | --- | --- | | `x, y` | coordinate vectors of the regression points. Alternatively a single plotting structure can be specified: see `[xy.coords](../../grdevices/html/xy.coords)`. | ### Details The algorithm determines the convex minorant *m(x)* of the *cumulative* data (i.e., `cumsum(y)`) which is piecewise linear and the result is *m'(x)*, a step function with level changes at locations where the convex *m(x)* touches the cumulative data polygon and changes slope. `[as.stepfun](stepfun)()` returns a `<stepfun>` object which can be more parsimonious. ### Value `isoreg()` returns an object of class `isoreg` which is basically a list with components | | | | --- | --- | | `x` | original (constructed) abscissa values `x`. | | `y` | corresponding y values. | | `yf` | fitted values corresponding to *ordered* x values. | | `yc` | cumulative y values corresponding to *ordered* x values. | | `iKnots` | integer vector giving indices where the fitted curve jumps, i.e., where the convex minorant has kinks. | | `isOrd` | logical indicating if original x values were ordered increasingly already. | | `ord` | `if(!isOrd)`: integer permutation `[order](../../base/html/order)(x)` of *original* `x`. | | `call` | the `[call](../../base/html/call)` to `isoreg()` used. | ### Note The code should be improved to accept *weights* additionally and solve the corresponding weighted least squares problem. ‘Patches are welcome!’ ### References Barlow, R. E., Bartholomew, D. J., Bremner, J. M., and Brunk, H. D. (1972) *Statistical inference under order restrictions*; Wiley, London. Robertson, T., Wright, F. T. and Dykstra, R. L. (1988) *Order Restricted Statistical Inference*; Wiley, New York. ### See Also the plotting method `<plot.isoreg>` with more examples; `[isoMDS](../../mass/html/isomds)()` from the [MASS](https://CRAN.R-project.org/package=MASS) package internally uses isotonic regression. ### Examples ``` require(graphics) (ir <- isoreg(c(1,0,4,3,3,5,4,2,0))) plot(ir, plot.type = "row") (ir3 <- isoreg(y3 <- c(1,0,4,3,3,5,4,2, 3))) # last "3", not "0" (fi3 <- as.stepfun(ir3)) (ir4 <- isoreg(1:10, y4 <- c(5, 9, 1:2, 5:8, 3, 8))) cat(sprintf("R^2 = %.2f\n", 1 - sum(residuals(ir4)^2) / ((10-1)*var(y4)))) ## If you are interested in the knots alone : with(ir4, cbind(iKnots, yf[iKnots])) ## Example of unordered x[] with ties: x <- sample((0:30)/8) y <- exp(x) x. <- round(x) # ties! plot(m <- isoreg(x., y)) stopifnot(all.equal(with(m, yf[iKnots]), as.vector(tapply(y, x., mean)))) ``` r None `contrast` (Possibly Sparse) Contrast Matrices ----------------------------------------------- ### Description Return a matrix of contrasts. ### Usage ``` contr.helmert(n, contrasts = TRUE, sparse = FALSE) contr.poly(n, scores = 1:n, contrasts = TRUE, sparse = FALSE) contr.sum(n, contrasts = TRUE, sparse = FALSE) contr.treatment(n, base = 1, contrasts = TRUE, sparse = FALSE) contr.SAS(n, contrasts = TRUE, sparse = FALSE) ``` ### Arguments | | | | --- | --- | | `n` | a vector of levels for a factor, or the number of levels. | | `contrasts` | a logical indicating whether contrasts should be computed. 
| | `sparse` | logical indicating if the result should be sparse (of class `[dgCMatrix](../../matrix/html/dgcmatrix-class)`), using package [Matrix](https://CRAN.R-project.org/package=Matrix). | | `scores` | the set of values over which orthogonal polynomials are to be computed. | | `base` | an integer specifying which group is considered the baseline group. Ignored if `contrasts` is `FALSE`. | ### Details These functions are used for creating contrast matrices for use in fitting analysis of variance and regression models. The columns of the resulting matrices contain contrasts which can be used for coding a factor with `n` levels. The returned value contains the computed contrasts. If the argument `contrasts` is `FALSE` a square indicator matrix (the dummy coding) is returned **except** for `contr.poly` (which includes the 0-degree, i.e. constant, polynomial when `contrasts = FALSE`). `contr.helmert` returns Helmert contrasts, which contrast the second level with the first, the third with the average of the first two, and so on. `contr.poly` returns contrasts based on orthogonal polynomials. `contr.sum` uses ‘sum to zero contrasts’. `contr.treatment` contrasts each level with the baseline level (specified by `base`): the baseline level is omitted. Note that this does not produce ‘contrasts’ as defined in the standard theory for linear models as they are not orthogonal to the intercept. `contr.SAS` is a wrapper for `contr.treatment` that sets the base level to be the last level of the factor. The coefficients produced when using these contrasts should be equivalent to those produced by many (but not all) SAS procedures. For consistency, `sparse` is an argument to all these contrast functions; however, `sparse = TRUE` for `contr.poly` is typically pointless and is rarely useful for `contr.helmert`. ### Value A matrix with `n` rows and `k` columns, with `k=n-1` if `contrasts` is `TRUE` and `k=n` if `contrasts` is `FALSE`. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical models.* Chapter 2 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<contrasts>`, `[C](zc)`, and `<aov>`, `<glm>`, `<lm>`. ### Examples ``` (cH <- contr.helmert(4)) apply(cH, 2, sum) # column sums are 0 crossprod(cH) # diagonal -- columns are orthogonal contr.helmert(4, contrasts = FALSE) # just the 4 x 4 identity matrix (cT <- contr.treatment(5)) all(crossprod(cT) == diag(4)) # TRUE: even orthonormal (cT. <- contr.SAS(5)) all(crossprod(cT.) == diag(4)) # TRUE zapsmall(cP <- contr.poly(3)) # Linear and Quadratic zapsmall(crossprod(cP), digits = 15) # orthonormal up to fuzz ```
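The way these contrast matrices become design-matrix columns can be seen directly. A minimal sketch (the factor `f` below is purely illustrative):

```
f <- factor(c("a", "b", "c", "b"))
contrasts(f) <- contr.treatment(3) # level "a" is the baseline
model.matrix(~ f) # two columns contrasting "b" and "c" with "a"
contrasts(f) <- contr.sum(3) # sum-to-zero coding instead
model.matrix(~ f) # last level is coded -1 in every contrast column
```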
r None `spectrum` Spectral Density Estimation --------------------------------------- ### Description The `spectrum` function estimates the spectral density of a time series. ### Usage ``` spectrum(x, ..., method = c("pgram", "ar")) ``` ### Arguments | | | | --- | --- | | `x` | A univariate or multivariate time series. | | `method` | String specifying the method used to estimate the spectral density. Allowed methods are `"pgram"` (the default) and `"ar"`. Can be abbreviated. | | `...` | Further arguments to specific spec methods or `plot.spec`. | ### Details `spectrum` is a wrapper function which calls the methods `<spec.pgram>` and `<spec.ar>`. The spectrum here is defined with scaling `1/[frequency](time)(x)`, following S-PLUS. This makes the spectral density a density over the range `(-frequency(x)/2, +frequency(x)/2]`, whereas a more common scaling is *2pi* and range *(-0.5, 0.5]* (e.g., Bloomfield) or 1 and range *(-pi, pi]*. If available, a confidence interval will be plotted by `plot.spec`: this is asymmetric, and the width of the centre mark indicates the equivalent bandwidth. ### Value An object of class `"spec"`, which is a list containing at least the following components: | | | | --- | --- | | `freq` | vector of frequencies at which the spectral density is estimated. (Possibly approximate Fourier frequencies.) The units are the reciprocal of cycles per unit time (and not per observation spacing): see ‘Details’ below. | | `spec` | Vector (for univariate series) or matrix (for multivariate series) of estimates of the spectral density at frequencies corresponding to `freq`. | | `coh` | `NULL` for univariate series. For multivariate time series, a matrix containing the *squared* coherency between different series. Column *i + (j - 1) \* (j - 2)/2* of `coh` contains the squared coherency between columns *i* and *j* of `x`, where *i < j*. | | `phase` | `NULL` for univariate series. For multivariate time series a matrix containing the cross-spectrum phase between different series. The format is the same as `coh`. | | `series` | The name of the time series. | | `snames` | For multivariate input, the names of the component series. | | `method` | The method used to calculate the spectrum. | The result is returned invisibly if `plot` is true. ### Note The default plot for objects of class `"spec"` is quite complex, including an error bar and default title, subtitle and axis labels. The defaults can all be overridden by supplying the appropriate graphical parameters. ### Author(s) Martyn Plummer, B.D. Ripley ### References Bloomfield, P. (1976) *Fourier Analysis of Time Series: An Introduction.* Wiley. Brockwell, P. J. and Davis, R. A. (1991) *Time Series: Theory and Methods.* Second edition. Springer. Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S-PLUS.* Fourth edition. Springer. (Especially pages 392–7.) ### See Also `<spec.ar>`, `<spec.pgram>`; `<plot.spec>`. 
### Examples ``` require(graphics) ## Examples from Venables & Ripley ## spec.pgram par(mfrow = c(2,2)) spectrum(lh) spectrum(lh, spans = 3) spectrum(lh, spans = c(3,3)) spectrum(lh, spans = c(3,5)) spectrum(ldeaths) spectrum(ldeaths, spans = c(3,3)) spectrum(ldeaths, spans = c(3,5)) spectrum(ldeaths, spans = c(5,7)) spectrum(ldeaths, spans = c(5,7), log = "dB", ci = 0.8) # for multivariate examples see the help for spec.pgram ## spec.ar spectrum(lh, method = "ar") spectrum(ldeaths, method = "ar") ``` r None `ts.plot` Plot Multiple Time Series ------------------------------------ ### Description Plot several time series on a common plot. Unlike `<plot.ts>`, the series can have different time bases, but they should have the same frequency. ### Usage ``` ts.plot(..., gpars = list()) ``` ### Arguments | | | | --- | --- | | `...` | one or more univariate or multivariate time series. | | `gpars` | list of named graphics parameters to be passed to the plotting functions. Those commonly used can be supplied directly in `...`. | ### Value None. ### Note Although this can be used for a single time series, `plot` is easier to use and is preferred. ### See Also `<plot.ts>` ### Examples ``` require(graphics) ts.plot(ldeaths, mdeaths, fdeaths, gpars=list(xlab="year", ylab="deaths", lty=c(1:3))) ``` r None `getInitial` Get Initial Parameter Estimates --------------------------------------------- ### Description This function evaluates initial parameter estimates for a nonlinear regression model. If `data` is a parameterized data frame or `pframe` object, its `parameters` attribute is returned. Otherwise the object is examined to see if it contains a call to a `selfStart` object whose `initial` attribute can be evaluated. ### Usage ``` getInitial(object, data, ...) ``` ### Arguments | | | | --- | --- | | `object` | a formula or a `[selfStart](selfstart)` model that defines a nonlinear regression model | | `data` | a data frame in which the expressions in the formula or arguments to the `selfStart` model can be evaluated | | `...` | optional additional arguments | ### Value A named numeric vector or list of starting estimates for the parameters. The construction of many `selfStart` models is such that these "starting" estimates are, in fact, the converged parameter estimates. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)`, `[selfStart.default](selfstart)`, `[selfStart.formula](selfstart)`. Further, `[nlsList](../../nlme/html/nlslist)` from [nlme](https://CRAN.R-project.org/package=nlme). ### Examples ``` PurTrt <- Puromycin[ Puromycin$state == "treated", ] print(getInitial( rate ~ SSmicmen( conc, Vm, K ), PurTrt ), digits = 3) ``` r None `varimax` Rotation Methods for Factor Analysis ----------------------------------------------- ### Description These functions ‘rotate’ loading matrices in factor analysis. ### Usage ``` varimax(x, normalize = TRUE, eps = 1e-5) promax(x, m = 4) ``` ### Arguments | | | | --- | --- | | `x` | A loadings matrix, with *p* rows and *k < p* columns | | `m` | The power used to form the target for `promax`. Values of 2 to 4 are recommended. | | `normalize` | logical. Should Kaiser normalization be performed? If so the rows of `x` are re-scaled to unit length before rotation, and scaled back afterwards. | | `eps` | The tolerance for stopping: the relative change in the sum of singular values. | ### Details These seek a ‘rotation’ of the factors `x %*% T` that aims to clarify the structure of the loadings matrix. 
The matrix `T` is a rotation (possibly with reflection) for `varimax`, but a general linear transformation for `promax`, with the variance of the factors being preserved. ### Value A list with components | | | | --- | --- | | `loadings` | The ‘rotated’ loadings matrix, `x %*% rotmat`, of class `"loadings"`. | | `rotmat` | The ‘rotation’ matrix. | ### References Hendrickson, A. E. and White, P. O. (1964). Promax: a quick method for rotation to orthogonal oblique structure. *British Journal of Statistical Psychology*, **17**, 65–70. doi: [10.1111/j.2044-8317.1964.tb00244.x](https://doi.org/10.1111/j.2044-8317.1964.tb00244.x). Horst, P. (1965). *Factor Analysis of Data Matrices*. Holt, Rinehart and Winston. Chapter 10. Kaiser, H. F. (1958). The varimax criterion for analytic rotation in factor analysis. *Psychometrika*, **23**, 187–200. doi: [10.1007/BF02289233](https://doi.org/10.1007/BF02289233). Lawley, D. N. and Maxwell, A. E. (1971). *Factor Analysis as a Statistical Method*, second edition. Butterworths. ### See Also `<factanal>`, `[Harman74.cor](../../datasets/html/harman74.cor)`. ### Examples ``` ## varimax with normalize = TRUE is the default fa <- factanal( ~., 2, data = swiss) varimax(loadings(fa), normalize = FALSE) promax(loadings(fa)) ``` r None `zC` Sets Contrasts for a Factor --------------------------------- ### Description Sets the `"contrasts"` attribute for the factor. ### Usage ``` C(object, contr, how.many, ...) ``` ### Arguments | | | | --- | --- | | `object` | a factor or ordered factor | | `contr` | which contrasts to use. Can be a matrix with one row for each level of the factor or a suitable function like `contr.poly` or a character string giving the name of the function | | `how.many` | the number of contrasts to set, by default one less than `nlevels(object)`. | | `...` | additional arguments for the function `contr`. | ### Details For compatibility with S, `contr` can be `treatment`, `helmert`, `sum` or `poly` (without quotes) as shorthand for `contr.treatment` and so on. ### Value The factor `object` with the `"contrasts"` attribute set. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical models.* Chapter 2 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<contrasts>`, `[contr.sum](contrast)`, etc. ### Examples ``` ## reset contrasts to defaults options(contrasts = c("contr.treatment", "contr.poly")) tens <- with(warpbreaks, C(tension, poly, 1)) attributes(tens) ## tension SHOULD be an ordered factor, but as it is not we can use aov(breaks ~ wool + tens + tension, data = warpbreaks) ## show the use of ... The default contrast is contr.treatment here summary(lm(breaks ~ wool + C(tension, base = 2), data = warpbreaks)) # following on from help(esoph) model3 <- glm(cbind(ncases, ncontrols) ~ agegp + C(tobgp, , 1) + C(alcgp, , 1), data = esoph, family = binomial()) summary(model3) ``` r None `add1` Add or Drop All Possible Single Terms to a Model -------------------------------------------------------- ### Description Compute all the single terms in the `scope` argument that can be added to or dropped from the model, fit those models and compute a table of the changes in fit. ### Usage ``` add1(object, scope, ...) ## Default S3 method: add1(object, scope, scale = 0, test = c("none", "Chisq"), k = 2, trace = FALSE, ...) ## S3 method for class 'lm' add1(object, scope, scale = 0, test = c("none", "Chisq", "F"), x = NULL, k = 2, ...) 
## S3 method for class 'glm' add1(object, scope, scale = 0, test = c("none", "Rao", "LRT", "Chisq", "F"), x = NULL, k = 2, ...) drop1(object, scope, ...) ## Default S3 method: drop1(object, scope, scale = 0, test = c("none", "Chisq"), k = 2, trace = FALSE, ...) ## S3 method for class 'lm' drop1(object, scope, scale = 0, all.cols = TRUE, test = c("none", "Chisq", "F"), k = 2, ...) ## S3 method for class 'glm' drop1(object, scope, scale = 0, test = c("none", "Rao", "LRT", "Chisq", "F"), k = 2, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fitted model object. | | `scope` | a formula giving the terms to be considered for adding or dropping. | | `scale` | an estimate of the residual mean square to be used in computing *Cp*. Ignored if `0` or `NULL`. | | `test` | should the results include a test statistic relative to the original model? The F test is only appropriate for `<lm>` and `<aov>` models or perhaps for `<glm>` fits with estimated dispersion. The *Chisq* test can be an exact test (`lm` models with known scale) or a likelihood-ratio test or a test of the reduction in scaled deviance depending on the method. For `<glm>` fits, you can also choose `"LRT"` and `"Rao"` for likelihood ratio tests and Rao's efficient score test. The former is synonymous with `"Chisq"` (although both have an asymptotic chi-square distribution). Values can be abbreviated. | | `k` | the penalty constant in AIC / *Cp*. | | `trace` | if `TRUE`, print out progress reports. | | `x` | a model matrix containing columns for the fitted model and all terms in the upper scope. Useful if `add1` is to be called repeatedly. **Warning:** no checks are done on its validity. | | `all.cols` | (Provided for compatibility with S.) Logical to specify whether all columns of the design matrix should be used. If `FALSE` then non-estimable columns are dropped, but the result is not usually statistically meaningful. | | `...` | further arguments passed to or from other methods. | ### Details For `drop1` methods, a missing `scope` is taken to be all terms in the model. The hierarchy is respected when considering terms to be added or dropped: all main effects contained in a second-order interaction must remain, and so on. In a `scope` formula `.` means ‘what is already there’. The methods for `<lm>` and `<glm>` are more efficient in that they do not recompute the model matrix and call the `fit` methods directly. The default output table gives AIC, defined as minus twice log likelihood plus *2p* where *p* is the rank of the model (the number of effective parameters). This is only defined up to an additive constant (like log-likelihoods). For linear Gaussian models with fixed scale, the constant is chosen to give Mallows' *Cp*, *RSS/scale + 2p - n*. Where *Cp* is used, the column is labelled as `Cp` rather than `AIC`. The F tests for the `"glm"` methods are based on analysis of deviance tests, so if the dispersion is estimated it is based on the residual deviance, unlike the F tests of `<anova.glm>`. ### Value An object of class `"anova"` summarizing the differences in fit between the models. ### Warning The model fitting must apply the models to the same dataset. Most methods will attempt to use a subset of the data with no missing values for any of the variables if `na.action = na.omit`, but this may give biased results. Only use these functions with data containing missing values with great care. 
The default methods make calls to the function `<nobs>` to check that the number of observations involved in the fitting process remained unchanged. ### Note These are not fully equivalent to the functions in S. There is no `keep` argument, and the methods used are not quite so computationally efficient. Their authors' definitions of Mallows' *Cp* and Akaike's AIC are used, not those of the authors of the models chapter of S. ### Author(s) The design was inspired by the S functions of the same names described in Chambers (1992). ### References Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<step>`, `<aov>`, `<lm>`, `[extractAIC](extractaic)`, `<anova>` ### Examples ``` require(graphics); require(utils) ## following example(swiss) lm1 <- lm(Fertility ~ ., data = swiss) add1(lm1, ~ I(Education^2) + .^2) drop1(lm1, test = "F") # So called 'type II' anova ## following example(glm) drop1(glm.D93, test = "Chisq") drop1(glm.D93, test = "F") add1(glm.D93, scope = ~outcome*treatment, test = "Rao") ## Pearson Chi-square ``` r None `Fdist` The F Distribution --------------------------- ### Description Density, distribution function, quantile function and random generation for the F distribution with `df1` and `df2` degrees of freedom (and optional non-centrality parameter `ncp`). ### Usage ``` df(x, df1, df2, ncp, log = FALSE) pf(q, df1, df2, ncp, lower.tail = TRUE, log.p = FALSE) qf(p, df1, df2, ncp, lower.tail = TRUE, log.p = FALSE) rf(n, df1, df2, ncp) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `df1, df2` | degrees of freedom. `Inf` is allowed. | | `ncp` | non-centrality parameter. If omitted the central F is assumed. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The F distribution with `df1 =` *n1* and `df2 =` *n2* degrees of freedom has density *f(x) = Γ((n1 + n2)/2) / (Γ(n1/2) Γ(n2/2)) (n1/n2)^(n1/2) x^(n1/2 - 1) (1 + (n1/n2) x)^-(n1 + n2)/2* for *x > 0*. It is the distribution of the ratio of the mean squares of *n1* and *n2* independent standard normals, and hence of the ratio of two independent chi-squared variates each divided by its degrees of freedom. Since the ratio of a normal and the root mean-square of *m* independent normals has a Student's *t\_m* distribution, the square of a *t\_m* variate has an F distribution on 1 and *m* degrees of freedom. The non-central F distribution is again the ratio of mean squares of independent normals of unit variance, but those in the numerator are allowed to have non-zero means and `ncp` is the sum of squares of the means. See [Chisquare](chisquare) for further details on non-central distributions. ### Value `df` gives the density, `pf` gives the distribution function, `qf` gives the quantile function, and `rf` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rf`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. 
### Note Supplying `ncp = 0` uses the algorithm for the non-central distribution, which is not the same algorithm used if `ncp` is omitted. This is to give consistent behaviour in extreme cases with values of `ncp` very near zero. The code for non-zero `ncp` is principally intended to be used for moderate values of `ncp`: it will not be highly accurate, especially in the tails, for large values. ### Source For the central case of `df`, computed *via* a binomial probability, code contributed by Catherine Loader (see `[dbinom](family)`); for the non-central case computed *via* `[dbeta](beta)`, code contributed by Peter Ruckdeschel. For `pf`, *via* `[pbeta](beta)` (or for large `df2`, *via* `[pchisq](chisquare)`). For `qf`, *via* `[qchisq](chisquare)` for large `df2`, else *via* `[qbeta](beta)`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 2, chapters 27 and 30. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions, including `[dchisq](chisquare)` for chi-squared and `[dt](tdist)` for Student's t distributions. ### Examples ``` ## Equivalence of pt(.,nu) with pf(.^2, 1,nu): x <- seq(0.001, 5, length.out = 100) nu <- 4 stopifnot(all.equal(2*pt(x,nu) - 1, pf(x^2, 1,nu)), ## upper tails: all.equal(2*pt(x, nu, lower.tail=FALSE), pf(x^2, 1,nu, lower.tail=FALSE))) ## the density of the square of a t_m is 2*dt(x, m)/(2*x) # check this is the same as the density of F_{1,m} all.equal(df(x^2, 1, 5), dt(x, 5)/x) ## Identity: qf(2*p - 1, 1, df) == qt(p, df)^2 for p >= 1/2 p <- seq(1/2, .99, length.out = 50); df <- 10 rel.err <- function(x, y) ifelse(x == y, 0, abs(x-y)/mean(abs(c(x,y)))) quantile(rel.err(qf(2*p - 1, df1 = 1, df2 = df), qt(p, df)^2), .90) # ~= 7e-9 ``` r None `loess` Local Polynomial Regression Fitting -------------------------------------------- ### Description Fit a polynomial surface determined by one or more numerical predictors, using local fitting. ### Usage ``` loess(formula, data, weights, subset, na.action, model = FALSE, span = 0.75, enp.target, degree = 2, parametric = FALSE, drop.square = FALSE, normalize = TRUE, family = c("gaussian", "symmetric"), method = c("loess", "model.frame"), control = loess.control(...), ...) ``` ### Arguments | | | | --- | --- | | `formula` | a <formula> specifying the numeric response and one to four numeric predictors (best specified via an interaction, but can also be specified additively). Will be coerced to a formula if necessary. | | `data` | an optional data frame, list or environment (or object coercible by `[as.data.frame](../../base/html/as.data.frame)` to a data frame) containing the variables in the model. If not found in `data`, the variables are taken from `environment(formula)`, typically the environment from which `loess` is called. | | `weights` | optional weights for each case. | | `subset` | an optional specification of a subset of the data to be used. | | `na.action` | the action to be taken with missing values in the response or predictors. The default is given by `getOption("na.action")`. | | `model` | should the model frame be returned? | | `span` | the parameter *α* which controls the degree of smoothing. | | `enp.target` | an alternative way to specify `span`, as the approximate equivalent number of parameters to be used. | | `degree` | the degree of the polynomials to be used, normally 1 or 2. 
(Degree 0 is also allowed, but see the ‘Note’.) | | `parametric` | should any terms be fitted globally rather than locally? Terms can be specified by name, number or as a logical vector of the same length as the number of predictors. | | `drop.square` | for fits with more than one predictor and `degree = 2`, should the quadratic term be dropped for particular predictors? Terms are specified in the same way as for `parametric`. | | `normalize` | should the predictors be normalized to a common scale if there is more than one? The normalization used is to set the 10% trimmed standard deviation to one. Set to false for spatial coordinate predictors and others known to be on a common scale. | | `family` | if `"gaussian"` fitting is by least-squares, and if `"symmetric"` a re-descending M estimator is used with Tukey's biweight function. Can be abbreviated. | | `method` | fit the model or just extract the model frame. Can be abbreviated. | | `control` | control parameters: see `<loess.control>`. | | `...` | control parameters can also be supplied directly (*if* `control` is not specified). | ### Details Fitting is done locally. That is, for the fit at point *x*, the fit is made using points in a neighbourhood of *x*, weighted by their distance from *x* (with differences in ‘parametric’ variables being ignored when computing the distance). The size of the neighbourhood is controlled by *α* (set by `span` or `enp.target`). For *α < 1*, the neighbourhood includes proportion *α* of the points, and these have tricubic weighting (proportional to *(1 - (dist/maxdist)^3)^3*). For *α > 1*, all points are used, with the ‘maximum distance’ assumed to be *α^(1/p)* times the actual maximum distance for *p* explanatory variables. For the default family, fitting is by (weighted) least squares. For `family="symmetric"` a few iterations of an M-estimation procedure with Tukey's biweight are used. Be aware that as the initial value is the least-squares fit, this need not be a very resistant fit. It can be important to tune the control list to achieve acceptable speed. See `<loess.control>` for details. ### Value An object of class `"loess"`. ### Note As this is based on `cloess`, it is similar to but not identical to the `loess` function of S. In particular, conditioning is not implemented. The memory usage of this implementation of `loess` is roughly quadratic in the number of points, with 1000 points taking about 10Mb. `degree = 0`, local constant fitting, is allowed in this implementation but not documented in the reference. It seems very little tested, so use with caution. ### Author(s) B. D. Ripley, based on the `cloess` package of Cleveland, Grosse and Shyu. ### Source The 1998 version of `cloess` package of Cleveland, Grosse and Shyu. A later version is available as `dloess` at <https://www.netlib.org/a/>. ### References W. S. Cleveland, E. Grosse and W. M. Shyu (1992) Local regression models. Chapter 8 of *Statistical Models in S* eds J.M. Chambers and T.J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<loess.control>`, `<predict.loess>`. `<lowess>`, the ancestor of `loess` (with different defaults!). ### Examples ``` cars.lo <- loess(dist ~ speed, cars) predict(cars.lo, data.frame(speed = seq(5, 30, 1)), se = TRUE) # to allow extrapolation cars.lo2 <- loess(dist ~ speed, cars, control = loess.control(surface = "direct")) predict(cars.lo2, data.frame(speed = seq(5, 30, 1)), se = TRUE) ```
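The neighbourhood weighting described above can be sketched in a few lines of R. This is only an illustration of the tricube weights for one fit point (reusing the `cars` data from the example above), not the internal fitting code:

```
tricube <- function(d, maxd) pmax(0, 1 - (d/maxd)^3)^3
x0 <- 15 # point at which the local fit is made
d <- abs(cars$speed - x0) # distances from x0
q <- floor(0.75 * nrow(cars)) # span * n points form the neighbourhood
w <- tricube(d, sort(d)[q]) # zero weight outside the neighbourhood
summary(w)
```

Points nearest `x0` get weight close to 1, and the weights fall smoothly to zero at the edge of the neighbourhood.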
r None `TukeyHSD` Compute Tukey Honest Significant Differences -------------------------------------------------------- ### Description Create a set of confidence intervals on the differences between the means of the levels of a factor with the specified family-wise probability of coverage. The intervals are based on the Studentized range statistic, Tukey's ‘Honest Significant Difference’ method. ### Usage ``` TukeyHSD(x, which, ordered = FALSE, conf.level = 0.95, ...) ``` ### Arguments | | | | --- | --- | | `x` | A fitted model object, usually an `<aov>` fit. | | `which` | A character vector listing terms in the fitted model for which the intervals should be calculated. Defaults to all the terms. | | `ordered` | A logical value indicating if the levels of the factor should be ordered according to increasing average in the sample before taking differences. If `ordered` is true then the calculated differences in the means will all be positive. The significant differences will be those for which the `lwr` end point is positive. | | `conf.level` | A numeric value between zero and one giving the family-wise confidence level to use. | | `...` | Optional additional arguments. None are used at present. | ### Details This is a generic function: the description here applies to the method for fits of class `"aov"`. When comparing the means for the levels of a factor in an analysis of variance, a simple comparison using t-tests will inflate the probability of declaring a significant difference when it is not in fact present. This is because the intervals are calculated with a given coverage probability for each interval but the interpretation of the coverage is usually with respect to the entire family of intervals. John Tukey introduced intervals based on the range of the sample means rather than the individual differences. The intervals returned by this function are based on this Studentized range statistic. The intervals constructed in this way would only apply exactly to balanced designs where there are the same number of observations made at each level of the factor. This function incorporates an adjustment for sample size that produces sensible intervals for mildly unbalanced designs. If `which` specifies non-factor terms these will be dropped with a warning: if no terms are left this is an error. ### Value A list of class `c("multicomp", "TukeyHSD")`, with one component for each term requested in `which`. Each component is a matrix with columns `diff` giving the difference in the observed means, `lwr` giving the lower end point of the interval, `upr` giving the upper end point and `p adj` giving the p-value after adjustment for the multiple comparisons. There are `print` and `plot` methods for class `"TukeyHSD"`. The `plot` method does not accept `xlab`, `ylab` or `main` arguments and creates its own values for each plot. ### Author(s) Douglas Bates ### References Miller, R. G. (1981) *Simultaneous Statistical Inference*. Springer. Yandell, B. S. (1997) *Practical Data Analysis for Designed Experiments*. Chapman & Hall. ### See Also `<aov>`, `[qtukey](tukey)`, `<model.tables>`, `[glht](../../multcomp/html/glht)` in package [multcomp](https://CRAN.R-project.org/package=multcomp). 
### Examples ``` require(graphics) summary(fm1 <- aov(breaks ~ wool + tension, data = warpbreaks)) TukeyHSD(fm1, "tension", ordered = TRUE) plot(TukeyHSD(fm1, "tension")) ``` r None `weights` Extract Model Weights -------------------------------- ### Description `weights` is a generic function which extracts fitting weights from objects returned by modeling functions. Methods can make use of `[napredict](nafns)` methods to compensate for the omission of missing values. The default method does so. ### Usage ``` weights(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which the extraction of model weights is meaningful. | | `...` | other arguments passed to methods. | ### Value Weights extracted from the object `object`: the default method looks for component `"weights"` and if not `NULL` calls `[napredict](nafns)` on it. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `[weights.glm](glm)` r None `summary.aov` Summarize an Analysis of Variance Model ------------------------------------------------------ ### Description Summarize an analysis of variance model. ### Usage ``` ## S3 method for class 'aov' summary(object, intercept = FALSE, split, expand.split = TRUE, keep.zero.df = TRUE, ...) ## S3 method for class 'aovlist' summary(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | An object of class `"aov"` or `"aovlist"`. | | `intercept` | logical: should intercept terms be included? | | `split` | an optional named list, with names corresponding to terms in the model. Each component is itself a list with integer components giving contrasts whose contributions are to be summed. | | `expand.split` | logical: should the split apply also to interactions involving the factor? | | `keep.zero.df` | logical: should terms with no degrees of freedom be included? | | `...` | Arguments to be passed to or from other methods, for `summary.aovlist` including those for `summary.aov`. | ### Value An object of class `c("summary.aov", "listof")` or `"summary.aovlist"` respectively. For fits with a single stratum the result will be a list of ANOVA tables, one for each response (even if there is only one response): the tables are of class `"anova"` inheriting from class `"data.frame"`. They have columns `"Df"`, `"Sum Sq"`, `"Mean Sq"`, as well as `"F value"` and `"Pr(>F)"` if there are non-zero residual degrees of freedom. There is a row for each term in the model, plus one for `"Residuals"` if there are any. For multistratum fits the return value is a list of such summaries, one for each stratum. ### Note The use of `expand.split = TRUE` is little tested: it is always possible to set it to `FALSE` and specify exactly all the splits required. ### See Also `<aov>`, `[summary](../../base/html/summary)`, `<model.tables>`, `[TukeyHSD](tukeyhsd)` ### Examples ``` ## For a simple example see example(aov) # Cochran and Cox (1957, p.164) # 3x3 factorial with ordered factors, each is average of 12. CC <- data.frame( y = c(449, 413, 326, 409, 358, 291, 341, 278, 312)/12, P = ordered(gl(3, 3)), N = ordered(gl(3, 1, 9)) ) CC.aov <- aov(y ~ N * P, data = CC , weights = rep(12, 9)) summary(CC.aov) # Split both main effects into linear and quadratic parts. 
summary(CC.aov, split = list(N = list(L = 1, Q = 2), P = list(L = 1, Q = 2))) # Split only the interaction summary(CC.aov, split = list("N:P" = list(L.L = 1, Q = 2:4))) # split on just one var summary(CC.aov, split = list(P = list(lin = 1, quad = 2))) summary(CC.aov, split = list(P = list(lin = 1, quad = 2)), expand.split = FALSE) ``` r None `Weibull` The Weibull Distribution ----------------------------------- ### Description Density, distribution function, quantile function and random generation for the Weibull distribution with parameters `shape` and `scale`. ### Usage ``` dweibull(x, shape, scale = 1, log = FALSE) pweibull(q, shape, scale = 1, lower.tail = TRUE, log.p = FALSE) qweibull(p, shape, scale = 1, lower.tail = TRUE, log.p = FALSE) rweibull(n, shape, scale = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `shape, scale` | shape and scale parameters, the latter defaulting to 1. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The Weibull distribution with `shape` parameter *a* and `scale` parameter *b* has density given by *f(x) = (a/b) (x/b)^(a-1) exp(- (x/b)^a)* for *x > 0*. The cumulative distribution function is *F(x) = 1 - exp(- (x/b)^a)* on *x > 0*, the mean is *E(X) = b Γ(1 + 1/a)*, and the variance is *Var(X) = b^2 \* (Γ(1 + 2/a) - (Γ(1 + 1/a))^2)*. ### Value `dweibull` gives the density, `pweibull` gives the distribution function, `qweibull` gives the quantile function, and `rweibull` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rweibull`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note The cumulative hazard *H(t) = - log(1 - F(t))* is ``` -pweibull(t, a, b, lower = FALSE, log = TRUE) ``` which is just *H(t) = (t/b)^a*. ### Source `[dpq]weibull` are calculated directly from the definitions. `rweibull` uses inversion. ### References Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 1, chapter 21. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions, including the [Exponential](exponential) which is a special case of the Weibull distribution. ### Examples ``` x <- c(0, rlnorm(50)) all.equal(dweibull(x, shape = 1), dexp(x)) all.equal(pweibull(x, shape = 1, scale = pi), pexp(x, rate = 1/pi)) ## Cumulative hazard H(): all.equal(pweibull(x, 2.5, pi, lower.tail = FALSE, log.p = TRUE), -(x/pi)^2.5, tolerance = 1e-15) all.equal(qweibull(x/11, shape = 1, scale = pi), qexp(x/11, rate = 1/pi)) ``` r None `pairwise.t.test` Pairwise t tests ----------------------------------- ### Description Calculate pairwise comparisons between group levels with corrections for multiple testing. ### Usage ``` pairwise.t.test(x, g, p.adjust.method = p.adjust.methods, pool.sd = !paired, paired = FALSE, alternative = c("two.sided", "less", "greater"), ...) ``` ### Arguments | | | | --- | --- | | `x` | response vector. | | `g` | grouping vector or factor. | | `p.adjust.method` | Method for adjusting p values (see `<p.adjust>`). 
| | `pool.sd` | switch to allow/disallow the use of a pooled SD | | `paired` | a logical indicating whether you want paired t-tests. | | `alternative` | a character string specifying the alternative hypothesis, must be one of `"two.sided"` (default), `"greater"` or `"less"`. Can be abbreviated. | | `...` | additional arguments to pass to `t.test`. | ### Details The `pool.sd` switch calculates a common SD for all groups and uses that for all comparisons (this can be useful if some groups are small). This method does not actually call `t.test`, so extra arguments are ignored. Pooling does not generalize to paired tests so `pool.sd` and `paired` cannot both be `TRUE`. Only the lower triangle of the matrix of possible comparisons is being calculated, so setting `alternative` to anything other than `"two.sided"` requires that the levels of `g` are ordered sensibly. ### Value Object of class `"pairwise.htest"` ### See Also `<t.test>`, `<p.adjust>` ### Examples ``` attach(airquality) Month <- factor(Month, labels = month.abb[5:9]) pairwise.t.test(Ozone, Month) pairwise.t.test(Ozone, Month, p.adjust.method = "bonf") pairwise.t.test(Ozone, Month, pool.sd = FALSE) detach() ``` r None `AIC` Akaike's An Information Criterion ---------------------------------------- ### Description Generic function calculating Akaike's ‘An Information Criterion’ for one or several fitted model objects for which a log-likelihood value can be obtained, according to the formula *-2\*log-likelihood + k\*npar*, where *npar* represents the number of parameters in the fitted model, and *k = 2* for the usual AIC, or *k = log(n)* (*n* being the number of observations) for the so-called BIC or SBC (Schwarz's Bayesian criterion). ### Usage ``` AIC(object, ..., k = 2) BIC(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fitted model object for which there exists a `logLik` method to extract the corresponding log-likelihood, or an object inheriting from class `logLik`. | | `...` | optionally more fitted model objects. | | `k` | numeric, the *penalty* per parameter to be used; the default `k = 2` is the classical AIC. | ### Details When comparing models fitted by maximum likelihood to the same data, the smaller the AIC or BIC, the better the fit. The theory of AIC requires that the log-likelihood has been maximized: whereas AIC can be computed for models not fitted by maximum likelihood, their AIC values should not be compared. Examples of models not ‘fitted to the same data’ are where the response is transformed (accelerated-life models are fitted to log-times) and where contingency tables have been used to summarize data. These are generic functions (with S4 generics defined in package stats4): however methods should be defined for the log-likelihood function `[logLik](loglik)` rather than these functions: the action of their default methods is to call `logLik` on all the supplied objects and assemble the results. Note that in several common cases `[logLik](loglik)` does not return the value at the MLE: see its help page. The log-likelihood and hence the AIC/BIC is only defined up to an additive constant. Different constants have conventionally been used for different purposes and so `[extractAIC](extractaic)` and `AIC` may give different values (and do for models of class `"lm"`: see the help for `[extractAIC](extractaic)`). 
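For example, the different constants can be seen side by side (a minimal sketch, reusing the `swiss` fit from the Examples below):

```
lm1 <- lm(Fertility ~ ., data = swiss)
AIC(lm1) # -2*logLik + 2*df; df counts the error variance as a parameter
extractAIC(lm1) # edf and an AIC computed with a different additive constant
```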
Particular care is needed when comparing fits of different classes (with, for example, a comparison of a Poisson and gamma GLM being meaningless since one has a discrete response, the other continuous). `BIC` is defined as `AIC(object, ..., k = log(nobs(object)))`. This needs the number of observations to be known: the default method looks first for a `"nobs"` attribute on the return value from the `[logLik](loglik)` method, then tries the `<nobs>` generic, and if neither succeed returns BIC as `NA`. ### Value If just one object is provided, a numeric value with the corresponding AIC (or BIC, or ..., depending on `k`). If multiple objects are provided, a `data.frame` with rows corresponding to the objects and columns representing the number of parameters in the model (`df`) and the AIC or BIC. ### Author(s) Originally by José Pinheiro and Douglas Bates, more recent revisions by R-core. ### References Sakamoto, Y., Ishiguro, M., and Kitagawa G. (1986). *Akaike Information Criterion Statistics*. D. Reidel Publishing Company. ### See Also `[extractAIC](extractaic)`, `[logLik](loglik)`, `<nobs>`. ### Examples ``` lm1 <- lm(Fertility ~ . , data = swiss) AIC(lm1) stopifnot(all.equal(AIC(lm1), AIC(logLik(lm1)))) BIC(lm1) lm2 <- update(lm1, . ~ . -Examination) AIC(lm1, lm2) BIC(lm1, lm2) ``` r None `pp.test` Phillips-Perron Test for Unit Roots ---------------------------------------------- ### Description Computes the Phillips-Perron test for the null hypothesis that `x` has a unit root against a stationary alternative. ### Usage ``` PP.test(x, lshort = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector or univariate time series. | | `lshort` | a logical indicating whether the short or long version of the truncation lag parameter is used. | ### Details The general regression equation which incorporates a constant and a linear trend is used, and the corrected t-statistic for the hypothesis that a first-order autoregressive coefficient equals one is computed. To estimate `sigma^2` the Newey-West estimator is used. If `lshort` is `TRUE`, then the truncation lag parameter is set to `trunc(4*(n/100)^0.25)`, otherwise `trunc(12*(n/100)^0.25)` is used. The p-values are interpolated from Table 4.2, page 103 of Banerjee *et al* (1993). Missing values are not handled. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic. | | `parameter` | the truncation lag parameter. | | `p.value` | the p-value of the test. | | `method` | a character string indicating what type of test was performed. | | `data.name` | a character string giving the name of the data. | ### Author(s) A. Trapletti ### References A. Banerjee, J. J. Dolado, J. W. Galbraith, and D. F. Hendry (1993). *Cointegration, Error Correction, and the Econometric Analysis of Non-Stationary Data*. Oxford University Press, Oxford. P. Perron (1988). Trends and random walks in macroeconomic time series. *Journal of Economic Dynamics and Control*, **12**, 297–332. doi: [10.1016/0165-1889(88)90043-7](https://doi.org/10.1016/0165-1889(88)90043-7). ### Examples ``` x <- rnorm(1000) PP.test(x) y <- cumsum(x) # has unit root PP.test(y) ``` r None `terms.formula` Construct a terms Object from a Formula -------------------------------------------------------- ### Description This function takes a formula and some optional arguments and constructs a terms object. The terms object can then be used to construct a `<model.matrix>`. 
### Usage ``` ## S3 method for class 'formula' terms(x, specials = NULL, abb = NULL, data = NULL, neg.out = TRUE, keep.order = FALSE, simplify = FALSE, ..., allowDotAsName = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a formula. | | `specials` | which functions in the formula should be marked as special in the `terms` object? A character vector or `NULL`. | | `abb` | Not implemented in **R**. | | `data` | a data frame from which the meaning of the special symbol `.` can be inferred. It is unused if there is no `.` in the formula. | | `neg.out` | Not implemented in **R**. | | `keep.order` | a logical value indicating whether the terms should keep their positions. If `FALSE` the terms are reordered so that main effects come first, followed by the interactions, all second-order, all third-order and so on. Effects of a given order are kept in the order specified. | | `simplify` | should the formula be expanded and simplified, the pre-1.7.0 behaviour? | | `...` | further arguments passed to or from other methods. | | `allowDotAsName` | normally `.` in a formula refers to the remaining variables contained in `data`. Exceptionally, `.` can be treated as a name for non-standard uses of formulae. | ### Details Not all of the options work in the same way that they do in S and not all are implemented. ### Value A `<terms.object>` object is returned. The object itself is the re-ordered (unless `keep.order = TRUE`) formula. In all cases variables within an interaction term in the formula are re-ordered by the ordering of the `"variables"` attribute, which is the order in which the variables occur in the formula. ### See Also `<terms>`, `<terms.object>` r None `reorder.factor` Reorder Levels of a Factor -------------------------------------------- ### Description `reorder` is a generic function. The `"default"` method treats its first argument as a categorical variable, and reorders its levels based on the values of a second variable, usually numeric. ### Usage ``` reorder(x, ...) ## Default S3 method: reorder(x, X, FUN = mean, ..., order = is.ordered(x)) ``` ### Arguments | | | | --- | --- | | `x` | an atomic vector, usually a `[factor](../../base/html/factor)` (possibly ordered). The vector is treated as a categorical variable whose levels will be reordered. If `x` is not a factor, its unique values will be used as the implicit levels. | | `X` | a vector of the same length as `x`, whose subset of values for each unique level of `x` determines the eventual order of that level. | | `FUN` | a `[function](../../base/html/function)` that takes a vector as its first argument and returns a scalar, to be applied to each subset of `X` determined by the levels of `x`. | | `...` | optional: extra arguments supplied to `FUN` | | `order` | logical, whether return value will be an ordered factor rather than a factor. | ### Details This, as `<relevel>()`, is a special case of simply calling `[factor](../../base/html/factor)(x, levels = levels(x)[....])`. ### Value A factor or an ordered factor (depending on the value of `order`), with the order of the levels determined by `FUN` applied to `X` grouped by `x`. The levels are ordered such that the values returned by `FUN` are in increasing order. Empty levels will be dropped. Additionally, the values of `FUN` applied to the subsets of `X` (in the original order of the levels of `x`) are returned as the `"scores"` attribute. 
### Author(s) Deepayan Sarkar [[email protected]](mailto:[email protected]) ### See Also `<reorder.dendrogram>`, `[levels](../../base/html/levels)`, `<relevel>`. ### Examples ``` require(graphics) bymedian <- with(InsectSprays, reorder(spray, count, median)) boxplot(count ~ bymedian, data = InsectSprays, xlab = "Type of spray", ylab = "Insect count", main = "InsectSprays data", varwidth = TRUE, col = "lightgray") ```
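The `"scores"` attribute mentioned under ‘Value’ can be inspected directly, as a small follow-on to the example above:

```
attr(bymedian, "scores") # per-level medians, in the original level order
levels(bymedian) # levels now sorted by increasing median count
```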
r None `friedman.test` Friedman Rank Sum Test --------------------------------------- ### Description Performs a Friedman rank sum test with unreplicated blocked data. ### Usage ``` friedman.test(y, ...) ## Default S3 method: friedman.test(y, groups, blocks, ...) ## S3 method for class 'formula' friedman.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `y` | either a numeric vector of data values, or a data matrix. | | `groups` | a vector giving the group for the corresponding elements of `y` if this is a vector; ignored if `y` is a matrix. If not a factor object, it is coerced to one. | | `blocks` | a vector giving the block for the corresponding elements of `y` if this is a vector; ignored if `y` is a matrix. If not a factor object, it is coerced to one. | | `formula` | a formula of the form `a ~ b | c`, where `a`, `b` and `c` give the data values and corresponding groups and blocks, respectively. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details `friedman.test` can be used for analyzing unreplicated complete block designs (i.e., there is exactly one observation in `y` for each combination of levels of `groups` and `blocks`) where the normality assumption may be violated. The null hypothesis is that apart from an effect of `blocks`, the location parameter of `y` is the same in each of the `groups`. If `y` is a matrix, `groups` and `blocks` are obtained from the column and row indices, respectively. `NA`'s are not allowed in `groups` or `blocks`; if `y` contains `NA`'s, corresponding blocks are removed. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of Friedman's chi-squared statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | the character string `"Friedman rank sum test"`. | | `data.name` | a character string giving the names of the data. | ### References Myles Hollander and Douglas A. Wolfe (1973), *Nonparametric Statistical Methods.* New York: John Wiley & Sons. Pages 139–146. ### See Also `<quade.test>`. ### Examples ``` ## Hollander & Wolfe (1973), p. 140ff. ## Comparison of three methods ("round out", "narrow angle", and ## "wide angle") for rounding first base. For each of 22 players ## and each of the three methods, the average time of two runs from a point on ## the first base line 35ft from home plate to a point 15ft short of ## second base is recorded. 
RoundingTimes <- matrix(c(5.40, 5.50, 5.55, 5.85, 5.70, 5.75, 5.20, 5.60, 5.50, 5.55, 5.50, 5.40, 5.90, 5.85, 5.70, 5.45, 5.55, 5.60, 5.40, 5.40, 5.35, 5.45, 5.50, 5.35, 5.25, 5.15, 5.00, 5.85, 5.80, 5.70, 5.25, 5.20, 5.10, 5.65, 5.55, 5.45, 5.60, 5.35, 5.45, 5.05, 5.00, 4.95, 5.50, 5.50, 5.40, 5.45, 5.55, 5.50, 5.55, 5.55, 5.35, 5.45, 5.50, 5.55, 5.50, 5.45, 5.25, 5.65, 5.60, 5.40, 5.70, 5.65, 5.55, 6.30, 6.30, 6.25), nrow = 22, byrow = TRUE, dimnames = list(1 : 22, c("Round Out", "Narrow Angle", "Wide Angle"))) friedman.test(RoundingTimes) ## => strong evidence against the null that the methods are equivalent ## with respect to speed wb <- aggregate(warpbreaks$breaks, by = list(w = warpbreaks$wool, t = warpbreaks$tension), FUN = mean) wb friedman.test(wb$x, wb$w, wb$t) friedman.test(x ~ w | t, data = wb) ``` r None `deviance` Model Deviance -------------------------- ### Description Returns the deviance of a fitted model object. ### Usage ``` deviance(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which the deviance is desired. | | `...` | additional optional argument. | ### Details This is a generic function which can be used to extract deviances for fitted models. Consult the individual modeling functions for details on how to use this function. ### Value The value of the deviance extracted from the object `object`. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `<df.residual>`, `[extractAIC](extractaic)`, `<glm>`, `<lm>`. r None `Binomial` The Binomial Distribution ------------------------------------- ### Description Density, distribution function, quantile function and random generation for the binomial distribution with parameters `size` and `prob`. This is conventionally interpreted as the number of ‘successes’ in `size` trials. ### Usage ``` dbinom(x, size, prob, log = FALSE) pbinom(q, size, prob, lower.tail = TRUE, log.p = FALSE) qbinom(p, size, prob, lower.tail = TRUE, log.p = FALSE) rbinom(n, size, prob) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `size` | number of trials (zero or more). | | `prob` | probability of success on each trial. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The binomial distribution with `size` *= n* and `prob` *= p* has density *p(x) = choose(n, x) p^x (1-p)^(n-x)* for *x = 0, …, n*. Note that binomial *coefficients* can be computed by `[choose](../../base/html/special)` in **R**. If an element of `x` is not integer, the result of `dbinom` is zero, with a warning. *p(x)* is computed using Loader's algorithm, see the reference below. The quantile is defined as the smallest value *x* such that *F(x) ≥ p*, where *F* is the distribution function. ### Value `dbinom` gives the density, `pbinom` gives the distribution function, `qbinom` gives the quantile function and `rbinom` generates random deviates. If `size` is not an integer, `NaN` is returned. The length of the result is determined by `n` for `rbinom`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. 
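The quantile definition above (the smallest *x* with *F(x) ≥ p*) can be verified directly; a minimal sketch:

```
p <- 0.3; size <- 10; prob <- 0.5
x <- qbinom(p, size, prob)
stopifnot(pbinom(x, size, prob) >= p, # F(x) >= p, and
          pbinom(x - 1, size, prob) < p) # x is the smallest such value
```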
### Source For `dbinom` a saddle-point expansion is used: see Catherine Loader (2000). *Fast and Accurate Computation of Binomial Probabilities*; available as <https://www.r-project.org/doc/reports/CLoader-dbinom-2002.pdf> `pbinom` uses `[pbeta](beta)`. `qbinom` uses the Cornish–Fisher Expansion to include a skewness correction to a normal approximation, followed by a search. `rbinom` (for `size < .Machine$integer.max`) is based on Kachitvichyanukul, V. and Schmeiser, B. W. (1988) Binomial random variate generation. *Communications of the ACM*, **31**, 216–222. For larger values it uses inversion. ### See Also [Distributions](distributions) for other standard distributions, including `[dnbinom](negbinomial)` for the negative binomial, and `[dpois](family)` for the Poisson distribution. ### Examples ``` require(graphics) # Compute P(45 < X < 55) for X Binomial(100,0.5) sum(dbinom(46:54, 100, 0.5)) ## Using "log = TRUE" for an extended range : n <- 2000 k <- seq(0, n, by = 20) plot (k, dbinom(k, n, pi/10, log = TRUE), type = "l", ylab = "log density", main = "dbinom(*, log=TRUE) is better than log(dbinom(*))") lines(k, log(dbinom(k, n, pi/10)), col = "red", lwd = 2) ## extreme points are omitted since dbinom gives 0. mtext("dbinom(k, log=TRUE)", adj = 0) mtext("extended range", adj = 0, line = -1, font = 4) mtext("log(dbinom(k))", col = "red", adj = 1) ``` r None `Hypergeometric` The Hypergeometric Distribution ------------------------------------------------- ### Description Density, distribution function, quantile function and random generation for the hypergeometric distribution. ### Usage ``` dhyper(x, m, n, k, log = FALSE) phyper(q, m, n, k, lower.tail = TRUE, log.p = FALSE) qhyper(p, m, n, k, lower.tail = TRUE, log.p = FALSE) rhyper(nn, m, n, k) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles representing the number of white balls drawn without replacement from an urn which contains both black and white balls. | | `m` | the number of white balls in the urn. | | `n` | the number of black balls in the urn. | | `k` | the number of balls drawn from the urn, hence must be in *0,1,…, m+n*. | | `p` | probability, it must be between 0 and 1. | | `nn` | number of observations. If `length(nn) > 1`, the length is taken to be the number required. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The hypergeometric distribution is used for sampling *without* replacement. The density of this distribution with parameters `m`, `n` and `k` (named *Np*, *N-Np*, and *n*, respectively in the reference below, where *N := m+n* is also used in other references) is given by *p(x) = choose(m, x) choose(n, k-x) / choose(m+n, k)* for *x = 0, …, k*. Note that *p(x)* is non-zero only for *max(0, k-n) <= x <= min(k, m)*. With *p := m/(m+n)* (hence *Np = N \times p* in the reference's notation), the first two moments are mean *E[X] = μ = k p* and variance *Var(X) = k p (1 - p) \* (m+n-k)/(m+n-1),* which shows the closeness to the Binomial*(k,p)* (where the hypergeometric has smaller variance unless *k = 1*). The quantile is defined as the smallest value *x* such that *F(x) ≥ p*, where *F* is the distribution function. 
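The moment formulas above can be checked against the exact distribution; an illustrative sketch using the same *m*, *n*, *k* as in the Examples below:

```
m <- 10; n <- 7; k <- 8
x <- 0:k; p <- m/(m + n)
mu <- sum(x * dhyper(x, m, n, k))
stopifnot(all.equal(mu, k * p), # mean k*p
          all.equal(sum((x - mu)^2 * dhyper(x, m, n, k)), # variance formula
                    k * p * (1 - p) * (m + n - k)/(m + n - 1)))
```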
In `rhyper()`, if one of *m, n, k* exceeds `[.Machine](../../base/html/zmachine)$integer.max`, currently the equivalent of `qhyper(runif(nn), m,n,k)` is used, which is comparably slow; a binomial approximation may be considerably more efficient. ### Value `dhyper` gives the density, `phyper` gives the distribution function, `qhyper` gives the quantile function, and `rhyper` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rhyper`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Source `dhyper` computes via binomial probabilities, using code contributed by Catherine Loader (see `[dbinom](family)`). `phyper` is based on calculating `dhyper` and `phyper(...)/dhyper(...)` (as a summation), based on ideas of Ian Smith and Morten Welinder. `qhyper` is based on inversion (of an earlier `phyper()` algorithm). `rhyper` is based on a corrected version of Kachitvichyanukul, V. and Schmeiser, B. (1985). Computer generation of hypergeometric random variates. *Journal of Statistical Computation and Simulation*, **22**, 127–145. ### References Johnson, N. L., Kotz, S., and Kemp, A. W. (1992) *Univariate Discrete Distributions*, Second Edition. New York: Wiley. ### See Also [Distributions](distributions) for other standard distributions. ### Examples ``` m <- 10; n <- 7; k <- 8 x <- 0:(k+1) rbind(phyper(x, m, n, k), dhyper(x, m, n, k)) all(phyper(x, m, n, k) == cumsum(dhyper(x, m, n, k))) # FALSE ## but error is very small: signif(phyper(x, m, n, k) - cumsum(dhyper(x, m, n, k)), digits = 3) ``` r None `read.ftable` Manipulate Flat Contingency Tables ------------------------------------------------- ### Description Read, write and coerce ‘flat’ (contingency) tables, aka `ftable`s. ### Usage ``` read.ftable(file, sep = "", quote = "\"", row.var.names, col.vars, skip = 0) write.ftable(x, file = "", quote = TRUE, append = FALSE, digits = getOption("digits"), ...) ## S3 method for class 'ftable' format(x, quote = TRUE, digits = getOption("digits"), method = c("non.compact", "row.compact", "col.compact", "compact"), lsep = " | ", justify = c("left", "right"), ...) ## S3 method for class 'ftable' print(x, digits = getOption("digits"), ...) ``` ### Arguments | | | | --- | --- | | `file` | either a character string naming a file or a `[connection](../../base/html/connections)` which the data are to be read from or written to. `""` indicates input from the console for reading and output to the console for writing. | | `sep` | the field separator string. Values on each line of the file are separated by this string. | | `quote` | a character string giving the set of quoting characters for `read.ftable`; to disable quoting altogether, use `quote=""`. For `write.ftable`, a logical indicating whether strings in the data will be surrounded by double quotes. | | `row.var.names` | a character vector with the names of the row variables, in case these cannot be determined automatically. | | `col.vars` | a list giving the names and levels of the column variables, in case these cannot be determined automatically. | | `skip` | the number of lines of the data file to skip before beginning to read data. | | `x` | an object of class `"ftable"`. | | `append` | logical. 
If `TRUE` and `file` is the name of a file (and not a connection or `"|cmd"`), the output from `write.ftable` is appended to the file. If `FALSE`, the contents of `file` will be overwritten. | | `digits` | an integer giving the number of significant digits to use for (the cell entries of) `x`. | | `method` | string specifying how the `"ftable"` object is formatted (and printed if used as in `write.ftable()` or the `print` method). Can be abbreviated. Available methods are (see the examples): "non.compact" the default representation of an `"ftable"` object. "row.compact" a row-compact version without empty cells below the column labels. "col.compact" a column-compact version without empty cells to the right of the row labels. "compact" a row- and column-compact version. This may imply a row and a column label sharing the same cell. They are then separated by the string `lsep`. | | `lsep` | only for `method = "compact"`, the separation string for row and column labels. | | `justify` | `[character](../../base/html/character)` vector of length (one or) two, specifying how string justification should happen in `format(..)`, first for the labels, then the table entries. | | `...` | further arguments to be passed to or from methods; for `write()` and `print()`, notably arguments such as `method`, passed to `format()`. | ### Details `read.ftable` reads in a flat-like contingency table from a file. If the file contains the written representation of a flat table (more precisely, a header with all information on names and levels of column variables, followed by a line with the names of the row variables), no further arguments are needed. Similarly, flat tables with only one column variable the name of which is the only entry in the first line are handled automatically. Other variants can be dealt with by skipping all header information using `skip`, and providing the names of the row variables and the names and levels of the column variable using `row.var.names` and `col.vars`, respectively. See the examples below. Note that flat tables are characterized by their ‘ragged’ display of row (and maybe also column) labels. If the full grid of levels of the row variables is given, one should instead use `[read.table](../../utils/html/read.table)` to read in the data, and create the contingency table from this using `<xtabs>`. `write.ftable` writes a flat table to a file, which is useful for generating ‘pretty’ ASCII representations of contingency tables. Different versions are available via the `method` argument, which may be useful, for example, for constructing LaTeX tables. ### References Agresti, A. (1990) *Categorical data analysis*. New York: Wiley. ### See Also `<ftable>` for more information on flat contingency tables. ### Examples ``` ## Agresti (1990), page 157, Table 5.8. ## Not in ftable standard format, but o.k. file <- tempfile() cat(" Intercourse\n", "Race Gender Yes No\n", "White Male 43 134\n", " Female 26 149\n", "Black Male 29 23\n", " Female 22 36\n", file = file) file.show(file) ft1 <- read.ftable(file) ft1 unlink(file) ## Agresti (1990), page 297, Table 8.16. ## Almost o.k., but misses the name of the row variable. 
file <- tempfile() cat(" \"Tonsil Size\"\n", " \"Not Enl.\" \"Enl.\" \"Greatly Enl.\"\n", "Noncarriers 497 560 269\n", "Carriers 19 29 24\n", file = file) file.show(file) ft <- read.ftable(file, skip = 2, row.var.names = "Status", col.vars = list("Tonsil Size" = c("Not Enl.", "Enl.", "Greatly Enl."))) ft unlink(file) ft22 <- ftable(Titanic, row.vars = 2:1, col.vars = 4:3) write.ftable(ft22, quote = FALSE) # is the same as print(ft22)#method="non.compact" is default print(ft22, method="row.compact") print(ft22, method="col.compact") print(ft22, method="compact") ## using 'justify' and 'quote' : format(ftable(wool + tension ~ breaks, warpbreaks), justify = "none", quote = FALSE) ``` r None `cophenetic` Cophenetic Distances for a Hierarchical Clustering ---------------------------------------------------------------- ### Description Computes the cophenetic distances for a hierarchical clustering. ### Usage ``` cophenetic(x) ## Default S3 method: cophenetic(x) ## S3 method for class 'dendrogram' cophenetic(x) ``` ### Arguments | | | | --- | --- | | `x` | an R object representing a hierarchical clustering. For the default method, an object of class `"<hclust>"` or with a method for `<as.hclust>()` such as `"[agnes](../../cluster/html/agnes)"` in package [cluster](https://CRAN.R-project.org/package=cluster). | ### Details The cophenetic distance between two observations that have been clustered is defined to be the intergroup dissimilarity at which the two observations are first combined into a single cluster. Note that this distance has many ties and restrictions. It can be argued that a dendrogram is an appropriate summary of some data if the correlation between the original distances and the cophenetic distances is high. Otherwise, it should simply be viewed as the description of the output of the clustering algorithm. `cophenetic` is a generic function. Support for classes which represent hierarchical clusterings (total indexed hierarchies) can be added by providing an `<as.hclust>()` or, more directly, a `cophenetic()` method for such a class. The method for objects of class `"<dendrogram>"` requires that all leaves of the dendrogram object have non-null labels. ### Value An object of class `"dist"`. ### Author(s) Robert Gentleman ### References Sneath, P.H.A. and Sokal, R.R. (1973) *Numerical Taxonomy: The Principles and Practice of Numerical Classification*, p. 278 ff; Freeman, San Francisco. ### See Also `<dist>`, `<hclust>` ### Examples ``` require(graphics) d1 <- dist(USArrests) hc <- hclust(d1, "ave") d2 <- cophenetic(hc) cor(d1, d2) # 0.7659 ## Example from Sneath & Sokal, Fig. 5-29, p.279 d0 <- c(1,3.8,4.4,5.1, 4,4.2,5, 2.6,5.3, 5.4) attributes(d0) <- list(Size = 5, diag = TRUE) class(d0) <- "dist" names(d0) <- letters[1:5] d0 utils::str(upgma <- hclust(d0, method = "average")) plot(upgma, hang = -1) # (d.coph <- cophenetic(upgma)) cor(d0, d.coph) # 0.9911 ```
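A small additional sketch: since the cophenetic distance of two observations is the height at which they are first combined, the smallest cophenetic distance should equal the first merge height (for monotone methods such as `"average"`):

```
## Sketch: minimum cophenetic distance == height of the first merge
hc <- hclust(dist(USArrests), "ave")
stopifnot(all.equal(min(cophenetic(hc)), hc$height[1]))
```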
r None `quantile` Sample Quantiles ---------------------------- ### Description The generic function `quantile` produces sample quantiles corresponding to the given probabilities. The smallest observation corresponds to a probability of 0 and the largest to a probability of 1. ### Usage ``` quantile(x, ...) ## Default S3 method: quantile(x, probs = seq(0, 1, 0.25), na.rm = FALSE, names = TRUE, type = 7, digits = 7, ...) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector whose sample quantiles are wanted, or an object of a class for which a method has been defined (see also ‘details’). `[NA](../../base/html/na)` and `NaN` values are not allowed in numeric vectors unless `na.rm` is `TRUE`. | | `probs` | numeric vector of probabilities with values in *[0,1]*. (Values up to 2e-14 outside that range are accepted and moved to the nearby endpoint.) | | `na.rm` | logical; if true, any `[NA](../../base/html/na)` and `NaN`'s are removed from `x` before the quantiles are computed. | | `names` | logical; if true, the result has a `[names](../../base/html/names)` attribute. Set to `FALSE` for speedup with many `probs`. | | `type` | an integer between 1 and 9 selecting one of the nine quantile algorithms detailed below to be used. | | `digits` | used only when `names` is true: the precision to use when formatting the percentages. In **R** versions up to 4.0.x, this had been set to `max(2, getOption("digits"))`, internally. | | `...` | further arguments passed to or from other methods. | ### Details A vector of length `length(probs)` is returned; if `names = TRUE`, it has a `[names](../../base/html/names)` attribute. `[NA](../../base/html/na)` and `[NaN](../../base/html/is.finite)` values in `probs` are propagated to the result. The default method works with classed objects sufficiently like numeric vectors that `sort` and (not needed by types 1 and 3) addition of elements and multiplication by a number work correctly. Note that as this is in a namespace, the copy of `sort` in base will be used, not some S4 generic of that name. Also note that there is no check on the ‘correctly’, and so e.g. `quantile` can be applied to complex vectors which (apart from ties) will be ordered on their real parts. There is a method for the date-time classes (see `"[POSIXt](../../base/html/datetimeclasses)"`). Types 1 and 3 can be used for class `"[Date](../../base/html/dates)"` and for ordered factors. ### Types `quantile` returns estimates of underlying distribution quantiles based on one or two order statistics from the supplied elements in `x` at probabilities in `probs`. One of the nine quantile algorithms discussed in Hyndman and Fan (1996), selected by `type`, is employed. All sample quantiles are defined as weighted averages of consecutive order statistics. Sample quantiles of type *i* are defined by: *Q[i](p) = (1 - γ) x[j] + γ x[j+1],* where *1 ≤ i ≤ 9*, *(j-m)/n ≤ p < (j-m+1)/n*, *x[j]* is the *j*th order statistic, *n* is the sample size, the value of *γ* is a function of *j = floor(np + m)* and *g = np + m - j*, and *m* is a constant determined by the sample quantile type. **Discontinuous sample quantile types 1, 2, and 3** For types 1, 2 and 3, *Q[i](p)* is a discontinuous function of *p*, with *m = 0* when *i = 1* and *i = 2*, and *m = -1/2* when *i = 3*. Type 1 Inverse of empirical distribution function. *γ = 0* if *g = 0*, and 1 otherwise. Type 2 Similar to type 1 but with averaging at discontinuities. *γ = 0.5* if *g = 0*, and 1 otherwise (SAS default, see Wicklin (2017)). 
Type 3 Nearest even order statistic (SAS default till ca. 2010). *γ = 0* if *g = 0* and *j* is even, and 1 otherwise. **Continuous sample quantile types 4 through 9** For types 4 through 9, *Q[i](p)* is a continuous function of *p*, with *γ = g* and *m* given below. The sample quantiles can be obtained equivalently by linear interpolation between the points *(p[k],x[k])* where *x[k]* is the *k*th order statistic. Specific expressions for *p[k]* are given below. Type 4 *m = 0*. *p[k] = k / n*. That is, linear interpolation of the empirical cdf. Type 5 *m = 1/2*. *p[k] = (k - 0.5) / n*. That is a piecewise linear function where the knots are the values midway through the steps of the empirical cdf. This is popular amongst hydrologists. Type 6 *m = p*. *p[k] = k / (n + 1)*. Thus *p[k] = E[F(x[k])]*. This is used by Minitab and by SPSS. Type 7 *m = 1-p*. *p[k] = (k - 1) / (n - 1)*. In this case, *p[k] = mode[F(x[k])]*. This is used by S. Type 8 *m = (p+1)/3*. *p[k] = (k - 1/3) / (n + 1/3)*. Then *p[k] =~ median[F(x[k])]*. The resulting quantile estimates are approximately median-unbiased regardless of the distribution of `x`. Type 9 *m = p/4 + 3/8*. *p[k] = (k - 3/8) / (n + 1/4)*. The resulting quantile estimates are approximately unbiased for the expected order statistics if `x` is normally distributed. Further details are provided in Hyndman and Fan (1996) who recommended type 8. The default method is type 7, as used by S and by **R** < 2.0.0. Makkonen argues for type 6, as already proposed by Weibull in 1939. The Wikipedia page contains further information about availability of these 9 types in software. ### Author(s) of the version used in **R** >= 2.0.0, Ivan Frohne and Rob J Hyndman. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Hyndman, R. J. and Fan, Y. (1996) Sample quantiles in statistical packages, *American Statistician* **50**, 361–365. doi: [10.2307/2684934](https://doi.org/10.2307/2684934). Wicklin, R. (2017) Sample quantiles: A comparison of 9 definitions; SAS Blog. <https://blogs.sas.com/content/iml/2017/05/24/definitions-sample-quantiles.html> Wikipedia: <https://en.wikipedia.org/wiki/Quantile#Estimating_quantiles_from_a_sample> ### See Also `<ecdf>` for empirical distributions of which `quantile` is an inverse; `[boxplot.stats](../../grdevices/html/boxplot.stats)` and `<fivenum>` for computing other versions of quartiles, etc. ### Examples ``` quantile(x <- rnorm(1001)) # Extremes & Quartiles by default quantile(x, probs = c(0.1, 0.5, 1, 2, 5, 10, 50, NA)/100) ### Compare different types quantAll <- function(x, prob, ...) t(vapply(1:9, function(typ) quantile(x, probs = prob, type = typ, ...), quantile(x, prob, type=1, ...))) p <- c(0.1, 0.5, 1, 2, 5, 10, 50)/100 signif(quantAll(x, p), 4) ## 0% and 100% are equal to min(), max() for all types: stopifnot(t(quantAll(x, prob=0:1)) == range(x)) ## for complex numbers: z <- complex(real = x, imaginary = -10*x) signif(quantAll(z, p), 4) ``` r None `weighted.mean` Weighted Arithmetic Mean ----------------------------------------- ### Description Compute a weighted mean. ### Usage ``` weighted.mean(x, w, ...) ## Default S3 method: weighted.mean(x, w, ..., na.rm = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | an object containing the values whose weighted mean is to be computed. | | `w` | a numerical vector of weights the same length as `x` giving the weights to use for elements of `x`. | | `...` | arguments to be passed to or from methods. 
| | `na.rm` | a logical value indicating whether `NA` values in `x` should be stripped before the computation proceeds. | ### Details This is a generic function and methods can be defined for the first argument `x`: apart from the default methods there are methods for the date-time classes `"POSIXct"`, `"POSIXlt"`, `"difftime"` and `"Date"`. The default method will work for any numeric-like object for which `[`, multiplication, division and `[sum](../../base/html/sum)` have suitable methods, including complex vectors. If `w` is missing then all elements of `x` are given the same weight, otherwise the weights are coerced to numeric by `[as.numeric](../../base/html/numeric)` and normalized to sum to one (if possible: if their sum is zero or infinite the value is likely to be `NaN`). Missing values in `w` are not handled specially and so give a missing value as the result. However, zero weights *are* handled specially and the corresponding `x` values are omitted from the sum. ### Value For the default method, a length-one numeric vector. ### See Also `[mean](../../base/html/mean)` ### Examples ``` ## GPA from Siegel 1994 wt <- c(5, 5, 4, 1)/15 x <- c(3.7,3.3,3.5,2.8) xm <- weighted.mean(x, wt) ``` r None `nlm` Non-Linear Minimization ------------------------------ ### Description This function carries out a minimization of the function `f` using a Newton-type algorithm. See the references for details. ### Usage ``` nlm(f, p, ..., hessian = FALSE, typsize = rep(1, length(p)), fscale = 1, print.level = 0, ndigit = 12, gradtol = 1e-6, stepmax = max(1000 * sqrt(sum((p/typsize)^2)), 1000), steptol = 1e-6, iterlim = 100, check.analyticals = TRUE) ``` ### Arguments | | | | --- | --- | | `f` | the function to be minimized, returning a single numeric value. This should be a function with first argument a vector of the length of `p` followed by any other arguments specified by the `...` argument. If the function value has an attribute called `gradient` or both `gradient` and `hessian` attributes, these will be used in the calculation of updated parameter values. Otherwise, numerical derivatives are used. `<deriv>` returns a function with suitable `gradient` attribute and optionally a `hessian` attribute. | | `p` | starting parameter values for the minimization. | | `...` | additional arguments to be passed to `f`. | | `hessian` | if `TRUE`, the hessian of `f` at the minimum is returned. | | `typsize` | an estimate of the size of each parameter at the minimum. | | `fscale` | an estimate of the size of `f` at the minimum. | | `print.level` | this argument determines the level of printing which is done during the minimization process. The default value of `0` means that no printing occurs, a value of `1` means that initial and final details are printed and a value of 2 means that full tracing information is printed. | | `ndigit` | the number of significant digits in the function `f`. | | `gradtol` | a positive scalar giving the tolerance at which the scaled gradient is considered close enough to zero to terminate the algorithm. The scaled gradient is a measure of the relative change in `f` in each direction `p[i]` divided by the relative change in `p[i]`. | | `stepmax` | a positive scalar which gives the maximum allowable scaled step length. `stepmax` is used to prevent steps which would cause the optimization function to overflow, to prevent the algorithm from leaving the area of interest in parameter space, or to detect divergence in the algorithm. 
`stepmax` would be chosen small enough to prevent the first two of these occurrences, but should be larger than any anticipated reasonable step. | | `steptol` | A positive scalar providing the minimum allowable relative step length. | | `iterlim` | a positive integer specifying the maximum number of iterations to be performed before the program is terminated. | | `check.analyticals` | a logical scalar specifying whether the analytic gradients and Hessians, if they are supplied, should be checked against numerical derivatives at the initial parameter values. This can help detect incorrectly formulated gradients or Hessians. | ### Details Note that arguments after `...` must be matched exactly. If a gradient or hessian is supplied but evaluates to the wrong mode or length, it will be ignored if `check.analyticals = TRUE` (the default) with a warning. The hessian is not even checked unless the gradient is present and passes the sanity checks. The C code for the “perturbed” Cholesky, `choldc()`, has had a bug in all **R** versions before 3.4.1. From the three methods available in the original source, we always use method “1”, which is line search. The functions supplied should always return finite (including not `NA` and not `NaN`) values: for the function value itself non-finite values are replaced by the maximum positive value with a warning. ### Value A list containing the following components: | | | | --- | --- | | `minimum` | the value of the estimated minimum of `f`. | | `estimate` | the point at which the minimum value of `f` is obtained. | | `gradient` | the gradient at the estimated minimum of `f`. | | `hessian` | the hessian at the estimated minimum of `f` (if requested). | | `code` | an integer indicating why the optimization process terminated. 1: relative gradient is close to zero, current iterate is probably solution. 2: successive iterates within tolerance, current iterate is probably solution. 3: last global step failed to locate a point lower than `estimate`. Either `estimate` is an approximate local minimum of the function or `steptol` is too small. 4: iteration limit exceeded. 5: maximum step size `stepmax` exceeded five consecutive times. Either the function is unbounded below, becomes asymptotic to a finite value from above in some direction or `stepmax` is too small. | | `iterations` | the number of iterations performed. | ### Source The current code is by Saikat DebRoy and the R Core team, using a C translation of Fortran code by Richard H. Jones. ### References Dennis, J. E. and Schnabel, R. B. (1983). *Numerical Methods for Unconstrained Optimization and Nonlinear Equations*. Prentice-Hall, Englewood Cliffs, NJ. Schnabel, R. B., Koontz, J. E. and Weiss, B. E. (1985). A modular system of algorithms for unconstrained minimization. *ACM Transactions on Mathematical Software*, **11**, 419–440. doi: [10.1145/6187.6192](https://doi.org/10.1145/6187.6192). ### See Also `<optim>` and `<nlminb>`. `[constrOptim](constroptim)` for constrained optimization, `<optimize>` for one-dimensional minimization and `<uniroot>` for root finding. `<deriv>` to calculate analytical derivatives. For nonlinear regression, `<nls>` may be better. 
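A minimal sketch (hypothetical objective function) of the `deriv()` route mentioned above for supplying an analytic `gradient` attribute:

```
## Sketch: deriv() builds a function whose value carries a "gradient"
fg <- deriv(~ (x1 - 3)^2 + (x2 - 5)^2, c("x1", "x2"),
            function.arg = c("x1", "x2"))
f <- function(p) fg(p[1], p[2])
nlm(f, c(10, 10)) # should converge to (3, 5)
```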
### Examples ``` f <- function(x) sum((x-1:length(x))^2) nlm(f, c(10,10)) nlm(f, c(10,10), print.level = 2) utils::str(nlm(f, c(5), hessian = TRUE)) f <- function(x, a) sum((x-a)^2) nlm(f, c(10,10), a = c(3,5)) f <- function(x, a) { res <- sum((x-a)^2) attr(res, "gradient") <- 2*(x-a) res } nlm(f, c(10,10), a = c(3,5)) ## more examples, including the use of derivatives. ## Not run: demo(nlm) ``` r None `tsdiag` Diagnostic Plots for Time-Series Fits ----------------------------------------------- ### Description A generic function to plot time-series diagnostics. ### Usage ``` tsdiag(object, gof.lag, ...) ``` ### Arguments | | | | --- | --- | | `object` | a fitted time-series model | | `gof.lag` | the maximum number of lags for a Portmanteau goodness-of-fit test | | `...` | further arguments to be passed to particular methods | ### Details This is a generic function. It will generally plot the residuals, often standardized, the autocorrelation function of the residuals, and the p-values of a Portmanteau test for all lags up to `gof.lag`. The methods for `<arima>` and `[StructTS](structts)` objects plot residuals scaled by the estimate of their (individual) variance, and use the Ljung–Box version of the portmanteau test. ### Value None. Diagnostics are plotted. ### See Also `<arima>`, `[StructTS](structts)`, `[Box.test](box.test)` ### Examples ``` require(graphics) fit <- arima(lh, c(1,0,0)) tsdiag(fit) ## see also examples(arima) (fit <- StructTS(log10(JohnsonJohnson), type = "BSM")) tsdiag(fit) ``` r None `mantelhaen.test` Cochran-Mantel-Haenszel Chi-Squared Test for Count Data -------------------------------------------------------------------------- ### Description Performs a Cochran-Mantel-Haenszel chi-squared test of the null that two nominal variables are conditionally independent in each stratum, assuming that there is no three-way interaction. ### Usage ``` mantelhaen.test(x, y = NULL, z = NULL, alternative = c("two.sided", "less", "greater"), correct = TRUE, exact = FALSE, conf.level = 0.95) ``` ### Arguments | | | | --- | --- | | `x` | either a 3-dimensional contingency table in array form where each dimension is at least 2 and the last dimension corresponds to the strata, or a factor object with at least 2 levels. | | `y` | a factor object with at least 2 levels; ignored if `x` is an array. | | `z` | a factor object with at least 2 levels identifying to which stratum the corresponding elements in `x` and `y` belong; ignored if `x` is an array. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"`, `"greater"` or `"less"`. You can specify just the initial letter. Only used in the 2 by 2 by *K* case. | | `correct` | a logical indicating whether to apply continuity correction when computing the test statistic. Only used in the 2 by 2 by *K* case. | | `exact` | a logical indicating whether the Mantel-Haenszel test or the exact conditional test (given the strata margins) should be computed. Only used in the 2 by 2 by *K* case. | | `conf.level` | confidence level for the returned confidence interval. Only used in the 2 by 2 by *K* case. | ### Details If `x` is an array, each dimension must be at least 2, and the entries should be nonnegative integers. `NA`'s are not allowed. Otherwise, `x`, `y` and `z` must have the same length. Triples containing `NA`'s are removed. All variables must take at least two different values. 
### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | Only present if no exact test is performed. In the classical case of a 2 by 2 by *K* table (i.e., of dichotomous underlying variables), the Mantel-Haenszel chi-squared statistic; otherwise, the generalized Cochran-Mantel-Haenszel statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic (*1* in the classical case). Only present if no exact test is performed. | | `p.value` | the p-value of the test. | | `conf.int` | a confidence interval for the common odds ratio. Only present in the 2 by 2 by *K* case. | | `estimate` | an estimate of the common odds ratio. If an exact test is performed, the conditional Maximum Likelihood Estimate is given; otherwise, the Mantel-Haenszel estimate. Only present in the 2 by 2 by *K* case. | | `null.value` | the common odds ratio under the null of independence, `1`. Only present in the 2 by 2 by *K* case. | | `alternative` | a character string describing the alternative hypothesis. Only present in the 2 by 2 by *K* case. | | `method` | a character string indicating the method employed, and whether or not continuity correction was used. | | `data.name` | a character string giving the names of the data. | ### Note The asymptotic distribution is only valid if there is no three-way interaction. In the classical 2 by 2 by *K* case, this is equivalent to the conditional odds ratios in each stratum being identical. Currently, no inference on homogeneity of the odds ratios is performed. See also the example below. ### References Alan Agresti (1990). *Categorical data analysis*. New York: Wiley. Pages 230–235. Alan Agresti (2002). *Categorical data analysis* (second edition). New York: Wiley. ### Examples ``` ## Agresti (1990), pages 231--237, Penicillin and Rabbits ## Investigation of the effectiveness of immediately injected or 1.5 ## hours delayed penicillin in protecting rabbits against a lethal ## injection with beta-hemolytic streptococci. Rabbits <- array(c(0, 0, 6, 5, 3, 0, 3, 6, 6, 2, 0, 4, 5, 6, 1, 0, 2, 5, 0, 0), dim = c(2, 2, 5), dimnames = list( Delay = c("None", "1.5h"), Response = c("Cured", "Died"), Penicillin.Level = c("1/8", "1/4", "1/2", "1", "4"))) Rabbits ## Classical Mantel-Haenszel test mantelhaen.test(Rabbits) ## => p = 0.047, some evidence for higher cure rate of immediate ## injection ## Exact conditional test mantelhaen.test(Rabbits, exact = TRUE) ## => p = 0.040 ## Exact conditional test for one-sided alternative of a higher ## cure rate for immediate injection mantelhaen.test(Rabbits, exact = TRUE, alternative = "greater") ## => p = 0.020 ## UC Berkeley Student Admissions mantelhaen.test(UCBAdmissions) ## No evidence for association between admission and gender ## when adjusted for department. However, apply(UCBAdmissions, 3, function(x) (x[1,1]*x[2,2])/(x[1,2]*x[2,1])) ## This suggests that the assumption of homogeneous (conditional) ## odds ratios may be violated. The traditional approach would be ## using the Woolf test for interaction: woolf <- function(x) { x <- x + 1 / 2 k <- dim(x)[3] or <- apply(x, 3, function(x) (x[1,1]*x[2,2])/(x[1,2]*x[2,1])) w <- apply(x, 3, function(x) 1 / sum(1 / x)) 1 - pchisq(sum(w * (log(or) - weighted.mean(log(or), w)) ^ 2), k - 1) } woolf(UCBAdmissions) ## => p = 0.003, indicating that there is significant heterogeneity. ## (And hence the Mantel-Haenszel test cannot be used.) ## Agresti (2002), p. 287f and p. 297. 
## Job Satisfaction example. Satisfaction <- as.table(array(c(1, 2, 0, 0, 3, 3, 1, 2, 11, 17, 8, 4, 2, 3, 5, 2, 1, 0, 0, 0, 1, 3, 0, 1, 2, 5, 7, 9, 1, 1, 3, 6), dim = c(4, 4, 2), dimnames = list(Income = c("<5000", "5000-15000", "15000-25000", ">25000"), "Job Satisfaction" = c("V_D", "L_S", "M_S", "V_S"), Gender = c("Female", "Male")))) ## (Satisfaction categories abbreviated for convenience.) ftable(. ~ Gender + Income, Satisfaction) ## Table 7.8 in Agresti (2002), p. 288. mantelhaen.test(Satisfaction) ## See Table 7.12 in Agresti (2002), p. 297. ```
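A further sketch (hypothetical data, not from Agresti): the factor form `x, y, z` described under ‘Details’ should be equivalent to tabulating into a three-way array first:

```
## Sketch: factor form versus array form (hypothetical data)
set.seed(1)
treat <- factor(sample(c("A", "B"), 100, replace = TRUE))
resp  <- factor(sample(c("yes", "no"), 100, replace = TRUE))
strat <- factor(sample(1:3, 100, replace = TRUE))
t1 <- mantelhaen.test(treat, resp, strat)
t2 <- mantelhaen.test(table(treat, resp, strat))
all.equal(t1$statistic, t2$statistic) # expect TRUE
```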
r None `print.ts` Printing and Formatting of Time-Series Objects ---------------------------------------------------------- ### Description Notably for calendar related time series objects, `[format](../../base/html/format)` and `[print](../../base/html/print)` methods showing years, months and or quarters respectively. ### Usage ``` ## S3 method for class 'ts' print(x, calendar, ...) .preformat.ts(x, calendar, ...) ``` ### Arguments | | | | --- | --- | | `x` | a time series object. | | `calendar` | enable/disable the display of information about month names, quarter names or year when printing. The default is `TRUE` for a frequency of 4 or 12, `FALSE` otherwise. | | `...` | additional arguments to `[print](../../base/html/print)` (or `[format](../../base/html/format)` methods). | ### Details The `[print](../../base/html/print)` method for `"ts"` objects prints a header (basically of `<tsp>(x)`), if `calendar` is false, and then prints the result of `.preformat.ts(x, *)`, which is typically a `[matrix](../../base/html/matrix)` with `[rownames](../../base/html/colnames)` built from the calendar times where applicable. ### See Also `[print](../../base/html/print)`, `<ts>`. ### Examples ``` print(ts(1:10, frequency = 7, start = c(12, 2)), calendar = TRUE) print(sunsp.1 <- window(sunspot.month, end=c(1756, 12))) m <- .preformat.ts(sunsp.1) # a character matrix ``` r None `formula.nls` Extract Model Formula from nls Object ---------------------------------------------------- ### Description Returns the model used to fit `object`. ### Usage ``` ## S3 method for class 'nls' formula(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object inheriting from class `"nls"`, representing a nonlinear least squares fit. | | `...` | further arguments passed to or from other methods. | ### Value a formula representing the model used to obtain `object`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `<formula>` ### Examples ``` fm1 <- nls(circumference ~ A/(1+exp((B-age)/C)), Orange, start = list(A = 160, B = 700, C = 350)) formula(fm1) ``` r None `summary.lm` Summarizing Linear Model Fits ------------------------------------------- ### Description `summary` method for class `"lm"`. ### Usage ``` ## S3 method for class 'lm' summary(object, correlation = FALSE, symbolic.cor = FALSE, ...) ## S3 method for class 'summary.lm' print(x, digits = max(3, getOption("digits") - 3), symbolic.cor = x$symbolic.cor, signif.stars = getOption("show.signif.stars"), ...) ``` ### Arguments | | | | --- | --- | | `object` | an object of class `"lm"`, usually, a result of a call to `<lm>`. | | `x` | an object of class `"summary.lm"`, usually, a result of a call to `summary.lm`. | | `correlation` | logical; if `TRUE`, the correlation matrix of the estimated parameters is returned and printed. | | `digits` | the number of significant digits to use when printing. | | `symbolic.cor` | logical. If `TRUE`, print the correlations in a symbolic form (see `<symnum>`) rather than as numbers. | | `signif.stars` | logical. If `TRUE`, ‘significance stars’ are printed for each coefficient. | | `...` | further arguments passed to or from other methods. | ### Details `print.summary.lm` tries to be smart about formatting the coefficients, standard errors, etc. and additionally gives ‘significance stars’ if `signif.stars` is `TRUE`. Aliased coefficients are omitted in the returned object but restored by the `print` method. 
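As a minimal numerical sketch (using the standard `cars` data), the `sigma` component defined under ‘Value’ below matches its definition in the unweighted case:

```
## Sketch: summary()$sigma^2 == RSS / df.residual
fit <- lm(dist ~ speed, data = cars)
stopifnot(all.equal(summary(fit)$sigma^2,
                    sum(residuals(fit)^2) / df.residual(fit)))
```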
Correlations are printed to two decimal places (or symbolically): to see the actual correlations print `summary(object)$correlation` directly. ### Value The function `summary.lm` computes and returns a list of summary statistics of the fitted linear model given in `object`, using the components (list elements) `"call"` and `"terms"` from its argument, plus | | | | --- | --- | | `residuals` | the *weighted* residuals, the usual residuals rescaled by the square root of the weights specified in the call to `lm`. | | `coefficients` | a *p x 4* matrix with columns for the estimated coefficient, its standard error, t-statistic and corresponding (two-sided) p-value. Aliased coefficients are omitted. | | `aliased` | named logical vector showing if the original coefficients are aliased. | | `sigma` | the square root of the estimated variance of the random error *σ^2 = 1/(n-p) Sum(w[i] R[i]^2),* where *R[i]* is the *i*-th residual, `residuals[i]`. | | `df` | degrees of freedom, a 3-vector *(p, n-p, p\*)*, the first being the number of non-aliased coefficients, the last being the total number of coefficients. | | `fstatistic` | (for models including non-intercept terms) a 3-vector with the value of the F-statistic with its numerator and denominator degrees of freedom. | | `r.squared` | *R^2*, the ‘fraction of variance explained by the model’, *R^2 = 1 - Sum(R[i]^2) / Sum((y[i]- y\*)^2),* where *y\** is the mean of *y[i]* if there is an intercept and zero otherwise. | | `adj.r.squared` | the above *R^2* statistic ‘*adjusted*’, penalizing for higher *p*. | | `cov.unscaled` | a *p x p* matrix of (unscaled) covariances of the *coef[j]*, *j=1, …, p*. | | `correlation` | the correlation matrix corresponding to the above `cov.unscaled`, if `correlation = TRUE` is specified. | | `symbolic.cor` | (only if `correlation` is true.) The value of the argument `symbolic.cor`. | | `na.action` | from `object`, if present there. | ### See Also The model fitting function `<lm>`, `[summary](../../base/html/summary)`. Function `<coef>` will extract the matrix of coefficients with standard errors, t-statistics and p-values. ### Examples ``` ##-- Continuing the lm(.) example: coef(lm.D90) # the bare coefficients sld90 <- summary(lm.D90 <- lm(weight ~ group -1)) # omitting intercept sld90 coef(sld90) # much more ## model with *aliased* coefficient: lm.D9. <- lm(weight ~ group + I(group != "Ctl")) Sm.D9. <- summary(lm.D9.) Sm.D9. # shows the NA NA NA NA line stopifnot(length(cc <- coef(lm.D9.)) == 3, is.na(cc[3]), dim(coef(Sm.D9.)) == c(2,4), Sm.D9.$df == c(2, 18, 3)) ``` r None `addmargins` Puts Arbitrary Margins on Multidimensional Tables or Arrays ------------------------------------------------------------------------- ### Description For a given table one can specify which of the classifying factors to expand by one or more levels to hold margins to be calculated. One may for example form sums and means over the first dimension and medians over the second. The resulting table will then have two extra levels for the first dimension and one extra level for the second. The default is to sum over all margins in the table. Other possibilities may give results that depend on the order in which the margins are computed. This is flagged in the printed output from the function. ### Usage ``` addmargins(A, margin = seq_along(dim(A)), FUN = sum, quiet = FALSE) ``` ### Arguments | | | | --- | --- | | `A` | table or array. The function uses the presence of the `"dim"` and `"dimnames"` attributes of `A`. 
| | `margin` | vector of dimensions over which to form margins. Margins are formed in the order in which dimensions are specified in `margin`. | | `FUN` | `[list](../../base/html/list)` of the same length as `margin`, each element of the list being either a `[function](../../base/html/function)` or a list of functions. In the length-1 case, can be a function instead of a list of one. Names of the list elements will appear as levels in dimnames of the result. Unnamed list elements will have names constructed: the name of a function or a constructed name based on the position in the table. | | `quiet` | logical which suppresses the message telling the order in which the margins were computed. | ### Details If the functions used to form margins are not commutative the result depends on the order in which margins are computed. Annotation of margins is done via naming the `FUN` list. ### Value A `[table](../../base/html/table)` or `[array](../../base/html/array)` with the same number of dimensions as `A`, but with extra levels of the dimensions mentioned in `margin`. The number of levels added to each dimension is the length of the entries in `FUN`. A message with the order of computation of margins is printed. ### Author(s) Bendix Carstensen, Steno Diabetes Center & Department of Biostatistics, University of Copenhagen, <https://BendixCarstensen.com>, autumn 2003. Margin naming enhanced by Duncan Murdoch. ### See Also `[table](../../base/html/table)`, `<ftable>`, `[margin.table](../../base/html/marginsums)`. ### Examples ``` Aye <- sample(c("Yes", "Si", "Oui"), 177, replace = TRUE) Bee <- sample(c("Hum", "Buzz"), 177, replace = TRUE) Sea <- sample(c("White", "Black", "Red", "Dead"), 177, replace = TRUE) (A <- table(Aye, Bee, Sea)) (aA <- addmargins(A)) ftable(A) ftable(aA) # Non-commutative functions - note differences between resulting tables: ftable( addmargins(A, c(3, 1), FUN = list(list(Min = min, Max = max), Sum = sum))) ftable( addmargins(A, c(1, 3), FUN = list(Sum = sum, list(Min = min, Max = max)))) # Weird function needed to return the N when computing percentages sqsm <- function(x) sum(x)^2/100 B <- table(Sea, Bee) round(sweep(addmargins(B, 1, list(list(All = sum, N = sqsm))), 2, apply(B, 2, sum)/100, "/"), 1) round(sweep(addmargins(B, 2, list(list(All = sum, N = sqsm))), 1, apply(B, 1, sum)/100, "/"), 1) # A total over Bee requires formation of the Bee-margin first: mB <- addmargins(B, 2, FUN = list(list(Total = sum))) round(ftable(sweep(addmargins(mB, 1, list(list(All = sum, N = sqsm))), 2, apply(mB, 2, sum)/100, "/")), 1) ## Zero.Printing table+margins: set.seed(1) x <- sample( 1:7, 20, replace = TRUE) y <- sample( 1:7, 20, replace = TRUE) tx <- addmargins( table(x, y) ) print(tx, zero.print = ".") ``` r None `plot.ts` Plotting Time-Series Objects --------------------------------------- ### Description Plotting method for objects inheriting from class `"ts"`. ### Usage ``` ## S3 method for class 'ts' plot(x, y = NULL, plot.type = c("multiple", "single"), xy.labels, xy.lines, panel = lines, nc, yax.flip = FALSE, mar.multi = c(0, 5.1, 0, if(yax.flip) 5.1 else 2.1), oma.multi = c(6, 0, 5, 0), axes = TRUE, ...) ## S3 method for class 'ts' lines(x, ...) ``` ### Arguments | | | | --- | --- | | `x, y` | time series objects, usually inheriting from class `"ts"`. | | `plot.type` | for multivariate time series, should the series be plotted separately (with a common time axis) or on a single plot? Can be abbreviated. 
| | `xy.labels` | logical, indicating if `[text](../../graphics/html/text)()` labels should be used for an x-y plot, *or* character, supplying a vector of labels to be used. The default is to label for up to 150 points, and not for more. | | `xy.lines` | logical, indicating if `[lines](../../graphics/html/lines)` should be drawn for an x-y plot. Defaults to the value of `xy.labels` if that is logical, otherwise to `TRUE`. | | `panel` | a `function(x, col, bg, pch, type, ...)` which gives the action to be carried out in each panel of the display for `plot.type = "multiple"`. The default is `lines`. | | `nc` | the number of columns to use when `type = "multiple"`. Defaults to 1 for up to 4 series, otherwise to 2. | | `yax.flip` | logical indicating if the y-axis (ticks and numbering) should flip from side 2 (left) to 4 (right) from series to series when `type = "multiple"`. | | `mar.multi, oma.multi` | the (default) `[par](../../graphics/html/par)` settings for `plot.type = "multiple"`. Modify with care! | | `axes` | logical indicating if x- and y- axes should be drawn. | | `...` | additional graphical arguments, see `[plot](../../graphics/html/plot.default)`, `[plot.default](../../graphics/html/plot.default)` and `[par](../../graphics/html/par)`. | ### Details If `y` is missing, this function creates a time series plot, for multivariate series of one of two kinds depending on `plot.type`. If `y` is present, both `x` and `y` must be univariate, and a scatter plot `y ~ x` will be drawn, enhanced by using `[text](../../graphics/html/text)` if `xy.labels` is `[TRUE](../../base/html/logical)` or `character`, and `[lines](../../graphics/html/lines)` if `xy.lines` is `TRUE`. ### See Also `<ts>` for basic time series construction and access functionality. ### Examples ``` require(graphics) ## Multivariate z <- ts(matrix(rt(200 * 8, df = 3), 200, 8), start = c(1961, 1), frequency = 12) plot(z, yax.flip = TRUE) plot(z, axes = FALSE, ann = FALSE, frame.plot = TRUE, mar.multi = c(0,0,0,0), oma.multi = c(1,1,5,1)) title("plot(ts(..), axes=FALSE, ann=FALSE, frame.plot=TRUE, mar..., oma...)") z <- window(z[,1:3], end = c(1969,12)) plot(z, type = "b") # multiple plot(z, plot.type = "single", lty = 1:3, col = 4:2) ## A phase plot: plot(nhtemp, lag(nhtemp, 1), cex = .8, col = "blue", main = "Lag plot of New Haven temperatures") ## xy.lines and xy.labels are FALSE for large series: plot(lag(sunspots, 1), sunspots, pch = ".") SMI <- EuStockMarkets[, "SMI"] plot(lag(SMI, 1), SMI, pch = ".") plot(lag(SMI, 20), SMI, pch = ".", log = "xy", main = "4 weeks lagged SMI stocks -- log scale", xy.lines = TRUE) ``` r None `anova` Anova Tables --------------------- ### Description Compute analysis of variance (or deviance) tables for one or more fitted model objects. ### Usage ``` anova(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object containing the results returned by a model fitting function (e.g., `lm` or `glm`). | | `...` | additional objects of the same type. | ### Value This (generic) function returns an object of class `anova`. These objects represent analysis-of-variance and analysis-of-deviance tables. When given a single argument it produces a table which tests whether the model terms are significant. When given a sequence of objects, `anova` tests the models against one another in the order specified. The print method for `anova` objects prints tables in a ‘pretty’ form. ### Warning The comparison between two or more models will only be valid if they are fitted to the same dataset. 
This may be a problem if there are missing values and **R**'s default of `na.action = na.omit` is used. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*, Wadsworth & Brooks/Cole. ### See Also `[coefficients](coef)`, `<effects>`, `<fitted.values>`, `<residuals>`, `[summary](../../base/html/summary)`, `[drop1](add1)`, `<add1>`. r None `ecdf` Empirical Cumulative Distribution Function -------------------------------------------------- ### Description Compute an empirical cumulative distribution function, with several methods for plotting, printing and computing with such an “ecdf” object. ### Usage ``` ecdf(x) ## S3 method for class 'ecdf' plot(x, ..., ylab="Fn(x)", verticals = FALSE, col.01line = "gray70", pch = 19) ## S3 method for class 'ecdf' print(x, digits= getOption("digits") - 2, ...) ## S3 method for class 'ecdf' summary(object, ...) ## S3 method for class 'ecdf' quantile(x, ...) ``` ### Arguments | | | | --- | --- | | `x, object` | numeric vector of the observations for `ecdf`; for the methods, an object inheriting from class `"ecdf"`. | | `...` | arguments to be passed to subsequent methods, e.g., `<plot.stepfun>` for the `plot` method. | | `ylab` | label for the y-axis. | | `verticals` | see `<plot.stepfun>`. | | `col.01line` | numeric or character specifying the color of the horizontal lines at y = 0 and 1, see `[colors](../../grdevices/html/colors)`. | | `pch` | plotting character. | | `digits` | number of significant digits to use, see `[print](../../base/html/print)`. | ### Details The e.c.d.f. (empirical cumulative distribution function) *Fn* is a step function with jumps *i/n* at observation values, where *i* is the number of tied observations at that value. Missing values are ignored. For observations *x = (x1, x2, …, xn)*, *Fn* is the fraction of observations less than or equal to *t*, i.e., *Fn(t) = #{xi <= t}/n = 1/n sum(i=1,n) Indicator(xi <= t).* The function `plot.ecdf`, which implements the `[plot](../../graphics/html/plot.default)` method for `ecdf` objects, is implemented via a call to `<plot.stepfun>`; see its documentation. ### Value For `ecdf`, a function of class `"ecdf"`, inheriting from the `"<stepfun>"` class, and hence inheriting a `[knots](stepfun)()` method. For the `summary` method, a summary of the knots of `object` with a `"header"` attribute. The `<quantile>(obj, ...)` method computes the same quantiles as `quantile(x, ...)` would where `x` is the original sample. ### Note The objects of class `"ecdf"` are not intended to be used for permanent storage and may change structure between versions of **R** (and did at **R** 3.0.0). They can usually be re-created by ``` eval(attr(old_obj, "call"), environment(old_obj)) ``` since the data used is stored as part of the object's environment. ### Author(s) Martin Maechler; fixes and new features by other R-core members. ### See Also `<stepfun>`, the more general class of step functions, `<approxfun>` and `<splinefun>`. ### Examples ``` ##-- Simple didactical ecdf example : x <- rnorm(12) Fn <- ecdf(x) Fn # a *function* Fn(x) # returns the percentiles for x tt <- seq(-2, 2, by = 0.1) 12 * Fn(tt) # Fn is a 'simple' function {with values k/12} summary(Fn) ##--> see below for graphics knots(Fn) # the unique data values {12 of them if there were no ties} y <- round(rnorm(12), 1); y[3] <- y[1] Fn12 <- ecdf(y) Fn12 knots(Fn12) # unique values (always less than 12!) summary(Fn12) summary.stepfun(Fn12) ## Advanced: What's inside the function closure? 
ls(environment(Fn12)) ## "f" "method" "na.rm" "nobs" "x" "y" "yleft" "yright" utils::ls.str(environment(Fn12)) stopifnot(all.equal(quantile(Fn12), quantile(y))) ###----------------- Plotting -------------------------- require(graphics) op <- par(mfrow = c(3, 1), mgp = c(1.5, 0.8, 0), mar = .1+c(3,3,2,1)) F10 <- ecdf(rnorm(10)) summary(F10) plot(F10) plot(F10, verticals = TRUE, do.points = FALSE) plot(Fn12 , lwd = 2) ; mtext("lwd = 2", adj = 1) xx <- unique(sort(c(seq(-3, 2, length.out = 201), knots(Fn12)))) lines(xx, Fn12(xx), col = "blue") abline(v = knots(Fn12), lty = 2, col = "gray70") plot(xx, Fn12(xx), type = "o", cex = .1) #- plot.default {ugly} plot(Fn12, col.hor = "red", add = TRUE) #- plot method abline(v = knots(Fn12), lty = 2, col = "gray70") ## luxury plot plot(Fn12, verticals = TRUE, col.points = "blue", col.hor = "red", col.vert = "bisque") ##-- this works too (automatic call to ecdf(.)): plot.ecdf(rnorm(24)) title("via simple plot.ecdf(x)", adj = 1) par(op) ``` r None `quade.test` Quade Test ------------------------ ### Description Performs a Quade test with unreplicated blocked data. ### Usage ``` quade.test(y, ...) ## Default S3 method: quade.test(y, groups, blocks, ...) ## S3 method for class 'formula' quade.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `y` | either a numeric vector of data values, or a data matrix. | | `groups` | a vector giving the group for the corresponding elements of `y` if this is a vector; ignored if `y` is a matrix. If not a factor object, it is coerced to one. | | `blocks` | a vector giving the block for the corresponding elements of `y` if this is a vector; ignored if `y` is a matrix. If not a factor object, it is coerced to one. | | `formula` | a formula of the form `a ~ b | c`, where `a`, `b` and `c` give the data values and corresponding groups and blocks, respectively. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details `quade.test` can be used for analyzing unreplicated complete block designs (i.e., there is exactly one observation in `y` for each combination of levels of `groups` and `blocks`) where the normality assumption may be violated. The null hypothesis is that apart from an effect of `blocks`, the location parameter of `y` is the same in each of the `groups`. If `y` is a matrix, `groups` and `blocks` are obtained from the column and row indices, respectively. `NA`'s are not allowed in `groups` or `blocks`; if `y` contains `NA`'s, corresponding blocks are removed. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of Quade's F statistic. | | `parameter` | a vector with the numerator and denominator degrees of freedom of the approximate F distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | the character string `"Quade test"`. | | `data.name` | a character string giving the names of the data. | ### References D. Quade (1979), Using weighted rankings in the analysis of complete blocks with additive block effects. 
*Journal of the American Statistical Association* **74**, 680–683. William J. Conover (1999), *Practical nonparametric statistics*. New York: John Wiley & Sons. Pages 373–380. ### See Also `<friedman.test>`. ### Examples ``` ## Conover (1999, p. 375f): ## Numbers of five brands of a new hand lotion sold in seven stores ## during one week. y <- matrix(c( 5, 4, 7, 10, 12, 1, 3, 1, 0, 2, 16, 12, 22, 22, 35, 5, 4, 3, 5, 4, 10, 9, 7, 13, 10, 19, 18, 28, 37, 58, 10, 7, 6, 8, 7), nrow = 7, byrow = TRUE, dimnames = list(Store = as.character(1:7), Brand = LETTERS[1:5])) y (qTst <- quade.test(y)) ## Show equivalence of different versions of test : utils::str(dy <- as.data.frame(as.table(y))) qT. <- quade.test(Freq ~ Brand|Store, data = dy) qT.$data.name <- qTst$data.name stopifnot(all.equal(qTst, qT., tolerance = 1e-15)) dys <- dy[order(dy[,"Freq"]),] qTs <- quade.test(Freq ~ Brand|Store, data = dys) qTs$data.name <- qTst$data.name stopifnot(all.equal(qTst, qTs, tolerance = 1e-15)) ```
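One more sketch (random hypothetical data): the matrix form and the explicit `groups`/`blocks` vector form should give the same statistic:

```
## Sketch: matrix form versus vector form (hypothetical data)
set.seed(7)
y <- matrix(rnorm(28), nrow = 7) # 7 blocks x 4 groups
g <- factor(col(y)); b <- factor(row(y))
stopifnot(all.equal(quade.test(y)$statistic,
                    quade.test(c(y), g, b)$statistic))
```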
r None `Poisson` The Poisson Distribution ----------------------------------- ### Description Density, distribution function, quantile function and random generation for the Poisson distribution with parameter `lambda`. ### Usage ``` dpois(x, lambda, log = FALSE) ppois(q, lambda, lower.tail = TRUE, log.p = FALSE) qpois(p, lambda, lower.tail = TRUE, log.p = FALSE) rpois(n, lambda) ``` ### Arguments | | | | --- | --- | | `x` | vector of (non-negative integer) quantiles. | | `q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of random values to return. | | `lambda` | vector of (non-negative) means. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The Poisson distribution has density *p(x) = λ^x exp(-λ)/x!* for *x = 0, 1, 2, …* . The mean and variance are *E(X) = Var(X) = λ*. Note that *λ = 0* is really a limit case (setting *0^0 = 1*) resulting in a point mass at *0*, see also the example. If an element of `x` is not integer, the result of `dpois` is zero, with a warning. *p(x)* is computed using Loader's algorithm, see the reference in `[dbinom](family)`. The quantile is right continuous: `qpois(p, lambda)` is the smallest integer *x* such that *P(X ≤ x) ≥ p*. Setting `lower.tail = FALSE` allows one to get much more precise results when the default, `lower.tail = TRUE`, would return 1; see the example below. ### Value `dpois` gives the (log) density, `ppois` gives the (log) distribution function, `qpois` gives the quantile function, and `rpois` generates random deviates. Invalid `lambda` will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rpois`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. `rpois` returns a vector of type [integer](../../base/html/integer) unless generated values exceed the maximum representable integer, in which case `[double](../../base/html/double)` values are returned (since **R** version 4.0.0). ### Source `dpois` uses C code contributed by Catherine Loader (see `[dbinom](family)`). `ppois` uses `pgamma`. `qpois` uses the Cornish–Fisher Expansion to include a skewness correction to a normal approximation, followed by a search. `rpois` uses Ahrens, J. H. and Dieter, U. (1982). Computer generation of Poisson deviates from modified normal distributions. *ACM Transactions on Mathematical Software*, **8**, 163–179. ### See Also [Distributions](distributions) for other standard distributions, including `[dbinom](family)` for the binomial and `[dnbinom](negbinomial)` for the negative binomial distribution. `<poisson.test>`. 
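As a minimal sketch (arbitrary example values, an addition to this page), the closed-form density under ‘Details’ can be checked directly:

```
## Sketch: dpois() versus lambda^x * exp(-lambda) / factorial(x)
lambda <- 2.5; x <- 0:10
stopifnot(all.equal(dpois(x, lambda),
                    lambda^x * exp(-lambda) / factorial(x)))
```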
### Examples ``` require(graphics) -log(dpois(0:7, lambda = 1) * gamma(1+ 0:7)) # == 1 Ni <- rpois(50, lambda = 4); table(factor(Ni, 0:max(Ni))) 1 - ppois(10*(15:25), lambda = 100) # becomes 0 (cancellation) ppois(10*(15:25), lambda = 100, lower.tail = FALSE) # no cancellation par(mfrow = c(2, 1)) x <- seq(-0.01, 5, 0.01) plot(x, ppois(x, 1), type = "s", ylab = "F(x)", main = "Poisson(1) CDF") plot(x, pbinom(x, 100, 0.01), type = "s", ylab = "F(x)", main = "Binomial(100, 0.01) CDF") ## The (limit) case lambda = 0 : stopifnot(identical(dpois(0,0), 1), identical(ppois(0,0), 1), identical(qpois(1,0), 0)) ``` r None `effects` Effects from Fitted Model ------------------------------------ ### Description Returns (orthogonal) effects from a fitted model, usually a linear model. This is a generic function, but currently only has methods for objects inheriting from classes `"lm"` and `"glm"`. ### Usage ``` effects(object, ...) ## S3 method for class 'lm' effects(object, set.sign = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | an **R** object; typically, the result of a model fitting function such as `<lm>`. | | `set.sign` | logical. If `TRUE`, the sign of the effects corresponding to coefficients in the model will be set to agree with the signs of the corresponding coefficients, otherwise the sign is arbitrary. | | `...` | arguments passed to or from other methods. | ### Details For a linear model fitted by `<lm>` or `<aov>`, the effects are the uncorrelated single-degree-of-freedom values obtained by projecting the data onto the successive orthogonal subspaces generated by the QR decomposition during the fitting process. The first *r* (the rank of the model) are associated with coefficients and the remainder span the space of residuals (but are not associated with particular residuals). Empty models do not have effects. ### Value A (named) numeric vector of the same length as `<residuals>`, or a matrix if there were multiple responses in the fitted model, in either case of class `"coef"`. The first *r* rows are labelled by the corresponding coefficients, and the remaining rows are unlabelled. Note that in rank-deficient models the corresponding coefficients will be in a different order if pivoting occurred. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `<coef>` ### Examples ``` y <- c(1:3, 7, 5) x <- c(1:3, 6:7) ( ee <- effects(lm(y ~ x)) ) c( round(ee - effects(lm(y+10 ~ I(x-3.8))), 3) ) # just the first is different ``` r None `xtabs` Cross Tabulation ------------------------- ### Description Create a contingency table (optionally a sparse matrix) from cross-classifying factors, usually contained in a data frame, using a formula interface. ### Usage ``` xtabs(formula = ~., data = parent.frame(), subset, sparse = FALSE, na.action, addNA = FALSE, exclude = if(!addNA) c(NA, NaN), drop.unused.levels = FALSE) ## S3 method for class 'xtabs' print(x, na.print = "", ...) ``` ### Arguments | | | | --- | --- | | `formula` | a <formula> object with the cross-classifying variables (separated by `+`) on the right hand side (or an object which can be coerced to a formula). Interactions are not allowed. On the left hand side, one may optionally give a vector or a matrix of counts; in the latter case, the columns are interpreted as corresponding to the levels of a variable. This is useful if the data have already been tabulated, see the examples below. 
| | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `sparse` | logical specifying if the result should be a *sparse* matrix, i.e., inheriting from `[sparseMatrix](../../matrix/html/sparsematrix-class)`. Only works for two factors (since there are no higher-order sparse array classes yet). | | `na.action` | a `[function](../../base/html/function)` which indicates what should happen when the data contain `[NA](../../base/html/na)`s. If unspecified, and `addNA` is true, this is set to `[na.pass](na.fail)`. When it is `[na.omit](na.fail)` and `formula` has a left hand side (with counts), `[sum](../../base/html/sum)(*, na.rm = TRUE)` is used instead of `sum(*)` for the counts. | | `addNA` | logical indicating if `NA`s should get a separate level and be counted, using `[addNA](../../base/html/factor)(*, ifany=TRUE)` and setting the default for `na.action` to `na.pass`. | | `exclude` | a vector of values to be excluded when forming the set of levels of the classifying factors. | | `drop.unused.levels` | a logical indicating whether to drop unused levels in the classifying factors. If this is `FALSE` and there are unused levels, the table will contain zero marginals, and a subsequent chi-squared test for independence of the factors will not work. | | `x` | an object of class `"xtabs"`. | | `na.print` | character string (or `NULL`) indicating how `[NA](../../base/html/na)` are printed. The default (`""`) does not show `NA`s clearly, and `na.print = "NA"` may be advisable instead. | | `...` | further arguments passed to or from other methods. | ### Details There is a `summary` method for contingency table objects created by `table` or `xtabs(*, sparse = FALSE)`, which gives basic information and performs a chi-squared test for independence of factors (note that the function `<chisq.test>` currently only handles 2-d tables). If a left hand side is given in `formula`, its entries are simply summed over the cells corresponding to the right hand side; this also works if the lhs does not give counts. For variables in `formula` which are factors, `exclude` must be specified explicitly; the default exclusions will not be used. In **R** versions before 3.4.0, e.g., when `na.action = na.pass`, sometimes zeroes (`0`) were returned instead of `NA`s. Note that when `addNA` is false as by default, and `na.action` is not specified (or set to `NULL`), in effect `na.action = getOption("na.action", default=na.omit)` is used; see also the examples. ### Value By default, when `sparse = FALSE`, a contingency table in array representation of S3 class `c("xtabs", "table")`, with a `"call"` attribute storing the matched call. When `sparse = TRUE`, a sparse numeric matrix, specifically an object of S4 class `[dgTMatrix](../../matrix/html/dgtmatrix-class)` from package [Matrix](https://CRAN.R-project.org/package=Matrix). ### See Also `[table](../../base/html/table)` for traditional cross-tabulation, and `[as.data.frame.table](../../base/html/table)` which is the inverse operation of `xtabs` (see the `DF` example below). `[sparseMatrix](../../matrix/html/sparsematrix-class)` on sparse matrices in package [Matrix](https://CRAN.R-project.org/package=Matrix). ### Examples ``` ## 'esoph' has the frequencies of cases and controls for all levels of ## the variables 'agegp', 'alcgp', and 'tobgp'.
xtabs(cbind(ncases, ncontrols) ~ ., data = esoph) ## Output is not really helpful ... flat tables are better: ftable(xtabs(cbind(ncases, ncontrols) ~ ., data = esoph)) ## In particular if we have fewer factors ... ftable(xtabs(cbind(ncases, ncontrols) ~ agegp, data = esoph)) ## This is already a contingency table in array form. DF <- as.data.frame(UCBAdmissions) ## Now 'DF' is a data frame with a grid of the factors and the counts ## in variable 'Freq'. DF ## Nice for taking margins ... xtabs(Freq ~ Gender + Admit, DF) ## And for testing independence ... summary(xtabs(Freq ~ ., DF)) ## with NA's DN <- DF; DN[cbind(6:9, c(1:2,4,1))] <- NA DN # 'Freq' is missing only for (Rejected, Female, B) tools::assertError(# 'na.fail' should fail : xtabs(Freq ~ Gender + Admit, DN, na.action=na.fail), verbose=TRUE) op <- options(na.action = "na.omit") # the "factory" default (xtabs(Freq ~ Gender + Admit, DN) -> xtD) noC <- function(O) `attr<-`(O, "call", NULL) ident_noC <- function(x,y) identical(noC(x), noC(y)) stopifnot(exprs = { ident_noC(xtD, xtabs(Freq ~ Gender + Admit, DN, na.action = na.omit)) ident_noC(xtD, xtabs(Freq ~ Gender + Admit, DN, na.action = NULL)) }) xtabs(Freq ~ Gender + Admit, DN, na.action = na.pass) ## The Female:Rejected combination has NA 'Freq' (and NA prints 'invisibly' as "") (xtNA <- xtabs(Freq ~ Gender + Admit, DN, addNA = TRUE)) # ==> count NAs ## show NA's better via na.print = ".." : print(xtNA, na.print= "NA") ## Create a nice display for the warp break data. warpbreaks$replicate <- rep_len(1:9, 54) ftable(xtabs(breaks ~ wool + tension + replicate, data = warpbreaks)) ### ---- Sparse Examples ---- if(require("Matrix")) withAutoprint({ ## similar to "nlme"s 'ergoStool' : d.ergo <- data.frame(Type = paste0("T", rep(1:4, 9*4)), Subj = gl(9, 4, 36*4)) xtabs(~ Type + Subj, data = d.ergo) # 4 replicates each set.seed(15) # a subset of cases: xtabs(~ Type + Subj, data = d.ergo[sample(36, 10), ], sparse = TRUE) ## Hypothetical two-level setup: inner <- factor(sample(letters[1:25], 100, replace = TRUE)) inout <- factor(sample(LETTERS[1:5], 25, replace = TRUE)) fr <- data.frame(inner = inner, outer = inout[as.integer(inner)]) xtabs(~ inner + outer, fr, sparse = TRUE) }) ``` r None `optim` General-purpose Optimization ------------------------------------- ### Description General-purpose optimization based on Nelder–Mead, quasi-Newton and conjugate-gradient algorithms. It includes an option for box-constrained optimization and simulated annealing. ### Usage ``` optim(par, fn, gr = NULL, ..., method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"), lower = -Inf, upper = Inf, control = list(), hessian = FALSE) optimHess(par, fn, gr = NULL, ..., control = list()) ``` ### Arguments | | | | --- | --- | | `par` | Initial values for the parameters to be optimized over. | | `fn` | A function to be minimized (or maximized), with first argument the vector of parameters over which minimization is to take place. It should return a scalar result. | | `gr` | A function to return the gradient for the `"BFGS"`, `"CG"` and `"L-BFGS-B"` methods. If it is `NULL`, a finite-difference approximation will be used. For the `"SANN"` method it specifies a function to generate a new candidate point. If it is `NULL` a default Gaussian Markov kernel is used. | | `...` | Further arguments to be passed to `fn` and `gr`. | | `method` | The method to be used. See ‘Details’. Can be abbreviated. 
| | `lower, upper` | Bounds on the variables for the `"L-BFGS-B"` method, or bounds in which to *search* for method `"Brent"`. | | `control` | a `[list](../../base/html/list)` of control parameters. See ‘Details’. | | `hessian` | Logical. Should a numerically differentiated Hessian matrix be returned? | ### Details Note that arguments after `...` must be matched exactly. By default `optim` performs minimization, but it will maximize if `control$fnscale` is negative. `optimHess` is an auxiliary function to compute the Hessian at a later stage if `hessian = TRUE` was forgotten. The default method is an implementation of that of Nelder and Mead (1965), which uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions. Method `"BFGS"` is a quasi-Newton method (also known as a variable metric algorithm), specifically that published simultaneously in 1970 by Broyden, Fletcher, Goldfarb and Shanno. This uses function values and gradients to build up a picture of the surface to be optimized. Method `"CG"` is a conjugate gradients method based on that by Fletcher and Reeves (1964) (but with the option of Polak–Ribiere or Beale–Sorenson updates). Conjugate gradient methods will generally be more fragile than the BFGS method, but as they do not store a matrix they may be successful in much larger optimization problems. Method `"L-BFGS-B"` is that of Byrd *et al.* (1995) which allows *box constraints*, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning. Nocedal and Wright (1999) is a comprehensive reference for the previous three methods. Method `"SANN"` is by default a variant of simulated annealing given in Belisle (1992). Simulated annealing belongs to the class of stochastic global optimization methods. It uses only function values but is relatively slow. It will also work for non-differentiable functions. This implementation uses the Metropolis function for the acceptance probability. By default the next candidate point is generated from a Gaussian Markov kernel with scale proportional to the actual temperature. If a function to generate a new candidate point is given, method `"SANN"` can also be used to solve combinatorial optimization problems. Temperatures are decreased according to the logarithmic cooling schedule as given in Belisle (1992, p. 890); specifically, the temperature is set to `temp / log(((t-1) %/% tmax)*tmax + exp(1))`, where `t` is the current iteration step and `temp` and `tmax` are specifiable via `control`, see below. Note that the `"SANN"` method depends critically on the settings of the control parameters. It is not a general-purpose method but can be very useful in getting to a good value on a very rough surface. Method `"Brent"` is for one-dimensional problems only, using `<optimize>()`. It can be useful in cases where `optim()` is used inside other functions where only `method` can be specified, such as in `[mle](../../stats4/html/mle)` from package stats4. Function `fn` can return `NA` or `Inf` if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of `fn`. (Except for method `"L-BFGS-B"` where the values should always be finite.) `optim` can be used recursively, and for a single parameter as well as many.
It also accepts a zero-length `par`, and just evaluates the function with that argument. The `control` argument is a list that can supply any of the following components: `trace` Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method `"L-BFGS-B"` there are six levels of tracing. (To understand exactly what these do see the source code: higher levels give more detail.) `fnscale` An overall scaling to be applied to the value of `fn` and `gr` during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on `fn(par)/fnscale`. `parscale` A vector of scaling values for the parameters. Optimization is performed on `par/parscale` and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value. Not used (nor needed) for `method = "Brent"`. `ndeps` A vector of step sizes for the finite-difference approximation to the gradient, on `par/parscale` scale. Defaults to `1e-3`. `maxit` The maximum number of iterations. Defaults to `100` for the derivative-based methods, and `500` for `"Nelder-Mead"`. For `"SANN"` `maxit` gives the total number of function evaluations: there is no other stopping criterion. Defaults to `10000`. `abstol` The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero. `reltol` Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of `reltol * (abs(val) + reltol)` at a step. Defaults to `sqrt(.Machine$double.eps)`, typically about `1e-8`. `alpha`, `beta`, `gamma` Scaling parameters for the `"Nelder-Mead"` method. `alpha` is the reflection factor (default 1.0), `beta` the contraction factor (0.5) and `gamma` the expansion factor (2.0). `REPORT` The frequency of reports for the `"BFGS"`, `"L-BFGS-B"` and `"SANN"` methods if `control$trace` is positive. Defaults to every 10 iterations for `"BFGS"` and `"L-BFGS-B"`, or every 100 temperatures for `"SANN"`. `warn.1d.NelderMead` a `[logical](../../base/html/logical)` indicating if the (default) `"Nelder-Mead"` method should signal a warning when used for one-dimensional minimization. As the warning is sometimes inappropriate, you can suppress it by setting this option to false. `type` for the conjugate-gradients method. Takes value `1` for the Fletcher–Reeves update, `2` for Polak–Ribiere and `3` for Beale–Sorenson. `lmm` is an integer giving the number of BFGS updates retained in the `"L-BFGS-B"` method. It defaults to `5`. `factr` controls the convergence of the `"L-BFGS-B"` method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is `1e7`, that is a tolerance of about `1e-8`. `pgtol` helps control the convergence of the `"L-BFGS-B"` method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed. `temp` controls the `"SANN"` method. It is the starting temperature for the cooling schedule. Defaults to `10`. `tmax` is the number of function evaluations at each temperature for the `"SANN"` method. Defaults to `10`. Any names given to `par` will be copied to the vectors passed to `fn` and `gr`. Note that no other attributes of `par` are copied over. The parameter vector passed to `fn` has special semantics and may be shared between calls: the function should not change or copy it.
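As a brief sketch of how the `control` components interact (the quadratic objective here is invented for illustration; `fnscale = -1` turns minimization into maximization):

```
## maximize f(x) = -(x - 2)^2: optim() minimizes fn(par)/fnscale = (x - 2)^2
f <- function(x) -(x - 2)^2
res <- optim(par = 0, fn = f, method = "BFGS",
             control = list(fnscale = -1, reltol = 1e-12))
res$par    # close to 2, the maximizer
res$value  # f at the optimum, close to 0
```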
### Value For `optim`, a list with components: | | | | --- | --- | | `par` | The best set of parameters found. | | `value` | The value of `fn` corresponding to `par`. | | `counts` | A two-element integer vector giving the number of calls to `fn` and `gr` respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to `fn` to compute a finite-difference approximation to the gradient. | | `convergence` | An integer code. `0` indicates successful completion (which is always the case for `"SANN"` and `"Brent"`). Possible error codes are: `1`, the iteration limit `maxit` had been reached; `10`, degeneracy of the Nelder–Mead simplex; `51`, a warning from the `"L-BFGS-B"` method (see component `message` for further details); `52`, an error from the `"L-BFGS-B"` method (see component `message` for further details). | | `message` | A character string giving any additional information returned by the optimizer, or `NULL`. | | `hessian` | Only if argument `hessian` is true. A symmetric matrix giving an estimate of the Hessian at the solution found. Note that this is the Hessian of the unconstrained problem even if the box constraints are active. | For `optimHess`, the description of the `hessian` component applies. ### Note `optim` will work with one-dimensional `par`s, but the default method does not work well (and will warn). Method `"Brent"` uses `<optimize>` and needs bounds to be available; `"BFGS"` often works well enough if not. ### Source The code for methods `"Nelder-Mead"`, `"BFGS"` and `"CG"` was based originally on Pascal code in Nash (1990) that was translated by `p2c` and then hand-optimized. Dr Nash has agreed that the code can be made freely available. The code for method `"L-BFGS-B"` is based on Fortran code by Zhu, Byrd, Lu-Chen and Nocedal obtained from Netlib (file ‘opt/lbfgs\_bcm.shar’: another version is in ‘toms/778’). The code for method `"SANN"` was contributed by A. Trapletti. ### References Belisle, C. J. P. (1992). Convergence theorems for a class of simulated annealing algorithms on *R^d*. *Journal of Applied Probability*, **29**, 885–895. doi: [10.2307/3214721](https://doi.org/10.2307/3214721). Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. *SIAM Journal on Scientific Computing*, **16**, 1190–1208. doi: [10.1137/0916069](https://doi.org/10.1137/0916069). Fletcher, R. and Reeves, C. M. (1964). Function minimization by conjugate gradients. *Computer Journal* **7**, 148–154. doi: [10.1093/comjnl/7.2.149](https://doi.org/10.1093/comjnl/7.2.149). Nash, J. C. (1990). *Compact Numerical Methods for Computers. Linear Algebra and Function Minimisation*. Adam Hilger. Nelder, J. A. and Mead, R. (1965). A simplex algorithm for function minimization. *Computer Journal*, **7**, 308–313. doi: [10.1093/comjnl/7.4.308](https://doi.org/10.1093/comjnl/7.4.308). Nocedal, J. and Wright, S. J. (1999). *Numerical Optimization*. Springer. ### See Also `<nlm>`, `<nlminb>`. `<optimize>` for one-dimensional minimization and `[constrOptim](constroptim)` for constrained optimization.
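A minimal sketch of inspecting the components listed above after a fit (toy least-squares objective, invented for illustration):

```
fit <- optim(c(-1, 1), function(p) sum((p - c(1, 2))^2), method = "BFGS")
fit$convergence  # 0: successful completion
fit$counts       # numbers of calls to fn and gr
fit$par          # near c(1, 2)
```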
### Examples ``` require(graphics) fr <- function(x) { ## Rosenbrock Banana function x1 <- x[1] x2 <- x[2] 100 * (x2 - x1 * x1)^2 + (1 - x1)^2 } grr <- function(x) { ## Gradient of 'fr' x1 <- x[1] x2 <- x[2] c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1), 200 * (x2 - x1 * x1)) } optim(c(-1.2,1), fr) (res <- optim(c(-1.2,1), fr, grr, method = "BFGS")) optimHess(res$par, fr, grr) optim(c(-1.2,1), fr, NULL, method = "BFGS", hessian = TRUE) ## These do not converge in the default number of steps optim(c(-1.2,1), fr, grr, method = "CG") optim(c(-1.2,1), fr, grr, method = "CG", control = list(type = 2)) optim(c(-1.2,1), fr, grr, method = "L-BFGS-B") flb <- function(x) { p <- length(x); sum(c(1, rep(4, p-1)) * (x - c(1, x[-p])^2)^2) } ## 25-dimensional box constrained optim(rep(3, 25), flb, NULL, method = "L-BFGS-B", lower = rep(2, 25), upper = rep(4, 25)) # par[24] is *not* at boundary ## "wild" function , global minimum at about -15.81515 fw <- function (x) 10*sin(0.3*x)*sin(1.3*x^2) + 0.00001*x^4 + 0.2*x+80 plot(fw, -50, 50, n = 1000, main = "optim() minimising 'wild function'") res <- optim(50, fw, method = "SANN", control = list(maxit = 20000, temp = 20, parscale = 20)) res ## Now improve locally {typically only by a small bit}: (r2 <- optim(res$par, fw, method = "BFGS")) points(r2$par, r2$value, pch = 8, col = "red", cex = 2) ## Combinatorial optimization: Traveling salesman problem library(stats) # normally loaded eurodistmat <- as.matrix(eurodist) distance <- function(sq) { # Target function sq2 <- embed(sq, 2) sum(eurodistmat[cbind(sq2[,2], sq2[,1])]) } genseq <- function(sq) { # Generate new candidate sequence idx <- seq(2, NROW(eurodistmat)-1) changepoints <- sample(idx, size = 2, replace = FALSE) tmp <- sq[changepoints[1]] sq[changepoints[1]] <- sq[changepoints[2]] sq[changepoints[2]] <- tmp sq } sq <- c(1:nrow(eurodistmat), 1) # Initial sequence: alphabetic distance(sq) # rotate for conventional orientation loc <- -cmdscale(eurodist, add = TRUE)$points x <- loc[,1]; y <- loc[,2] s <- seq_len(nrow(eurodistmat)) tspinit <- loc[sq,] plot(x, y, type = "n", asp = 1, xlab = "", ylab = "", main = "initial solution of traveling salesman problem", axes = FALSE) arrows(tspinit[s,1], tspinit[s,2], tspinit[s+1,1], tspinit[s+1,2], angle = 10, col = "green") text(x, y, labels(eurodist), cex = 0.8) set.seed(123) # chosen to get a good soln relatively quickly res <- optim(sq, distance, genseq, method = "SANN", control = list(maxit = 30000, temp = 2000, trace = TRUE, REPORT = 500)) res # Near optimum distance around 12842 tspres <- loc[res$par,] plot(x, y, type = "n", asp = 1, xlab = "", ylab = "", main = "optim() 'solving' traveling salesman problem", axes = FALSE) arrows(tspres[s,1], tspres[s,2], tspres[s+1,1], tspres[s+1,2], angle = 10, col = "red") text(x, y, labels(eurodist), cex = 0.8) ## 1-D minimization: "Brent" or optimize() being preferred.. but NM may be ok and "unavoidable", ## ---------------- so we can suppress the check+warning : system.time(rO <- optimize(function(x) (x-pi)^2, c(0, 10))) system.time(ro <- optim(1, function(x) (x-pi)^2, control=list(warn.1d.NelderMead = FALSE))) rO$minimum - pi # 0 (perfect), on one platform ro$par - pi # ~= 1.9e-4 on one platform utils::str(ro) ```
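One natural follow-up to the examples above (an illustrative sketch, not part of the original example set): when `hessian = TRUE`, the returned Hessian can be checked for positive definiteness at the reported optimum.

```
fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 # Rosenbrock again
res <- optim(c(-1.2, 1), fr, method = "BFGS", hessian = TRUE)
eigen(res$hessian, only.values = TRUE)$values # all positive at a minimum
```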
r None `aov` Fit an Analysis of Variance Model ---------------------------------------- ### Description Fit an analysis of variance model by a call to `lm` for each stratum. ### Usage ``` aov(formula, data = NULL, projections = FALSE, qr = TRUE, contrasts = NULL, ...) ``` ### Arguments | | | | --- | --- | | `formula` | A formula specifying the model. | | `data` | A data frame in which the variables specified in the formula will be found. If missing, the variables are searched for in the standard way. | | `projections` | Logical flag: should the projections be returned? | | `qr` | Logical flag: should the QR decomposition be returned? | | `contrasts` | A list of contrasts to be used for some of the factors in the formula. These are not used for any `Error` term, and supplying contrasts for factors only in the `Error` term will give a warning. | | `...` | Arguments to be passed to `lm`, such as `subset` or `na.action`. See ‘Details’ about `weights`. | ### Details This provides a wrapper to `lm` for fitting linear models to balanced or unbalanced experimental designs. The main difference from `lm` is in the way `print`, `summary` and so on handle the fit: this is expressed in the traditional language of the analysis of variance rather than that of linear models. If the formula contains a single `Error` term, this is used to specify error strata, and appropriate models are fitted within each error stratum. The formula can specify multiple responses. Weights can be specified by a `weights` argument, but should not be used with an `Error` term, and are incompletely supported (e.g., not by `<model.tables>`). ### Value An object of class `c("aov", "lm")` or for multiple responses of class `c("maov", "aov", "mlm", "lm")` or for multiple error strata of class `c("aovlist", "<listof>")`. There are `[print](../../base/html/print)` and `[summary](../../base/html/summary)` methods available for these. ### Note `aov` is designed for balanced designs, and the results can be hard to interpret without balance: beware that missing values in the response(s) will likely lose the balance. If there are two or more error strata, the methods used are statistically inefficient without balance, and it may be better to use `[lme](../../nlme/html/lme)` in package [nlme](https://CRAN.R-project.org/package=nlme). Balance can be checked with the `<replications>` function. The default ‘contrasts’ in **R** are not orthogonal contrasts, and `aov` and its helper functions will work better with such contrasts: see the examples for how to select these. ### Author(s) The design was inspired by the S function of the same name described in Chambers *et al* (1992). ### References Chambers, J. M., Freeny, A and Heiberger, R. M. (1992) *Analysis of variance; designed experiments.* Chapter 5 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<lm>`, `<summary.aov>`, `<replications>`, `<alias>`, `<proj>`, `<model.tables>`, `[TukeyHSD](tukeyhsd)` ### Examples ``` ## From Venables and Ripley (2002) p.165. ## Set orthogonal contrasts. 
op <- options(contrasts = c("contr.helmert", "contr.poly")) ( npk.aov <- aov(yield ~ block + N*P*K, npk) ) summary(npk.aov) coefficients(npk.aov) ## to show the effects of re-ordering terms contrast the two fits aov(yield ~ block + N * P + K, npk) aov(terms(yield ~ block + N * P + K, keep.order = TRUE), npk) ## as a test, not particularly sensible statistically npk.aovE <- aov(yield ~ N*P*K + Error(block), npk) npk.aovE ## IGNORE_RDIFF_BEGIN summary(npk.aovE) ## IGNORE_RDIFF_END options(op) # reset to previous ``` r None `plot.lm` Plot Diagnostics for an lm Object -------------------------------------------- ### Description Six plots (selectable by `which`) are currently available: a plot of residuals against fitted values, a Scale-Location plot of *sqrt(| residuals |)* against fitted values, a Normal Q-Q plot, a plot of Cook's distances versus row labels, a plot of residuals against leverages, and a plot of Cook's distances against leverage/(1-leverage). By default, the first three and `5` are provided. ### Usage ``` ## S3 method for class 'lm' plot(x, which = c(1,2,3,5), caption = list("Residuals vs Fitted", "Normal Q-Q", "Scale-Location", "Cook's distance", "Residuals vs Leverage", expression("Cook's dist vs Leverage " * h[ii] / (1 - h[ii]))), panel = if(add.smooth) function(x, y, ...) panel.smooth(x, y, iter=iter.smooth, ...) else points, sub.caption = NULL, main = "", ask = prod(par("mfcol")) < length(which) && dev.interactive(), ..., id.n = 3, labels.id = names(residuals(x)), cex.id = 0.75, qqline = TRUE, cook.levels = c(0.5, 1.0), add.smooth = getOption("add.smooth"), iter.smooth = if(isGlm) 0 else 3, label.pos = c(4,2), cex.caption = 1, cex.oma.main = 1.25) ``` ### Arguments | | | | --- | --- | | `x` | `lm` object, typically result of `<lm>` or `<glm>`. | | `which` | if a subset of the plots is required, specify a subset of the numbers `1:6`, see `caption` below (and the ‘Details’) for the different kinds. | | `caption` | captions to appear above the plots; `[character](../../base/html/character)` vector or `[list](../../base/html/list)` of valid graphics annotations, see `[as.graphicsAnnot](../../grdevices/html/as.graphicsannot)`, of length 6, the j-th entry corresponding to `which[j]`. Can be set to `""` or `NA` to suppress all captions. | | `panel` | panel function. The useful alternative to `[points](../../graphics/html/points)`, `[panel.smooth](../../graphics/html/panel.smooth)` can be chosen by `add.smooth = TRUE`. | | `sub.caption` | common title—above the figures if there are more than one; used as `sub` (s.`[title](../../graphics/html/title)`) otherwise. If `NULL`, as by default, a possible abbreviated version of `deparse(x$call)` is used. | | `main` | title to each plot—in addition to `caption`. | | `ask` | logical; if `TRUE`, the user is *ask*ed before each plot, see `[par](../../graphics/html/par)(ask=.)`. | | `...` | other parameters to be passed through to plotting functions. | | `id.n` | number of points to be labelled in each plot, starting with the most extreme. | | `labels.id` | vector of labels, from which the labels for extreme points will be chosen. `NULL` uses observation numbers. | | `cex.id` | magnification of point labels. | | `qqline` | logical indicating if a `[qqline](qqnorm)()` should be added to the normal Q-Q plot. | | `cook.levels` | levels of Cook's distance at which to draw contours. | | `add.smooth` | logical indicating if a smoother should be added to most plots; see also `panel` above. 
| | `iter.smooth` | the number of robustness iterations, the argument `iter` in `[panel.smooth](../../graphics/html/panel.smooth)()`; the default uses no such iterations for `<glm>` fits which is particularly desirable for the (predominant) case of binary observations, but also for other models where the response distribution can be highly skewed. | | `label.pos` | positioning of labels, for the left half and right half of the graph respectively, for plots 1-3. | | `cex.caption` | controls the size of `caption`. | | `cex.oma.main` | controls the size of the `sub.caption` only if that is *above* the figures when there is more than one. | ### Details `sub.caption`—by default the function call—is shown as a subtitle (under the x-axis title) on each plot when plots are on separate pages, or as a subtitle in the outer margin (if any) when there are multiple plots per page. The ‘Scale-Location’ plot, also called ‘Spread-Location’ or ‘S-L’ plot, takes the square root of the absolute residuals in order to diminish skewness (*sqrt(|E|)* is much less skewed than *| E |* for Gaussian zero-mean *E*). The ‘S-L’, the Q-Q, and the Residual-Leverage plot, use *standardized* residuals which have identical variance (under the hypothesis). They are given as *R[i] / (s \* sqrt(1 - h.ii))* where *h.ii* are the diagonal entries of the hat matrix, `[influence](lm.influence)()$hat` (see also `[hat](influence.measures)`), and where the Residual-Leverage plot uses standardized Pearson residuals (`[residuals.glm](glm.summaries)(type = "pearson")`) for *R[i]*. The Residual-Leverage plot shows contours of equal Cook's distance, for values of `cook.levels` (by default 0.5 and 1) and omits cases with leverage one with a warning. If the leverages are constant (as is typically the case in a balanced `<aov>` situation) the plot uses factor level combinations instead of the leverages for the x-axis. (The factor levels are ordered by mean fitted value.) In the Cook's distance vs leverage/(1-leverage) plot, contours of standardized residuals (`[rstandard](influence.measures)(.)`) that are equal in magnitude are lines through the origin. The contour lines are labelled with the magnitudes. Notice that some plots may not make much sense for the `glm` case; e.g., the normal Q-Q plot only makes sense if the distribution is approximately normal. ### Author(s) John Maindonald and Martin Maechler. ### References Belsley, D. A., Kuh, E. and Welsch, R. E. (1980). *Regression Diagnostics*. New York: Wiley. Cook, R. D. and Weisberg, S. (1982). *Residuals and Influence in Regression*. London: Chapman and Hall. Firth, D. (1991) Generalized Linear Models. In Hinkley, D. V. and Reid, N. and Snell, E. J., eds: Pp. 55-82 in Statistical Theory and Modelling. In Honour of Sir David Cox, FRS. London: Chapman and Hall. Hinkley, D. V. (1975). On power transformations to symmetry. *Biometrika*, **62**, 101–111. doi: [10.2307/2334491](https://doi.org/10.2307/2334491). McCullagh, P. and Nelder, J. A. (1989). *Generalized Linear Models*. London: Chapman and Hall. ### See Also `<termplot>`, `<lm.influence>`, `[cooks.distance](influence.measures)`, `[hatvalues](influence.measures)`. ### Examples ``` require(graphics) ## Analysis of the life-cycle savings data ## given in Belsley, Kuh and Welsch. 
lm.SR <- lm(sr ~ pop15 + pop75 + dpi + ddpi, data = LifeCycleSavings) plot(lm.SR) ## 4 plots on 1 page; ## allow room for printing model formula in outer margin: par(mfrow = c(2, 2), oma = c(0, 0, 2, 0)) plot(lm.SR) plot(lm.SR, id.n = NULL) # no id's plot(lm.SR, id.n = 5, labels.id = NULL) # 5 id numbers ## Was default in R <= 2.1.x: ## Cook's distances instead of Residual-Leverage plot plot(lm.SR, which = 1:4) ## Fit a smooth curve, where applicable: plot(lm.SR, panel = panel.smooth) ## Gives a smoother curve plot(lm.SR, panel = function(x, y) panel.smooth(x, y, span = 1)) par(mfrow = c(2,1)) # same oma as above plot(lm.SR, which = 1:2, sub.caption = "Saving Rates, n=50, p=5") ``` r None `naprint` Adjust for Missing Values ------------------------------------ ### Description Use missing value information to report the effects of an `na.action`. ### Usage ``` naprint(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | An object produced by an `na.action` function. | | `...` | further arguments passed to or from other methods. | ### Details This is a generic function, and the exact information differs by method. `naprint.omit` reports the number of rows omitted: `naprint.default` reports an empty string. ### Value A character string providing information on missing values, for example the number. r None `oneway.test` Test for Equal Means in a One-Way Layout ------------------------------------------------------- ### Description Test whether two or more samples from normal distributions have the same means. The variances are not necessarily assumed to be equal. ### Usage ``` oneway.test(formula, data, subset, na.action, var.equal = FALSE) ``` ### Arguments | | | | --- | --- | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` gives the sample values and `rhs` the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `var.equal` | a logical variable indicating whether to treat the variances in the samples as equal. If `TRUE`, then a simple F test for the equality of means in a one-way analysis of variance is performed. If `FALSE`, an approximate method of Welch (1951) is used, which generalizes the commonly known 2-sample Welch test to the case of arbitrarily many samples. | ### Details If the right-hand side of the formula contains more than one term, their interaction is taken to form the grouping. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic. | | `parameter` | the degrees of freedom of the exact or approximate F distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | a character string indicating the test performed. | | `data.name` | a character string giving the names of the data. | ### References B. L. Welch (1951). On the comparison of several mean values: an alternative approach. *Biometrika*, **38**, 330–336. doi: [10.2307/2332579](https://doi.org/10.2307/2332579). ### See Also The standard t test (`<t.test>`) as the special case for two samples; the Kruskal-Wallis test `<kruskal.test>` for a nonparametric test for equal location parameters in a one-way layout. 
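The ‘Details’ note that a multi-term right-hand side is collapsed to the interaction of its terms; a small sketch with made-up data (variable names are hypothetical):

```
set.seed(1)
d <- data.frame(y = rnorm(40), f = gl(2, 20), g = gl(4, 5, 40))
oneway.test(y ~ f + g, data = d)             # groups from the interaction of f and g
oneway.test(y ~ interaction(f, g), data = d) # the same test, written explicitly
```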
### Examples ``` ## Not assuming equal variances oneway.test(extra ~ group, data = sleep) ## Assuming equal variances oneway.test(extra ~ group, data = sleep, var.equal = TRUE) ## which gives the same result as anova(lm(extra ~ group, data = sleep)) ``` r None `SSlogis` Self-Starting Nls Logistic Model ------------------------------------------- ### Description This `selfStart` model evaluates the logistic function and its gradient. It has an `initial` attribute that creates initial estimates of the parameters `Asym`, `xmid`, and `scal`. In **R** 3.4.2 and earlier, that init function failed when `min(input)` was exactly zero. ### Usage ``` SSlogis(input, Asym, xmid, scal) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `Asym` | a numeric parameter representing the asymptote. | | `xmid` | a numeric parameter representing the `x` value at the inflection point of the curve. The value of `SSlogis` will be `Asym/2` at `xmid`. | | `scal` | a numeric scale parameter on the `input` axis. | ### Value a numeric vector of the same length as `input`. It is the value of the expression `Asym/(1+exp((xmid-input)/scal))`. If all of the arguments `Asym`, `xmid`, and `scal` are names of objects the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` Chick.1 <- ChickWeight[ChickWeight$Chick == 1, ] SSlogis(Chick.1$Time, 368, 14, 6) # response only local({ Asym <- 368; xmid <- 14; scal <- 6 SSlogis(Chick.1$Time, Asym, xmid, scal) # response _and_ gradient }) getInitial(weight ~ SSlogis(Time, Asym, xmid, scal), data = Chick.1) ## Initial values are in fact the converged one here, "Number of iter...: 0" : fm1 <- nls(weight ~ SSlogis(Time, Asym, xmid, scal), data = Chick.1) summary(fm1) ## but are slightly improved here: fm2 <- update(fm1, control=nls.control(tol = 1e-9, warnOnly=TRUE), trace = TRUE) all.equal(coef(fm1), coef(fm2)) # "Mean relative difference: 9.6e-6" str(fm2$convInfo) # 3 iterations dwlg1 <- data.frame(Prop = c(rep(0,5), 2, 5, rep(9, 9)), end = 1:16) iPar <- getInitial(Prop ~ SSlogis(end, Asym, xmid, scal), data = dwlg1) ## failed in R <= 3.4.2 (because of the '0's in 'Prop') stopifnot(all.equal(tolerance = 1e-6, iPar, c(Asym = 9.0678, xmid = 6.79331, scal = 0.499934))) ## Visualize the SSlogis() model parametrization : xx <- seq(-0.75, 5, by=1/32) yy <- 5 / (1 + exp((2-xx)/0.6)) # == SSlogis(xx, *): stopifnot( all.equal(yy, SSlogis(xx, Asym = 5, xmid = 2, scal = 0.6)) ) require(graphics) op <- par(mar = c(0.5, 0, 3.5, 0)) plot(xx, yy, type = "l", axes = FALSE, ylim = c(0,6), xlim = c(-1, 5), xlab = "", ylab = "", lwd = 2, main = "Parameters in the SSlogis model") mtext(quote(list(phi[1] == "Asym", phi[2] == "xmid", phi[3] == "scal"))) usr <- par("usr") arrows(usr[1], 0, usr[2], 0, length = 0.1, angle = 25) arrows(0, usr[3], 0, usr[4], length = 0.1, angle = 25) text(usr[2] - 0.2, 0.1, "x", adj = c(1, 0)) text( -0.1, usr[4], "y", adj = c(1, 1)) abline(h = 5, lty = 3) arrows(-0.8, c(2.1, 2.9), -0.8, c(0, 5 ), length = 0.1, angle = 25) text (-0.8, 2.5, quote(phi[1])) segments(c(2,2.6,2.6), c(0, 2.5,3.5), # NB. 
SSlogis(x = xmid = 2) = 2.5 c(2,2.6,2 ), c(2.5,3.5,2.5), lty = 2, lwd = 0.75) text(2, -.1, quote(phi[2])) arrows(c(2.2, 2.4), 2.5, c(2.0, 2.6), 2.5, length = 0.08, angle = 25) text( 2.3, 2.5, quote(phi[3])); text(2.7, 3, "1") par(op) ``` r None `fisher.test` Fisher's Exact Test for Count Data ------------------------------------------------- ### Description Performs Fisher's exact test for testing the null of independence of rows and columns in a contingency table with fixed marginals. ### Usage ``` fisher.test(x, y = NULL, workspace = 200000, hybrid = FALSE, hybridPars = c(expect = 5, percent = 80, Emin = 1), control = list(), or = 1, alternative = "two.sided", conf.int = TRUE, conf.level = 0.95, simulate.p.value = FALSE, B = 2000) ``` ### Arguments | | | | --- | --- | | `x` | either a two-dimensional contingency table in matrix form, or a factor object. | | `y` | a factor object; ignored if `x` is a matrix. | | `workspace` | an integer specifying the size of the workspace used in the network algorithm. In units of 4 bytes. Only used for non-simulated p-values for tables larger than *2 by 2*. Since **R** version 3.5.0, this also increases the internal stack size, which allows larger problems to be solved, though this can sometimes take hours. In such cases, `simulate.p.value=TRUE` may be more reasonable. | | `hybrid` | a logical. Only used for larger than *2 by 2* tables, in which case it indicates whether the exact probabilities (default) or a hybrid approximation thereof should be computed. | | `hybridPars` | a numeric vector of length 3, by default describing “Cochran's conditions” for the validity of the chisquare approximation, see ‘Details’. | | `control` | a list with named components for low level algorithm control. At present the only one used is `"mult"`, a positive integer *≥ 2* with default 30 used only for larger than *2 by 2* tables. This says how many times as much space should be allocated to paths as to keys: see file ‘fexact.c’ in the sources of this package. | | `or` | the hypothesized odds ratio. Only used in the *2 by 2* case. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"`, `"greater"` or `"less"`. You can specify just the initial letter. Only used in the *2 by 2* case. | | `conf.int` | logical indicating if a confidence interval for the odds ratio in a *2 by 2* table should be computed (and returned). | | `conf.level` | confidence level for the returned confidence interval. Only used in the *2 by 2* case and if `conf.int = TRUE`. | | `simulate.p.value` | a logical indicating whether to compute p-values by Monte Carlo simulation, in larger than *2 by 2* tables. | | `B` | an integer specifying the number of replicates used in the Monte Carlo test. | ### Details If `x` is a matrix, it is taken as a two-dimensional contingency table, and hence its entries should be nonnegative integers. Otherwise, both `x` and `y` must be vectors of the same length. Incomplete cases are removed, the vectors are coerced into factor objects, and the contingency table is computed from these. For *2 by 2* cases, p-values are obtained directly using the (central or non-central) hypergeometric distribution. Otherwise, computations are based on a C version of the FORTRAN subroutine FEXACT which implements the network algorithm developed by Mehta and Patel (1983, 1986) and improved by Clarkson, Fan and Joe (1993). The FORTRAN code can be obtained from <https://www.netlib.org/toms/643>. Note this fails (with an error message) when the entries of the table are too large.
(It transposes the table if necessary so it has no more rows than columns. One constraint is that the product of the row marginals be less than *2^31 - 1*.) For *2 by 2* tables, the null of conditional independence is equivalent to the hypothesis that the odds ratio equals one. ‘Exact’ inference can be based on observing that in general, given all marginal totals fixed, the first element of the contingency table has a non-central hypergeometric distribution with non-centrality parameter given by the odds ratio (Fisher, 1935). The alternative for a one-sided test is based on the odds ratio, so `alternative = "greater"` is a test of the odds ratio being bigger than `or`. Two-sided tests are based on the probabilities of the tables, and take as ‘more extreme’ all tables with probabilities less than or equal to that of the observed table, the p-value being the sum of such probabilities. For larger than *2 by 2* tables and `hybrid = TRUE`, asymptotic chi-squared probabilities are only used if the ‘Cochran conditions’ (or modified version thereof) specified by `hybridPars = c(expect = 5, percent = 80, Emin = 1)` are satisfied, that is if no cell has expected counts less than `1` (`= Emin`) and more than 80% (`= percent`) of the cells have expected counts at least 5 (`= expect`), otherwise the exact calculation is used. A corresponding `if()` decision is made for all sub-tables considered. Accidentally, **R** used `180` instead of `80` as `percent` (i.e., `hybridPars[2]`, the 2nd of the `hybridPars`, all of which used to be hard-coded prior to **R** 3.5.0) in **R** versions between 3.0.0 and 3.4.1 (inclusive). Consequently, in these versions of **R**, `hybrid=TRUE` never made a difference. In the *r x c* case with *r > 2* or *c > 2*, internal tables can get too large for the exact test in which case an error is signalled. Apart from increasing `workspace` sufficiently, which then may lead to very long running times, using `simulate.p.value = TRUE` may then often be sufficient and hence advisable. Simulation is done conditional on the row and column marginals, and works only if the marginals are strictly positive. (A C translation of the algorithm of Patefield (1981) is used.) ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `p.value` | the p-value of the test. | | `conf.int` | a confidence interval for the odds ratio. Only present in the *2 by 2* case and if argument `conf.int = TRUE`. | | `estimate` | an estimate of the odds ratio. Note that the *conditional* Maximum Likelihood Estimate (MLE) rather than the unconditional MLE (the sample odds ratio) is used. Only present in the *2 by 2* case. | | `null.value` | the odds ratio under the null, `or`. Only present in the *2 by 2* case. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | the character string `"Fisher's Exact Test for Count Data"`. | | `data.name` | a character string giving the names of the data. | ### References Agresti, A. (1990). *Categorical data analysis*. New York: Wiley. Pages 59–66. Agresti, A. (2002). *Categorical data analysis*. Second edition. New York: Wiley. Pages 91–101. Fisher, R. A. (1935). The logic of inductive inference. *Journal of the Royal Statistical Society Series A*, **98**, 39–54. doi: [10.2307/2342435](https://doi.org/10.2307/2342435). Fisher, R. A. (1962). Confidence limits for a cross-product ratio. *Australian Journal of Statistics*, **4**, 41.
doi: [10.1111/j.1467-842X.1962.tb00285.x](https://doi.org/10.1111/j.1467-842X.1962.tb00285.x). Fisher, R. A. (1970). *Statistical Methods for Research Workers*. Oliver & Boyd. Mehta, Cyrus R. and Patel, Nitin R. (1983). A network algorithm for performing Fisher's exact test in *r x c* contingency tables. *Journal of the American Statistical Association*, **78**, 427–434. doi: [10.1080/01621459.1983.10477989](https://doi.org/10.1080/01621459.1983.10477989). Mehta, C. R. and Patel, N. R. (1986). Algorithm 643: FEXACT, a FORTRAN subroutine for Fisher's exact test on unordered *r x c* contingency tables. *ACM Transactions on Mathematical Software*, **12**, 154–161. doi: [10.1145/6497.214326](https://doi.org/10.1145/6497.214326). Clarkson, D. B., Fan, Y. and Joe, H. (1993) A Remark on Algorithm 643: FEXACT: An Algorithm for Performing Fisher's Exact Test in *r x c* Contingency Tables. *ACM Transactions on Mathematical Software*, **19**, 484–488. doi: [10.1145/168173.168412](https://doi.org/10.1145/168173.168412). Patefield, W. M. (1981). Algorithm AS 159: An efficient method of generating r x c tables with given row and column totals. *Applied Statistics*, **30**, 91–97. doi: [10.2307/2346669](https://doi.org/10.2307/2346669). ### See Also `<chisq.test>` `fisher.exact` in package [exact2x2](https://CRAN.R-project.org/package=exact2x2) for alternative interpretations of two-sided tests and confidence intervals for *2 by 2* tables. ### Examples ``` ## Agresti (1990, p. 61f; 2002, p. 91) Fisher's Tea Drinker ## A British woman claimed to be able to distinguish whether milk or ## tea was added to the cup first. To test, she was given 8 cups of ## tea, in four of which milk was added first. The null hypothesis ## is that there is no association between the true order of pouring ## and the woman's guess, the alternative that there is a positive ## association (that the odds ratio is greater than 1). TeaTasting <- matrix(c(3, 1, 1, 3), nrow = 2, dimnames = list(Guess = c("Milk", "Tea"), Truth = c("Milk", "Tea"))) fisher.test(TeaTasting, alternative = "greater") ## => p = 0.2429, association could not be established ## Fisher (1962, 1970), Criminal convictions of like-sex twins Convictions <- matrix(c(2, 10, 15, 3), nrow = 2, dimnames = list(c("Dizygotic", "Monozygotic"), c("Convicted", "Not convicted"))) Convictions fisher.test(Convictions, alternative = "less") fisher.test(Convictions, conf.int = FALSE) fisher.test(Convictions, conf.level = 0.95)$conf.int fisher.test(Convictions, conf.level = 0.99)$conf.int ## A r x c table Agresti (2002, p. 57) Job Satisfaction Job <- matrix(c(1,2,1,0, 3,3,6,1, 10,10,14,9, 6,7,12,11), 4, 4, dimnames = list(income = c("< 15k", "15-25k", "25-40k", "> 40k"), satisfaction = c("VeryD", "LittleD", "ModerateS", "VeryS"))) fisher.test(Job) # 0.7827 fisher.test(Job, simulate.p.value = TRUE, B = 1e5) # also close to 0.78 ## 6th example in Mehta & Patel's JASA paper MP6 <- rbind( c(1,2,2,1,1,0,1), c(2,0,0,2,3,0,0), c(0,1,1,1,2,7,3), c(1,1,2,0,0,0,1), c(0,1,1,1,1,0,0)) fisher.test(MP6) # Exactly the same p-value, as Cochran's conditions are never met: fisher.test(MP6, hybrid=TRUE) ```
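A further minimal sketch (synthetic counts, chosen only for illustration) of the conditional-MLE odds ratio and confidence interval described under ‘Details’ and ‘Value’:

```
tab <- matrix(c(10, 2, 3, 15), nrow = 2) # made-up 2 x 2 counts
ft <- fisher.test(tab)
ft$estimate         # conditional MLE of the odds ratio
ft$conf.int         # confidence interval for the odds ratio
(10 * 15) / (2 * 3) # the sample odds ratio, which differs from ft$estimate
```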
r None `setNames` Set the Names in an Object -------------------------------------- ### Description This is a convenience function that sets the names on an object and returns the object. It is most useful at the end of a function definition where one is creating the object to be returned and would prefer not to store it under a name just so the names can be assigned. ### Usage ``` setNames(object = nm, nm) ``` ### Arguments | | | | --- | --- | | `object` | an object for which a `names` attribute will be meaningful | | `nm` | a character vector of names to assign to the object | ### Value An object of the same sort as `object` with the new names assigned. ### Author(s) Douglas M. Bates and Saikat DebRoy ### See Also `[unname](../../base/html/unname)` for removing names. ### Examples ``` setNames( 1:3, c("foo", "bar", "baz") ) # this is just a short form of tmp <- 1:3 names(tmp) <- c("foo", "bar", "baz") tmp ## special case of character vector, using default setNames(nm = c("First", "2nd")) ``` r None `terms.object` Description of Terms Objects -------------------------------------------- ### Description An object of class `<terms>` holds information about a model. Usually the model was specified in terms of a `<formula>` and that formula was used to determine the terms object. ### Value The object itself is simply the formula supplied to the call of `<terms.formula>`. The object has a number of attributes and they are used to construct the model frame: | | | | --- | --- | | `factors` | A matrix of variables by terms showing which variables appear in which terms. The entries are 0 if the variable does not occur in the term, 1 if it does occur and should be coded by contrasts, and 2 if it occurs and should be coded via dummy variables for all levels (as when a lower-order term is missing). Note that variables in main effects always receive 1, even if the intercept is missing (in which case the first one should be coded with dummy variables). If there are no terms other than an intercept and offsets, this is `numeric(0)`. | | `term.labels` | A character vector containing the labels for each of the terms in the model, except for offsets. Note that these are after possible re-ordering of terms. Non-syntactic names will be quoted by backticks: this makes it easier to re-construct the formula from the term labels. | | `variables` | A call to `list` of the variables in the model. | | `intercept` | Either 0, indicating no intercept is to be fit, or 1 indicating that an intercept is to be fit. | | `order` | A vector of the same length as `term.labels` indicating the order of interaction for each term. | | `response` | The index of the variable (in variables) of the response (the left hand side of the formula). Zero, if there is no response. | | `offset` | If the model contains `<offset>` terms there is an `offset` attribute indicating which variable(s) are offsets | | `specials` | If a `specials` argument was given to `<terms.formula>` there is a `specials` attribute, a pairlist of vectors (one for each specified special function) giving numeric indices of the arguments of the list returned as the `variables` attribute which contain these special functions. | | `dataClasses` | optional. A named character vector giving the classes (as given by `[.MFclass](checkmfclasses)`) of the variables used in a fit. | | `predvars` | optional. An expression to help in computing predictions at new covariate values; see `<makepredictcall>`. | The object has class `c("terms", "formula")`. 
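A small sketch of inspecting the attributes described above (a toy formula with invented variable names):

```
tt <- terms(y ~ a + b + a:b)
attr(tt, "term.labels") # "a" "b" "a:b"
attr(tt, "order")       # 1 1 2: interaction order of each term
attr(tt, "intercept")   # 1: an intercept is to be fitted
attr(tt, "factors")     # variables-by-terms incidence matrix
```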
### Note These objects are different from those found in S. In particular there is no `formula` attribute: instead the object is itself a formula. (Thus, the mode of a terms object is different.) Examples of the `specials` argument can be seen in the `<aov>` and `[coxph](../../survival/html/coxph)` functions, the latter from package [survival](https://CRAN.R-project.org/package=survival). ### See Also `<terms>`, `<formula>`. ### Examples ``` ## use of specials (as used for gam() in packages mgcv and gam) (tf <- terms(y ~ x + x:z + s(x), specials = "s")) ## Note that the "factors" attribute has variables as row names ## and term labels as column names, both as character vectors. attr(tf, "specials") # index 's' variable(s) rownames(attr(tf, "factors"))[attr(tf, "specials")$s] ## we can keep the order by terms(y ~ x + x:z + s(x), specials = "s", keep.order = TRUE) ``` r None `kernel` Smoothing Kernel Objects ---------------------------------- ### Description The `"tskernel"` class is designed to represent discrete symmetric normalized smoothing kernels. These kernels can be used to smooth vectors, matrices, or time series objects. There are `[print](../../base/html/print)`, `[plot](../../graphics/html/plot.default)` and `[[](../../base/html/extract)` methods for these kernel objects. ### Usage ``` kernel(coef, m = 2, r, name) df.kernel(k) bandwidth.kernel(k) is.tskernel(k) ## S3 method for class 'tskernel' plot(x, type = "h", xlab = "k", ylab = "W[k]", main = attr(x,"name"), ...) ``` ### Arguments | | | | --- | --- | | `coef` | the upper half of the smoothing kernel coefficients (including coefficient zero) *or* the name of a kernel (currently `"daniell"`, `"dirichlet"`, `"fejer"` or `"modified.daniell"`). | | `m` | the kernel dimension(s) if `coef` is a name. When `m` has length larger than one, it means the convolution of kernels of dimension `m[j]`, for `j in 1:length(m)`. Currently this is supported only for the named "\*daniell" kernels. | | `name` | the name the kernel will be called. | | `r` | the kernel order for a Fejer kernel. | | `k, x` | a `"tskernel"` object. | | `type, xlab, ylab, main, ...` | arguments passed to `[plot.default](../../graphics/html/plot.default)`. | ### Details `kernel` is used to construct a general kernel or named specific kernels. The modified Daniell kernel halves the end coefficients (as used by S-PLUS). The `[[](../../base/html/extract)` method allows natural indexing of kernel objects with indices in `(-m) : m`. The normalization is such that for `k <- kernel(*)`, `sum(k[ -k$m : k$m ])` is one. `df.kernel` returns the ‘equivalent degrees of freedom’ of a smoothing kernel as defined in Brockwell and Davis (1991), page 362, and `bandwidth.kernel` returns the equivalent bandwidth as defined in Bloomfield (1976), p. 201, with a continuity correction. ### Value `kernel()` returns an object of class `"tskernel"` which is basically a list with the two components `coef` and the kernel dimension `m`. An additional attribute is `"name"`. ### Author(s) A. Trapletti; modifications by B.D. Ripley ### References Bloomfield, P. (1976) *Fourier Analysis of Time Series: An Introduction.* Wiley. Brockwell, P.J. and Davis, R.A. (1991) *Time Series: Theory and Methods.* Second edition. Springer, pp. 350–365. ### See Also `<kernapply>` ### Examples ``` require(graphics) ## Demonstrate a simple trading strategy for the ## financial time series German stock index DAX. 
x <- EuStockMarkets[,1] k1 <- kernel("daniell", 50) # a long moving average k2 <- kernel("daniell", 10) # and a short one plot(k1) plot(k2) x1 <- kernapply(x, k1) x2 <- kernapply(x, k2) plot(x) lines(x1, col = "red") # go long if the short crosses the long upwards lines(x2, col = "green") # and go short otherwise ## More interesting kernels kd <- kernel("daniell", c(3, 3)) kd # note the unusual indexing kd[-2:2] plot(kernel("fejer", 100, r = 6)) plot(kernel("modified.daniell", c(7,5,3))) # Reproduce example 10.4.3 from Brockwell and Davis (1991) spectrum(sunspot.year, kernel = kernel("daniell", c(11,7,3)), log = "no") ``` r None `replications` Number of Replications of Terms ----------------------------------------------- ### Description Returns a vector or a list of the number of replicates for each term in the formula. ### Usage ``` replications(formula, data = NULL, na.action) ``` ### Arguments | | | | --- | --- | | `formula` | a formula or a terms object or a data frame. | | `data` | a data frame used to find the objects in `formula`. | | `na.action` | function for handling missing values. Defaults to a `na.action` attribute of `data`, then a setting of the option `na.action`, or `na.fail` if that is not set. | ### Details If `formula` is a data frame and `data` is missing, `formula` is used for `data` with the formula `~ .`. Any character vectors in the formula are coerced to factors. ### Value A vector or list with one entry for each term in the formula giving the number(s) of replications for each level. If all levels are balanced (have the same number of replications) the result is a vector, otherwise it is a list with a component for each term, as a vector, matrix or array as required. A test for balance is `!is.list(replications(formula,data))`. ### Author(s) The design was inspired by the S function of the same name described in Chambers *et al* (1992). ### References Chambers, J. M., Freeny, A and Heiberger, R. M. (1992) *Analysis of variance; designed experiments.* Chapter 5 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<model.tables>` ### Examples ``` ## From Venables and Ripley (2002) p.165. N <- c(0,1,0,1,1,1,0,0,0,1,1,0,1,1,0,0,1,0,1,0,1,1,0,0) P <- c(1,1,0,0,0,1,0,1,1,1,0,0,0,1,0,1,1,0,0,1,0,1,1,0) K <- c(1,0,0,1,0,1,1,0,0,1,0,1,0,1,1,0,0,0,1,1,1,0,1,0) yield <- c(49.5,62.8,46.8,57.0,59.8,58.5,55.5,56.0,62.8,55.8,69.5, 55.0, 62.0,48.8,45.5,44.2,52.0,51.5,49.8,48.8,57.2,59.0,53.2,56.0) npk <- data.frame(block = gl(6,4), N = factor(N), P = factor(P), K = factor(K), yield = yield) replications(~ . - yield, npk) ``` r None `cancor` Canonical Correlations -------------------------------- ### Description Compute the canonical correlations between two data matrices. ### Usage ``` cancor(x, y, xcenter = TRUE, ycenter = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | numeric matrix (*n \* p1*), containing the x coordinates. | | `y` | numeric matrix (*n \* p2*), containing the y coordinates. | | `xcenter` | logical or numeric vector of length *p1*, describing any centering to be done on the x values before the analysis. If `TRUE` (default), subtract the column means. If `FALSE`, do not adjust the columns. Otherwise, a vector of values to be subtracted from the columns. | | `ycenter` | analogous to `xcenter`, but for the y values. | ### Details The canonical correlation analysis seeks linear combinations of the `y` variables which are well explained by linear combinations of the `x` variables.
The relationship is symmetric as ‘well explained’ is measured by correlations. ### Value A list containing the following components: | | | | --- | --- | | `cor` | correlations. | | `xcoef` | estimated coefficients for the `x` variables. | | `ycoef` | estimated coefficients for the `y` variables. | | `xcenter` | the values used to adjust the `x` variables. | | `ycenter` | the values used to adjust the `y` variables. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Hotelling, H. (1936). Relations between two sets of variables. *Biometrika*, **28**, 321–327. doi: [10.1093/biomet/28.3-4.321](https://doi.org/10.1093/biomet/28.3-4.321). Seber, G. A. F. (1984). *Multivariate Observations*. New York: Wiley. Page 506f. ### See Also `[qr](../../matrix/html/qr-methods)`, `[svd](../../base/html/svd)`. ### Examples ``` ## signs of results are random pop <- LifeCycleSavings[, 2:3] oec <- LifeCycleSavings[, -(2:3)] cancor(pop, oec) x <- matrix(rnorm(150), 50, 3) y <- matrix(rnorm(250), 50, 5) (cxy <- cancor(x, y)) all(abs(cor(x %*% cxy$xcoef, y %*% cxy$ycoef)[,1:3] - diag(cxy$cor)) < 1e-15) all(abs(cor(x %*% cxy$xcoef) - diag(3)) < 1e-15) all(abs(cor(y %*% cxy$ycoef) - diag(5)) < 1e-15) ``` r None `var.test` F Test to Compare Two Variances ------------------------------------------- ### Description Performs an F test to compare the variances of two samples from normal populations. ### Usage ``` var.test(x, ...) ## Default S3 method: var.test(x, y, ratio = 1, alternative = c("two.sided", "less", "greater"), conf.level = 0.95, ...) ## S3 method for class 'formula' var.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x, y` | numeric vectors of data values, or fitted linear model objects (inheriting from class `"lm"`). | | `ratio` | the hypothesized ratio of the population variances of `x` and `y`. | | `alternative` | a character string specifying the alternative hypothesis, must be one of `"two.sided"` (default), `"greater"` or `"less"`. You can specify just the initial letter. | | `conf.level` | confidence level for the returned confidence interval. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` is a numeric variable giving the data values and `rhs` a factor with two levels giving the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details The null hypothesis is that the ratio of the variances of the populations from which `x` and `y` were drawn, or in the data to which the linear models `x` and `y` were fitted, is equal to `ratio`. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the F test statistic. | | `parameter` | the degrees of freedom of the F distribution of the test statistic. | | `p.value` | the p-value of the test. | | `conf.int` | a confidence interval for the ratio of the population variances. | | `estimate` | the ratio of the sample variances of `x` and `y`. | | `null.value` | the ratio of population variances under the null.
| | `alternative` | a character string describing the alternative hypothesis. | | `method` | the character string `"F test to compare two variances"`. | | `data.name` | a character string giving the names of the data. | ### See Also `<bartlett.test>` for testing homogeneity of variances in more than two samples from normal distributions; `<ansari.test>` and `<mood.test>` for two rank based (nonparametric) two-sample tests for difference in scale. ### Examples ``` x <- rnorm(50, mean = 0, sd = 2) y <- rnorm(30, mean = 1, sd = 1) var.test(x, y) # Do x and y have the same variance? var.test(lm(x ~ 1), lm(y ~ 1)) # The same. ``` r None `bandwidth` Bandwidth Selectors for Kernel Density Estimation -------------------------------------------------------------- ### Description Bandwidth selectors for Gaussian kernels in `<density>`. ### Usage ``` bw.nrd0(x) bw.nrd(x) bw.ucv(x, nb = 1000, lower = 0.1 * hmax, upper = hmax, tol = 0.1 * lower) bw.bcv(x, nb = 1000, lower = 0.1 * hmax, upper = hmax, tol = 0.1 * lower) bw.SJ(x, nb = 1000, lower = 0.1 * hmax, upper = hmax, method = c("ste", "dpi"), tol = 0.1 * lower) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector. | | `nb` | number of bins to use. | | `lower, upper` | range over which to minimize. The default is almost always satisfactory. `hmax` is calculated internally from a normal reference bandwidth. | | `method` | either `"ste"` ("solve-the-equation") or `"dpi"` ("direct plug-in"). Can be abbreviated. | | `tol` | for method `"ste"`, the convergence tolerance for `<uniroot>`. The default leads to bandwidth estimates with only slightly more than one digit accuracy, which is sufficient for practical density estimation, but possibly not for theoretical simulation studies. | ### Details `bw.nrd0` implements a rule-of-thumb for choosing the bandwidth of a Gaussian kernel density estimator. It defaults to 0.9 times the minimum of the standard deviation and the interquartile range divided by 1.34 times the sample size to the negative one-fifth power (= Silverman's ‘rule of thumb’, Silverman (1986, page 48, eqn (3.31))) *unless* the quartiles coincide when a positive result will be guaranteed. `bw.nrd` is the more common variation given by Scott (1992), using factor 1.06. `bw.ucv` and `bw.bcv` implement unbiased and biased cross-validation respectively. `bw.SJ` implements the methods of Sheather & Jones (1991) to select the bandwidth using pilot estimation of derivatives. The algorithm for method `"ste"` solves an equation (via `<uniroot>`) and because of that, enlarges the interval `c(lower, upper)` when the boundaries were not user-specified and do not bracket the root. The last three methods use all pairwise binned distances: they are of complexity *O(n^2)* up to `n = nb/2` and *O(n)* thereafter. Because of the binning, the results differ slightly when `x` is translated or sign-flipped. ### Value A bandwidth on a scale suitable for the `bw` argument of `density`. ### Note Long vectors `x` are not supported, but neither are they by `<density>` and kernel density estimation and for more than a few thousand points a histogram would be preferred. ### Author(s) B. D. Ripley, taken from early versions of package MASS. ### References Scott, D. W. (1992) *Multivariate Density Estimation: Theory, Practice, and Visualization.* New York: Wiley. Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. *Journal of the Royal Statistical Society series B*, **53**, 683–690. 
doi: [10.1111/j.2517-6161.1991.tb01857.x](https://doi.org/10.1111/j.2517-6161.1991.tb01857.x). <https://www.jstor.org/stable/2345597>. Silverman, B. W. (1986). *Density Estimation*. London: Chapman and Hall. Venables, W. N. and Ripley, B. D. (2002). *Modern Applied Statistics with S*. Springer. ### See Also `<density>`. `[bandwidth.nrd](../../mass/html/bandwidth.nrd)`, `[ucv](../../mass/html/ucv)`, `[bcv](../../mass/html/bcv)` and `[width.SJ](../../mass/html/width.sj)` in package [MASS](https://CRAN.R-project.org/package=MASS), which are all scaled to the `width` argument of `density` and so give answers four times as large. ### Examples ``` require(graphics) plot(density(precip, n = 1000)) rug(precip) lines(density(precip, bw = "nrd"), col = 2) lines(density(precip, bw = "ucv"), col = 3) lines(density(precip, bw = "bcv"), col = 4) lines(density(precip, bw = "SJ-ste"), col = 5) lines(density(precip, bw = "SJ-dpi"), col = 6) legend(55, 0.035, legend = c("nrd0", "nrd", "ucv", "bcv", "SJ-ste", "SJ-dpi"), col = 1:6, lty = 1) ``` r None `birthday` Probability of coincidences --------------------------------------- ### Description Computes answers to a generalised *birthday paradox* problem. `pbirthday` computes the probability of a coincidence and `qbirthday` computes the smallest number of observations needed to have at least a specified probability of coincidence. ### Usage ``` qbirthday(prob = 0.5, classes = 365, coincident = 2) pbirthday(n, classes = 365, coincident = 2) ``` ### Arguments | | | | --- | --- | | `classes` | How many distinct categories the people could fall into | | `prob` | The desired probability of coincidence | | `n` | The number of people | | `coincident` | The number of people to fall in the same category | ### Details The birthday paradox is that a very small number of people, 23, suffices to have a 50–50 chance that two or more of them have the same birthday. This function generalises the calculation to probabilities other than 0.5, numbers of coincident events other than 2, and numbers of classes other than 365. The formula used is approximate for `coincident > 2`. The approximation is very good for moderate values of `prob` but less good for very small probabilities. ### Value | | | | --- | --- | | `qbirthday` | Minimum number of people needed for a probability of at least `prob` that `k` or more of them have the same one out of `classes` equiprobable labels. | | `pbirthday` | Probability of the specified coincidence. | ### References Diaconis, P. and Mosteller F. (1989). Methods for studying coincidences. *Journal of the American Statistical Association*, **84**, 853–861. doi: [10.1080/01621459.1989.10478847](https://doi.org/10.1080/01621459.1989.10478847). ### Examples ``` require(graphics) ## the standard version qbirthday() # 23 ## probability of > 2 people with the same birthday pbirthday(23, coincident = 3) ## examples from Diaconis & Mosteller p. 858. ## 'coincidence' is that husband, wife, daughter all born on the 16th qbirthday(classes = 30, coincident = 3) # approximately 18 qbirthday(coincident = 4) # exact value 187 qbirthday(coincident = 10) # exact value 1181 ## same 4-digit PIN number qbirthday(classes = 10^4) ## 0.9 probability of three or more coincident birthdays qbirthday(coincident = 3, prob = 0.9) ## Chance of 4 or more coincident birthdays in 150 people pbirthday(150, coincident = 4) ## 100 or more coincident birthdays in 1000 people: very rare pbirthday(1000, coincident = 100) ```
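For the default two-person coincidence the two functions are consistent with one another: `qbirthday(prob)` returns the smallest `n` for which `pbirthday(n)` reaches `prob`. A small consistency check (a sketch; the names are illustrative):

```
## qbirthday() returns the smallest n with pbirthday(n) >= prob
n <- qbirthday(prob = 0.5)       # 23
stopifnot(pbirthday(n)     >= 0.5,  # n people suffice ...
          pbirthday(n - 1) <  0.5)  # ... but n - 1 do not
```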
r None `fivenum` Tukey Five-Number Summaries -------------------------------------- ### Description Returns Tukey's five number summary (minimum, lower-hinge, median, upper-hinge, maximum) for the input data. ### Usage ``` fivenum(x, na.rm = TRUE) ``` ### Arguments | | | | --- | --- | | `x` | numeric, maybe including `[NA](../../base/html/na)`s and *+/-*`[Inf](../../base/html/is.finite)`s. | | `na.rm` | logical; if `TRUE`, all `[NA](../../base/html/na)` and `[NaN](../../base/html/is.finite)`s are dropped, before the statistics are computed. | ### Value A numeric vector of length 5 containing the summary information. See `[boxplot.stats](../../grdevices/html/boxplot.stats)` for more details. ### See Also `[IQR](iqr)`, `[boxplot.stats](../../grdevices/html/boxplot.stats)`, `<median>`, `<quantile>`, `[range](../../base/html/range)`. ### Examples ``` fivenum(c(rnorm(100), -1:1/0)) ``` r None `cmdscale` Classical (Metric) Multidimensional Scaling ------------------------------------------------------- ### Description Classical multidimensional scaling (MDS) of a data matrix. Also known as *principal coordinates analysis* (Gower, 1966). ### Usage ``` cmdscale(d, k = 2, eig = FALSE, add = FALSE, x.ret = FALSE, list. = eig || add || x.ret) ``` ### Arguments | | | | --- | --- | | `d` | a distance structure such as that returned by `dist` or a full symmetric matrix containing the dissimilarities. | | `k` | the maximum dimension of the space which the data are to be represented in; must be in *{1, 2, …, n-1}*. | | `eig` | indicates whether eigenvalues should be returned. | | `add` | logical indicating if an additive constant *c\** should be computed, and added to the non-diagonal dissimilarities such that the modified dissimilarities are Euclidean. | | `x.ret` | indicates whether the doubly centred symmetric distance matrix should be returned. | | `list.` | logical indicating if a `[list](../../base/html/list)` should be returned or just the *n \* k* matrix, see ‘Value:’. | ### Details Multidimensional scaling takes a set of dissimilarities and returns a set of points such that the distances between the points are approximately equal to the dissimilarities. (It is a major part of what ecologists call ‘ordination’.) A set of Euclidean distances on *n* points can be represented exactly in at most *n - 1* dimensions. `cmdscale` follows the analysis of Mardia (1978), and returns the best-fitting *k*-dimensional representation, where *k* may be less than the argument `k`. The representation is only determined up to location (`cmdscale` takes the column means of the configuration to be at the origin), rotations and reflections. The configuration returned is given in principal-component axes, so the reflection chosen may differ between **R** platforms (see `<prcomp>`). When `add = TRUE`, a minimal additive constant *c\** is computed such that the dissimilarities *d[i,j] + c\** are Euclidean and hence can be represented in `n - 1` dimensions. Whereas S (Becker *et al*, 1988) computes this constant using an approximation suggested by Torgerson, **R** uses the analytical solution of Cailliez (1983), see also Cox and Cox (2001). Note that because of numerical errors the computed eigenvalues need not all be non-negative, and even theoretically the representation could be in fewer than `n - 1` dimensions. ### Value If `list.` is false (the default), a matrix with `k` columns whose rows give the coordinates of the points chosen to represent the dissimilarities.
Otherwise, a `[list](../../base/html/list)` containing the following components. | | | | --- | --- | | `points` | a matrix with up to `k` columns whose rows give the coordinates of the points chosen to represent the dissimilarities. | | `eig` | the *n* eigenvalues computed during the scaling process if `eig` is true. **NB**: versions of **R** before 2.12.1 returned only `k` but were documented to return *n - 1*. | | `x` | the doubly centered distance matrix if `x.ret` is true. | | `ac` | the additive constant *c\**, `0` if `add = FALSE`. | | `GOF` | a numeric vector of length 2, equal to say *(g.1,g.2)*, where *g.i = (sum{j=1..k} λ[j]) / (sum{j=1..n} T.i(λ[j]))*, where *λ[j]* are the eigenvalues (sorted in decreasing order), *T.1(v) = abs(v)*, and *T.2(v) = max(v, 0)*. | ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Cailliez, F. (1983). The analytical solution of the additive constant problem. *Psychometrika*, **48**, 343–349. doi: [10.1007/BF02294026](https://doi.org/10.1007/BF02294026). Cox, T. F. and Cox, M. A. A. (2001). *Multidimensional Scaling*. Second edition. Chapman and Hall. Gower, J. C. (1966). Some distance properties of latent root and vector methods used in multivariate analysis. *Biometrika*, **53**, 325–328. doi: [10.2307/2333639](https://doi.org/10.2307/2333639). Krzanowski, W. J. and Marriott, F. H. C. (1994). *Multivariate Analysis. Part I. Distributions, Ordination and Inference.* London: Edward Arnold. (Especially pp. 108–111.) Mardia, K. V. (1978). Some properties of classical multidimensional scaling. *Communications in Statistics – Theory and Methods*, **A7**, 1233–41. doi: [10.1080/03610927808827707](https://doi.org/10.1080/03610927808827707) Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979). Chapter 14 of *Multivariate Analysis*, London: Academic Press. Seber, G. A. F. (1984). *Multivariate Observations*. New York: Wiley. Torgerson, W. S. (1958). *Theory and Methods of Scaling*. New York: Wiley. ### See Also `<dist>`. `[isoMDS](../../mass/html/isomds)` and `[sammon](../../mass/html/sammon)` in package [MASS](https://CRAN.R-project.org/package=MASS) provide alternative methods of multidimensional scaling. ### Examples ``` require(graphics) loc <- cmdscale(eurodist) x <- loc[, 1] y <- -loc[, 2] # reflect so North is at the top ## note asp = 1, to ensure Euclidean distances are represented correctly plot(x, y, type = "n", xlab = "", ylab = "", asp = 1, axes = FALSE, main = "cmdscale(eurodist)") text(x, y, rownames(loc), cex = 0.6) ``` r None `rect.hclust` Draw Rectangles Around Hierarchical Clusters ----------------------------------------------------------- ### Description Draws rectangles around the branches of a dendrogram highlighting the corresponding clusters. First the dendrogram is cut at a certain level, then a rectangle is drawn around selected branches. ### Usage ``` rect.hclust(tree, k = NULL, which = NULL, x = NULL, h = NULL, border = 2, cluster = NULL) ``` ### Arguments | | | | --- | --- | | `tree` | an object of the type produced by `hclust`. | | `k, h` | Scalars. Cut the dendrogram either such that exactly `k` clusters are produced or at height `h`. | | `which, x` | A vector selecting the clusters around which a rectangle should be drawn. `which` selects clusters by number (from left to right in the tree), `x` selects clusters containing the respective horizontal coordinates. Default is `which = 1:k`. | | `border` | Vector with border colors for the rectangles.
| | `cluster` | Optional vector with cluster memberships as returned by `cutree(hclust.obj, k = k)`, can be specified for efficiency if already computed. | ### Value (Invisibly) returns a list where each element contains a vector of data points contained in the respective cluster. ### See Also `<hclust>`, `<identify.hclust>`. ### Examples ``` require(graphics) hca <- hclust(dist(USArrests)) plot(hca) rect.hclust(hca, k = 3, border = "red") x <- rect.hclust(hca, h = 50, which = c(2,7), border = 3:4) x ``` r None `p.adjust` Adjust P-values for Multiple Comparisons ---------------------------------------------------- ### Description Given a set of p-values, returns p-values adjusted using one of several methods. ### Usage ``` p.adjust(p, method = p.adjust.methods, n = length(p)) p.adjust.methods # c("holm", "hochberg", "hommel", "bonferroni", "BH", "BY", # "fdr", "none") ``` ### Arguments | | | | --- | --- | | `p` | numeric vector of p-values (possibly with `[NA](../../base/html/na)`s). Any other **R** object is coerced by `[as.numeric](../../base/html/numeric)`. | | `method` | correction method, a `[character](../../base/html/character)` string. Can be abbreviated. | | `n` | number of comparisons, must be at least `length(p)`; only set this (to non-default) when you know what you are doing! | ### Details The adjustment methods include the Bonferroni correction (`"bonferroni"`) in which the p-values are multiplied by the number of comparisons. Less conservative corrections by Holm (1979) (`"holm"`), Hochberg (1988) (`"hochberg"`), Hommel (1988) (`"hommel"`), Benjamini & Hochberg (1995) (`"BH"` or its alias `"fdr"`), and Benjamini & Yekutieli (2001) (`"BY"`) are also included. A pass-through option (`"none"`) is also provided. The set of methods is contained in the `p.adjust.methods` vector for the benefit of methods that need to have the method as an option and pass it on to `p.adjust`. The first four methods are designed to give strong control of the family-wise error rate. There seems to be no reason to use the unmodified Bonferroni correction because it is dominated by Holm's method, which is also valid under arbitrary assumptions. Hochberg's and Hommel's methods are valid when the hypothesis tests are independent or when they are non-negatively associated (Sarkar, 1998; Sarkar and Chang, 1997). Hommel's method is more powerful than Hochberg's, but the difference is usually small and the Hochberg p-values are faster to compute. The `"BH"` (aka `"fdr"`) and `"BY"` methods of Benjamini, Hochberg, and Yekutieli control the false discovery rate, the expected proportion of false discoveries amongst the rejected hypotheses. The false discovery rate is a less stringent condition than the family-wise error rate, so these methods are more powerful than the others. Note that you can set `n` larger than `length(p)`, which means the unobserved p-values are assumed to be greater than all the observed p for the `"bonferroni"` and `"holm"` methods, and equal to 1 for the other methods. ### Value A numeric vector of corrected p-values (of the same length as `p`, with names copied from `p`). ### References Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. *Journal of the Royal Statistical Society Series B*, **57**, 289–300. doi: [10.1111/j.2517-6161.1995.tb02031.x](https://doi.org/10.1111/j.2517-6161.1995.tb02031.x). <https://www.jstor.org/stable/2346101>. Benjamini, Y., and Yekutieli, D. (2001).
The control of the false discovery rate in multiple testing under dependency. *Annals of Statistics*, **29**, 1165–1188. doi: [10.1214/aos/1013699998](https://doi.org/10.1214/aos/1013699998). Holm, S. (1979). A simple sequentially rejective multiple test procedure. *Scandinavian Journal of Statistics*, **6**, 65–70. <https://www.jstor.org/stable/4615733>. Hommel, G. (1988). A stagewise rejective multiple test procedure based on a modified Bonferroni test. *Biometrika*, **75**, 383–386. doi: [10.2307/2336190](https://doi.org/10.2307/2336190). Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. *Biometrika*, **75**, 800–803. doi: [10.2307/2336325](https://doi.org/10.2307/2336325). Shaffer, J. P. (1995). Multiple hypothesis testing. *Annual Review of Psychology*, **46**, 561–584. doi: [10.1146/annurev.ps.46.020195.003021](https://doi.org/10.1146/annurev.ps.46.020195.003021). (An excellent review of the area.) Sarkar, S. (1998). Some probability inequalities for ordered MTP2 random variables: a proof of Simes conjecture. *Annals of Statistics*, **26**, 494–504. doi: [10.1214/aos/1028144846](https://doi.org/10.1214/aos/1028144846). Sarkar, S., and Chang, C. K. (1997). The Simes method for multiple hypothesis testing with positively dependent test statistics. *Journal of the American Statistical Association*, **92**, 1601–1608. doi: [10.2307/2965431](https://doi.org/10.2307/2965431). Wright, S. P. (1992). Adjusted P-values for simultaneous inference. *Biometrics*, **48**, 1005–1013. doi: [10.2307/2532694](https://doi.org/10.2307/2532694). (Explains the adjusted P-value approach.) ### See Also `pairwise.*` functions such as `<pairwise.t.test>`. ### Examples ``` require(graphics) set.seed(123) x <- rnorm(50, mean = c(rep(0, 25), rep(3, 25))) p <- 2*pnorm(sort(-abs(x))) round(p, 3) round(p.adjust(p), 3) round(p.adjust(p, "BH"), 3) ## or all of them at once (dropping the "fdr" alias): p.adjust.M <- p.adjust.methods[p.adjust.methods != "fdr"] p.adj <- sapply(p.adjust.M, function(meth) p.adjust(p, meth)) p.adj.60 <- sapply(p.adjust.M, function(meth) p.adjust(p, meth, n = 60)) stopifnot(identical(p.adj[,"none"], p), p.adj <= p.adj.60) round(p.adj, 3) ## or a bit nicer: noquote(apply(p.adj, 2, format.pval, digits = 3)) ## and a graphic: matplot(p, p.adj, ylab="p.adjust(p, meth)", type = "l", asp = 1, lty = 1:6, main = "P-value adjustments") legend(0.7, 0.6, p.adjust.M, col = 1:6, lty = 1:6) ## Can work with NA's: pN <- p; iN <- c(46, 47); pN[iN] <- NA pN.a <- sapply(p.adjust.M, function(meth) p.adjust(pN, meth)) ## The smallest 20 P-values all affected by the NA's : round((pN.a / p.adj)[1:20, ] , 4) ``` r None `Tukey` The Studentized Range Distribution ------------------------------------------- ### Description Functions of the distribution of the studentized range, *R/s*, where *R* is the range of a standard normal sample and *df\*s^2* is independently distributed as chi-squared with *df* degrees of freedom, see `[pchisq](chisquare)`. ### Usage ``` ptukey(q, nmeans, df, nranges = 1, lower.tail = TRUE, log.p = FALSE) qtukey(p, nmeans, df, nranges = 1, lower.tail = TRUE, log.p = FALSE) ``` ### Arguments | | | | --- | --- | | `q` | vector of quantiles. | | `p` | vector of probabilities. | | `nmeans` | sample size for range (same for each group). | | `df` | degrees of freedom for *s* (see below). | | `nranges` | number of *groups* whose **maximum** range is considered. | | `log.p` | logical; if TRUE, probabilities p are given as log(p). 
| | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details If *ng =*`nranges` is greater than one, *R* is the *maximum* of *ng* groups of `nmeans` observations each. ### Value `ptukey` gives the distribution function and `qtukey` its inverse, the quantile function. The length of the result is the maximum of the lengths of the numerical arguments. The other numerical arguments are recycled to that length. Only the first elements of the logical arguments are used. ### Note A Legendre 16-point formula is used for the integral of `ptukey`. The computations are relatively expensive, especially for `qtukey` which uses a simple secant method for finding the inverse of `ptukey`. `qtukey` will be accurate to the 4th decimal place. ### Source `qtukey` is in part adapted from Odeh and Evans (1974). ### References Copenhaver, Margaret Diponzio and Holland, Burt S. (1988). Computation of the distribution of the maximum studentized range statistic with application to multiple significance testing of simple effects. *Journal of Statistical Computation and Simulation*, **30**, 1–15. doi: [10.1080/00949658808811082](https://doi.org/10.1080/00949658808811082). Odeh, R. E. and Evans, J. O. (1974). Algorithm AS 70: Percentage Points of the Normal Distribution. *Applied Statistics*, **23**, 96–97. doi: [10.2307/2347061](https://doi.org/10.2307/2347061). ### See Also [Distributions](distributions) for standard distributions, including `[pnorm](normal)` and `[qnorm](normal)` for the corresponding functions for the normal distribution. ### Examples ``` if(interactive()) curve(ptukey(x, nm = 6, df = 5), from = -1, to = 8, n = 101) (ptt <- ptukey(0:10, 2, df = 5)) (qtt <- qtukey(.95, 2, df = 2:11)) ## The precision may be not much more than about 8 digits: summary(abs(.95 - ptukey(qtt, 2, df = 2:11))) ``` r None `na.fail` Handle Missing Values in Objects ------------------------------------------- ### Description These generic functions are useful for dealing with `[NA](../../base/html/na)`s in e.g., data frames. `na.fail` returns the object if it does not contain any missing values, and signals an error otherwise. `na.omit` returns the object with incomplete cases removed. `na.pass` returns the object unchanged. ### Usage ``` na.fail(object, ...) na.omit(object, ...) na.exclude(object, ...) na.pass(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an **R** object, typically a data frame | | `...` | further arguments special methods could require. | ### Details At present these will handle vectors, matrices and data frames comprising vectors and matrices (only). If `na.omit` removes cases, the row numbers of the cases form the `"na.action"` attribute of the result, of class `"omit"`. `na.exclude` differs from `na.omit` only in the class of the `"na.action"` attribute of the result, which is `"exclude"`. This gives different behaviour in functions making use of `[naresid](nafns)` and `[napredict](nafns)`: when `na.exclude` is used the residuals and predictions are padded to the correct length by inserting `NA`s for cases omitted by `na.exclude`. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `<na.action>`; `[options](../../base/html/options)` with argument `na.action` for setting NA actions; and `<lm>` and `<glm>` for functions using these. `<na.contiguous>` as alternative for time series. 
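The practical difference between `na.omit` and `na.exclude` is easiest to see in the length of the residuals from a model fit; a minimal sketch (the small data frame is illustrative only):

```
## na.exclude pads residuals back to the length of the original data
d <- data.frame(x = 1:5, y = c(1, 2, NA, 4, 5))
fit.om <- lm(y ~ x, data = d, na.action = na.omit)
fit.ex <- lm(y ~ x, data = d, na.action = na.exclude)
length(residuals(fit.om)) # 4: the incomplete case is dropped
length(residuals(fit.ex)) # 5: an NA is inserted for the dropped case
```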
### Examples ``` DF <- data.frame(x = c(1, 2, 3), y = c(0, 10, NA)) na.omit(DF) m <- as.matrix(DF) na.omit(m) stopifnot(all(na.omit(1:3) == 1:3)) # does not affect objects with no NA's try(na.fail(DF)) #> Error: missing values in ... options("na.action") ``` r None `ls.print` Print lsfit Regression Results ------------------------------------------ ### Description Computes basic statistics, including standard errors, t- and p-values for the regression coefficients, and prints them if `print.it` is `TRUE`. ### Usage ``` ls.print(ls.out, digits = 4, print.it = TRUE) ``` ### Arguments | | | | --- | --- | | `ls.out` | Typically the result of `<lsfit>()` | | `digits` | The number of significant digits used for printing | | `print.it` | a logical indicating whether the result should also be printed | ### Value A list with the components | | | | --- | --- | | `summary` | The ANOVA table of the regression | | `coef.table` | matrix with regression coefficients, standard errors, t- and p-values | ### Note Usually you would use `summary(lm(...))` and `anova(lm(...))` to obtain similar output. ### See Also `<ls.diag>`, `<lsfit>`, also for examples; `<lm>`, `<lm.influence>`, which are usually preferable. r None `checkMFClasses` Functions to Check the Type of Variables passed to Model Frames --------------------------------------------------------------------------------- ### Description `.checkMFClasses` checks if the variables used in a predict method agree in type with those used for fitting. `.MFclass` categorizes variables for this purpose. `.getXlevels()` extracts factor levels from `[factor](../../base/html/factor)` or `[character](../../base/html/character)` variables. ### Usage ``` .checkMFClasses(cl, m, ordNotOK = FALSE) .MFclass(x) .getXlevels(Terms, m) ``` ### Arguments | | | | --- | --- | | `cl` | a character vector of class descriptions to match. | | `m` | a model frame (`<model.frame>()` result). | | `x` | any **R** object. | | `ordNotOK` | logical: are ordered factors different? | | `Terms` | a `terms` object (`<terms.object>`). | ### Details For applications involving `<model.matrix>()`, such as linear models, we do not need to differentiate between *ordered* factors and plain factors: although the ordering affects the coding, the coding used in the fit is already recorded and imposed during prediction. However, other applications may treat ordered factors differently: `[rpart](../../rpart/html/rpart)` does, for example. ### Value `.checkMFClasses()` checks and either signals an error calling `[stop](../../base/html/stop)()` or returns `[NULL](../../base/html/null)` invisibly. `.MFclass()` returns a character string, one of `"logical"`, `"ordered"`, `"factor"`, `"numeric"`, `"nmatrix.*"` (a numeric matrix, with its number of columns appended) or `"other"`. `.getXlevels` returns a named `[list](../../base/html/list)` of character vectors, possibly empty, or `[NULL](../../base/html/null)`.
### Examples ``` sapply(warpbreaks, .MFclass) # "numeric" plus 2 x "factor" sapply(iris, .MFclass) # 4 x "numeric" plus "factor" mf <- model.frame(Sepal.Width ~ Species, iris) mc <- model.frame(Sepal.Width ~ Sepal.Length, iris) .checkMFClasses("numeric", mc) # nothing else .checkMFClasses(c("numeric", "factor"), mf) ## simple .getXlevels() cases : (xl <- .getXlevels(terms(mf), mf)) # a list with one entry " $ Species" with 3 levels: stopifnot(exprs = { identical(xl$Species, levels(iris$Species)) identical(.getXlevels(terms(mc), mc), xl[0]) # an empty named list, as no factors is.null(.getXlevels(terms(x~x), list(x=1))) }) ```
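The `"nmatrix.*"` convention described under ‘Value’ can be seen directly; a short additional illustration (restricted to the documented return values):

```
## .MFclass() encodes a numeric matrix together with its column count
.MFclass(TRUE)                 # "logical"
.MFclass(ordered(c("a", "b"))) # "ordered"
.MFclass(matrix(1.0, 2, 3))    # "nmatrix.3"
```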
r None `ARMAtoMA` Convert ARMA Process to Infinite MA Process ------------------------------------------------------- ### Description Convert ARMA process to infinite MA process. ### Usage ``` ARMAtoMA(ar = numeric(), ma = numeric(), lag.max) ``` ### Arguments | | | | --- | --- | | `ar` | numeric vector of AR coefficients | | `ma` | numeric vector of MA coefficients | | `lag.max` | Largest MA(Inf) coefficient required. | ### Value A vector of coefficients. ### References Brockwell, P. J. and Davis, R. A. (1991) *Time Series: Theory and Methods*, Second Edition. Springer. ### See Also `<arima>`, `[ARMAacf](armaacf)`. ### Examples ``` ARMAtoMA(c(1.0, -0.25), 1.0, 10) ## Example from Brockwell & Davis (1991, p.92) ## answer (1 + 3*n)*2^(-n) n <- 1:10; (1 + 3*n)*2^(-n) ``` r None `decompose` Classical Seasonal Decomposition by Moving Averages ---------------------------------------------------------------- ### Description Decompose a time series into seasonal, trend and irregular components using moving averages. Deals with additive or multiplicative seasonal component. ### Usage ``` decompose(x, type = c("additive", "multiplicative"), filter = NULL) ``` ### Arguments | | | | --- | --- | | `x` | A time series. | | `type` | The type of seasonal component. Can be abbreviated. | | `filter` | A vector of filter coefficients in reverse time order (as for AR or MA coefficients), used for filtering out the seasonal component. If `NULL`, a moving average with symmetric window is performed. | ### Details The additive model used is: *Y[t] = T[t] + S[t] + e[t]* The multiplicative model used is: *Y[t] = T[t] \* S[t] \* e[t]* The function first determines the trend component using a moving average (if `filter` is `NULL`, a symmetric window with equal weights is used), and removes it from the time series. Then, the seasonal figure is computed by averaging, for each time unit, over all periods. The seasonal figure is then centered. Finally, the error component is determined by removing trend and seasonal figure (recycled as needed) from the original time series. This only works well if `x` covers an integer number of complete periods. ### Value An object of class `"decomposed.ts"` with following components: | | | | --- | --- | | `x` | The original series. | | `seasonal` | The seasonal component (i.e., the repeated seasonal figure). | | `figure` | The estimated seasonal figure only. | | `trend` | The trend component. | | `random` | The remainder part. | | `type` | The value of `type`. | ### Note The function `<stl>` provides a much more sophisticated decomposition. ### Author(s) David Meyer [[email protected]](mailto:[email protected]) ### References M. Kendall and A. Stuart (1983) *The Advanced Theory of Statistics*, Vol.3, Griffin. pp. 410–414. ### See Also `<stl>` ### Examples ``` require(graphics) m <- decompose(co2) m$figure plot(m) ## example taken from Kendall/Stuart x <- c(-50, 175, 149, 214, 247, 237, 225, 329, 729, 809, 530, 489, 540, 457, 195, 176, 337, 239, 128, 102, 232, 429, 3, 98, 43, -141, -77, -13, 125, 361, -45, 184) x <- ts(x, start = c(1951, 1), end = c(1958, 4), frequency = 4) m <- decompose(x) ## seasonal figure: 6.25, 8.62, -8.84, -6.03 round(decompose(x)$figure / 10, 2) ``` r None `plot.stepfun` Plot Step Functions ----------------------------------- ### Description Method of the generic `[plot](../../graphics/html/plot.default)` for `<stepfun>` objects and utility for plotting piecewise constant functions. 
### Usage ``` ## S3 method for class 'stepfun' plot(x, xval, xlim, ylim = range(c(y, Fn.kn)), xlab = "x", ylab = "f(x)", main = NULL, add = FALSE, verticals = TRUE, do.points = (n < 1000), pch = par("pch"), col = par("col"), col.points = col, cex.points = par("cex"), col.hor = col, col.vert = col, lty = par("lty"), lwd = par("lwd"), ...) ## S3 method for class 'stepfun' lines(x, ...) ``` ### Arguments | | | | --- | --- | | `x` | an **R** object inheriting from `"stepfun"`. | | `xval` | numeric vector of abscissa values at which to evaluate `x`. Defaults to `[knots](stepfun)(x)` restricted to `xlim`. | | `xlim, ylim` | limits for the plot region: see `[plot.window](../../graphics/html/plot.window)`. Both have sensible defaults if omitted. | | `xlab, ylab` | labels for x and y axis. | | `main` | main title. | | `add` | logical; if `TRUE` only *add* to an existing plot. | | `verticals` | logical; if `TRUE`, draw vertical lines at steps. | | `do.points` | logical; if `TRUE`, also draw points at the (`xlim` restricted) knot locations. Default is true, for sample size *< 1000*. | | | | | --- | --- | | `pch` | character; point character if `do.points`. | | `col` | default color of all points and lines. | | `col.points` | character or integer code; color of points if `do.points`. | | `cex.points` | numeric; character expansion factor if `do.points`. | | `col.hor` | color of horizontal lines. | | `col.vert` | color of vertical lines. | | `lty, lwd` | line type and thickness for all lines. | | `...` | further arguments of `[plot](../../graphics/html/plot.default)(.)`, or if`(add)` `[segments](../../graphics/html/segments)(.)`. | ### Value A list with two components | | | | --- | --- | | `t` | abscissa (x) values, including the two outermost ones. | | `y` | y values ‘in between’ the `t[]`. | ### Author(s) Martin Maechler [[email protected]](mailto:[email protected]), 1990, 1993; ported to **R**, 1997. ### See Also `<ecdf>` for empirical distribution functions as special step functions, `<approxfun>` and `<splinefun>`. ### Examples ``` require(graphics) y0 <- c(1,2,4,3) sfun0 <- stepfun(1:3, y0, f = 0) sfun.2 <- stepfun(1:3, y0, f = .2) sfun1 <- stepfun(1:3, y0, right = TRUE) tt <- seq(0, 3, by = 0.1) op <- par(mfrow = c(2,2)) plot(sfun0); plot(sfun0, xval = tt, add = TRUE, col.hor = "bisque") plot(sfun.2);plot(sfun.2, xval = tt, add = TRUE, col = "orange") # all colors plot(sfun1);lines(sfun1, xval = tt, col.hor = "coral") ##-- This is revealing : plot(sfun0, verticals = FALSE, main = "stepfun(x, y0, f=f) for f = 0, .2, 1") for(i in 1:3) lines(list(sfun0, sfun.2, stepfun(1:3, y0, f = 1))[[i]], col = i) legend(2.5, 1.9, paste("f =", c(0, 0.2, 1)), col = 1:3, lty = 1, y.intersp = 1) par(op) # Extend and/or restrict 'viewport': plot(sfun0, xlim = c(0,5), ylim = c(0, 3.5), main = "plot(stepfun(*), xlim= . , ylim = .)") ##-- this works too (automatic call to ecdf(.)): plot.stepfun(rt(50, df = 3), col.vert = "gray20") ``` r None `Wilcoxon` Distribution of the Wilcoxon Rank Sum Statistic ----------------------------------------------------------- ### Description Density, distribution function, quantile function and random generation for the distribution of the Wilcoxon rank sum statistic obtained from samples with size `m` and `n`, respectively. ### Usage ``` dwilcox(x, m, n, log = FALSE) pwilcox(q, m, n, lower.tail = TRUE, log.p = FALSE) qwilcox(p, m, n, lower.tail = TRUE, log.p = FALSE) rwilcox(nn, m, n) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. 
| | `p` | vector of probabilities. | | `nn` | number of observations. If `length(nn) > 1`, the length is taken to be the number required. | | `m, n` | numbers of observations in the first and second sample, respectively. Can be vectors of positive integers. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details This distribution is obtained as follows. Let `x` and `y` be two random, independent samples of size `m` and `n`. Then the Wilcoxon rank sum statistic is the number of all pairs `(x[i], y[j])` for which `y[j]` is not greater than `x[i]`. This statistic takes values between `0` and `m * n`, and its mean and variance are `m * n / 2` and `m * n * (m + n + 1) / 12`, respectively. If any of the first three arguments are vectors, the recycling rule is used to do the calculations for all combinations of the three up to the length of the longest vector. ### Value `dwilcox` gives the density, `pwilcox` gives the distribution function, `qwilcox` gives the quantile function, and `rwilcox` generates random deviates. The length of the result is determined by `nn` for `rwilcox`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `nn` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Warning These functions can use large amounts of memory and stack (and even crash **R** if the stack limit is exceeded and stack-checking is not in place) if one sample is large (several thousands or more). ### Note S-PLUS uses a different (but equivalent) definition of the Wilcoxon statistic: see `<wilcox.test>` for details. ### Author(s) Kurt Hornik ### Source These ("d","p","q") are calculated via recursion, based on `cwilcox(k, m, n)`, the number of choices with statistic `k` from samples of size `m` and `n`, which is itself calculated recursively and the results cached. Then `dwilcox` and `pwilcox` sum appropriate values of `cwilcox`, and `qwilcox` is based on inversion. `rwilcox` generates a random permutation of ranks and evaluates the statistic. Note that it is based on the same C code as `[sample](../../base/html/sample)()`, and hence is determined by `[.Random.seed](../../base/html/random)`, notably from `[RNGkind](../../base/html/random)(sample.kind = ..)` which changed with **R** version 3.6.0. ### See Also `<wilcox.test>` to calculate the statistic from data, find p values and so on. [Distributions](distributions) for standard distributions, including `[dsignrank](signrank)` for the distribution of the *one-sample* Wilcoxon signed rank statistic. 
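The mean and variance stated in ‘Details’ can be verified exactly from the density; a quick sketch for `m = 4`, `n = 6`:

```
## mean and variance of the rank sum statistic via dwilcox()
m <- 4; n <- 6; x <- 0:(m * n)
mu <- sum(x * dwilcox(x, m, n))           # m*n/2 = 12
s2 <- sum((x - mu)^2 * dwilcox(x, m, n))  # m*n*(m+n+1)/12 = 22
stopifnot(all.equal(mu, m * n / 2),
          all.equal(s2, m * n * (m + n + 1) / 12))
```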
### Examples ``` require(graphics) x <- -1:(4*6 + 1) fx <- dwilcox(x, 4, 6) Fx <- pwilcox(x, 4, 6) layout(rbind(1,2), widths = 1, heights = c(3,2)) plot(x, fx, type = "h", col = "violet", main = "Probabilities (density) of Wilcoxon-Statist.(n=6, m=4)") plot(x, Fx, type = "s", col = "blue", main = "Distribution of Wilcoxon-Statist.(n=6, m=4)") abline(h = 0:1, col = "gray20", lty = 2) layout(1) # set back N <- 200 hist(U <- rwilcox(N, m = 4,n = 6), breaks = 0:25 - 1/2, border = "red", col = "pink", sub = paste("N =",N)) mtext("N * f(x), f() = true \"density\"", side = 3, col = "blue") lines(x, N*fx, type = "h", col = "blue", lwd = 2) points(x, N*fx, cex = 2) ## Better is a Quantile-Quantile Plot qqplot(U, qw <- qwilcox((1:N - 1/2)/N, m = 4, n = 6), main = paste("Q-Q-Plot of empirical and theoretical quantiles", "Wilcoxon Statistic, (m=4, n=6)", sep = "\n")) n <- as.numeric(names(print(tU <- table(U)))) text(n+.2, n+.5, labels = tU, col = "red") ``` r None `Beta` The Beta Distribution ----------------------------- ### Description Density, distribution function, quantile function and random generation for the Beta distribution with parameters `shape1` and `shape2` (and optional non-centrality parameter `ncp`). ### Usage ``` dbeta(x, shape1, shape2, ncp = 0, log = FALSE) pbeta(q, shape1, shape2, ncp = 0, lower.tail = TRUE, log.p = FALSE) qbeta(p, shape1, shape2, ncp = 0, lower.tail = TRUE, log.p = FALSE) rbeta(n, shape1, shape2, ncp = 0) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `shape1, shape2` | non-negative parameters of the Beta distribution. | | `ncp` | non-centrality parameter. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The Beta distribution with parameters `shape1` *= a* and `shape2` *= b* has density *Γ(a+b)/(Γ(a)Γ(b))x^(a-1)(1-x)^(b-1)* for *a > 0*, *b > 0* and *0 ≤ x ≤ 1* where the boundary values at *x=0* or *x=1* are defined as by continuity (as limits). The mean is *a/(a+b)* and the variance is *ab/((a+b)^2 (a+b+1))*. These moments and all distributional properties can be defined as limits (leading to point masses at 0, 1/2, or 1) when *a* or *b* are zero or infinite, and the corresponding `[dpqr]beta()` functions are defined correspondingly. `pbeta` is closely related to the incomplete beta function. As defined by Abramowitz and Stegun 6.6.1 *B\_x(a,b) = integral\_0^x t^(a-1) (1-t)^(b-1) dt,* and 6.6.2 *I\_x(a,b) = B\_x(a,b) / B(a,b)* where *B(a,b) = B\_1(a,b)* is the Beta function (`[beta](../../base/html/special)`). *I\_x(a,b)* is `pbeta(x, a, b)`. The noncentral Beta distribution (with `ncp` *= λ*) is defined (Johnson *et al*, 1995, pp. 502) as the distribution of *X/(X+Y)* where *X ~ chi^2\_2a(λ)* and *Y ~ chi^2\_2b*. ### Value `dbeta` gives the density, `pbeta` the distribution function, `qbeta` the quantile function, and `rbeta` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rbeta`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. 
### Note Supplying `ncp = 0` uses the algorithm for the non-central distribution, which is not the same algorithm as when `ncp` is omitted. This is to give consistent behaviour in extreme cases with values of `ncp` very near zero. ### Source * The central `dbeta` is based on a binomial probability, using code contributed by Catherine Loader (see `[dbinom](family)`) if either shape parameter is larger than one, otherwise directly from the definition. The non-central case is based on the derivation as a Poisson mixture of betas (Johnson *et al*, 1995, pp. 502–3). * The central `pbeta` for the default (`log_p = FALSE`) uses a C translation based on Didonato, A. and Morris, A., Jr, (1992) Algorithm 708: Significant digit computation of the incomplete beta function ratios, *ACM Transactions on Mathematical Software*, **18**, 360–373. (See also Brown, B. and Lawrence Levy, L. (1994) Certification of algorithm 708: Significant digit computation of the incomplete beta, *ACM Transactions on Mathematical Software*, **20**, 393–397.) We have slightly tweaked the original “TOMS 708” algorithm, and enhanced for `log.p = TRUE`. For that (log-scale) case, underflow to `-Inf` (i.e., *P = 0*) or `0`, (i.e., *P = 1*) still happens because the original algorithm was designed without log-scale considerations. Underflow to `-Inf` now typically signals a `[warning](../../base/html/warning)`. * The non-central `pbeta` uses a C translation of Lenth, R. V. (1987) Algorithm AS 226: Computing noncentral beta probabilities. *Appl. Statist*, **36**, 241–244, incorporating Frick, H. (1990)'s AS R84, *Appl. Statist*, **39**, 311–2, and Lam, M.L. (1995)'s AS R95, *Appl. Statist*, **44**, 551–2. This computes the lower tail only, so the upper tail suffers from cancellation and a warning will be given when this is likely to be significant. * The central case of `qbeta` is based on a C translation of Cran, G. W., K. J. Martin and G. E. Thomas (1977). Remark AS R19 and Algorithm AS 109, *Applied Statistics*, **26**, 111–114, and subsequent remarks (AS83 and correction). * The central case of `rbeta` is based on a C translation of R. C. H. Cheng (1978). Generating beta variates with nonintegral shape parameters. *Communications of the ACM*, **21**, 317–322. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Abramowitz, M. and Stegun, I. A. (1972) *Handbook of Mathematical Functions.* New York: Dover. Chapter 6: Gamma and Related Functions. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 2, especially chapter 25. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions. `[beta](../../base/html/special)` for the Beta function. 
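The identity between `pbeta` and the regularized incomplete beta function *I\_x(a,b)* stated in ‘Details’ can be checked numerically; a small sketch with arbitrary parameter values:

```
## pbeta(x, a, b) equals B_x(a,b) / B(a,b), up to integration error
a <- 2; b <- 3; x <- 0.4
Bx <- integrate(function(t) t^(a-1) * (1-t)^(b-1), lower = 0, upper = x)$value
all.equal(pbeta(x, a, b), Bx / beta(a, b)) # TRUE
```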
### Examples ``` x <- seq(0, 1, length.out = 21) dbeta(x, 1, 1) pbeta(x, 1, 1) ## Visualization, including limit cases: pl.beta <- function(a,b, asp = if(isLim) 1, ylim = if(isLim) c(0,1.1)) { if(isLim <- a == 0 || b == 0 || a == Inf || b == Inf) { eps <- 1e-10 x <- c(0, eps, (1:7)/16, 1/2+c(-eps,0,eps), (9:15)/16, 1-eps, 1) } else { x <- seq(0, 1, length.out = 1025) } fx <- cbind(dbeta(x, a,b), pbeta(x, a,b), qbeta(x, a,b)) f <- fx; f[fx == Inf] <- 1e100 matplot(x, f, ylab="", type="l", ylim=ylim, asp=asp, main = sprintf("[dpq]beta(x, a=%g, b=%g)", a,b)) abline(0,1, col="gray", lty=3) abline(h = 0:1, col="gray", lty=3) legend("top", paste0(c("d","p","q"), "beta(x, a,b)"), col=1:3, lty=1:3, bty = "n") invisible(cbind(x, fx)) } pl.beta(3,1) pl.beta(2, 4) pl.beta(3, 7) pl.beta(3, 7, asp=1) pl.beta(0, 0) ## point masses at {0, 1} pl.beta(0, 2) ## point mass at 0 ; the same as pl.beta(1, Inf) pl.beta(Inf, 2) ## point mass at 1 ; the same as pl.beta(3, 0) pl.beta(Inf, Inf)# point mass at 1/2 ``` r None `screeplot` Screeplots ----------------------- ### Description `screeplot.default` plots the variances against the number of the principal component. This is also the `plot` method for classes `"princomp"` and `"prcomp"`. ### Usage ``` ## Default S3 method: screeplot(x, npcs = min(10, length(x$sdev)), type = c("barplot", "lines"), main = deparse1(substitute(x)), ...) ``` ### Arguments | | | | --- | --- | | `x` | an object containing a `sdev` component, such as that returned by `<princomp>()` and `<prcomp>()`. | | `npcs` | the number of components to be plotted. | | `type` | the type of plot. Can be abbreviated. | | `main, ...` | graphics parameters. | ### References Mardia, K. V., J. T. Kent and J. M. Bibby (1979). *Multivariate Analysis*, London: Academic Press. Venables, W. N. and B. D. Ripley (2002). *Modern Applied Statistics with S*, Springer-Verlag. ### See Also `<princomp>` and `<prcomp>`. ### Examples ``` require(graphics) ## The variances of the variables in the ## USArrests data vary by orders of magnitude, so scaling is appropriate (pc.cr <- princomp(USArrests, cor = TRUE)) # inappropriate screeplot(pc.cr) fit <- princomp(covmat = Harman74.cor) screeplot(fit) screeplot(fit, npcs = 24, type = "lines") ``` r None `SSgompertz` Self-Starting Nls Gompertz Growth Model ----------------------------------------------------- ### Description This `selfStart` model evaluates the Gompertz growth model and its gradient. It has an `initial` attribute that creates initial estimates of the parameters `Asym`, `b2`, and `b3`. ### Usage ``` SSgompertz(x, Asym, b2, b3) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of values at which to evaluate the model. | | `Asym` | a numeric parameter representing the asymptote. | | `b2` | a numeric parameter related to the value of the function at `x = 0`. | | `b3` | a numeric parameter related to the scale of the `x` axis. | ### Value a numeric vector of the same length as `x`. It is the value of the expression `Asym*exp(-b2*b3^x)`. If all of the arguments `Asym`, `b2`, and `b3` are names of objects the gradient matrix with respect to these names is attached as an attribute named `gradient`.
### Author(s) Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` DNase.1 <- subset(DNase, Run == 1) SSgompertz(log(DNase.1$conc), 4.5, 2.3, 0.7) # response only local({ Asym <- 4.5; b2 <- 2.3; b3 <- 0.7 SSgompertz(log(DNase.1$conc), Asym, b2, b3) # response _and_ gradient }) print(getInitial(density ~ SSgompertz(log(conc), Asym, b2, b3), data = DNase.1), digits = 5) ## Initial values are in fact the converged values fm1 <- nls(density ~ SSgompertz(log(conc), Asym, b2, b3), data = DNase.1) summary(fm1) plot(density ~ log(conc), DNase.1, # xlim = c(0, 21), main = "SSgompertz() fit to DNase.1") ux <- par("usr")[1:2]; x <- seq(ux[1], ux[2], length.out=250) lines(x, do.call(SSgompertz, c(list(x=x), coef(fm1))), col = "red", lwd=2) As <- coef(fm1)[["Asym"]]; abline(v = 0, h = 0, lty = 3) axis(2, at= exp(-coef(fm1)[["b2"]]), quote(e^{-b[2]}), las=1, pos=0) ```
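Since `b3^0 = 1`, the model value at `x = 0` reduces to `Asym * exp(-b2)`, which is the sense in which `b2` is related to the value of the function at `x = 0`; a quick check:

```
## the Gompertz curve at x = 0 equals Asym * exp(-b2)
Asym <- 4.5; b2 <- 2.3; b3 <- 0.7
stopifnot(all.equal(SSgompertz(0, Asym, b2, b3), Asym * exp(-b2)))
```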
r None `nobs` Extract the Number of Observations from a Fit. ------------------------------------------------------ ### Description Extract the number of ‘observations’ from a model fit. This is principally intended to be used in computing BIC (see `[AIC](aic)`). ### Usage ``` nobs(object, ...) ## Default S3 method: nobs(object, use.fallback = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | A fitted model object. | | `use.fallback` | logical: should fallback methods be used to try to guess the value? | | `...` | Further arguments to be passed to methods. | ### Details This is a generic function, with an S4 generic in package stats4. There are methods in this package for objects of classes `"<lm>"`, `"<glm>"`, `"<nls>"` and `"[logLik](loglik)"`, as well as a default method (which throws an error, unless `use.fallback = TRUE` when it looks for `weights` and `residuals` components – use with care!). The main usage is in determining the appropriate penalty for BIC, but `nobs` is also used by the stepwise fitting methods `<step>`, `<add1>` and `[drop1](add1)` as a quick check that different fits have been fitted to the same set of data (and not, say, that further rows have been dropped because of NAs in the new predictors). For `lm`, `glm` and `nls` fits, observations with zero weight are not included. ### Value A single number, normally an integer. Could be `NA`. ### See Also `[AIC](aic)`. r None `factor.scope` Compute Allowed Changes in Adding to or Dropping from a Formula ------------------------------------------------------------------------------- ### Description `add.scope` and `drop.scope` compute those terms that can be individually added to or dropped from a model while respecting the hierarchy of terms. ### Usage ``` add.scope(terms1, terms2) drop.scope(terms1, terms2) factor.scope(factor, scope) ``` ### Arguments | | | | --- | --- | | `terms1` | the terms or formula for the base model. | | `terms2` | the terms or formula for the upper (`add.scope`) or lower (`drop.scope`) scope. If missing for `drop.scope` it is taken to be the null formula, so all terms (except any intercept) are candidates to be dropped. | | `factor` | the `"factor"` attribute of the terms of the base object. | | `scope` | a list with one or both components `drop` and `add` giving the `"factor"` attribute of the lower and upper scopes respectively. | ### Details `factor.scope` is not intended to be called directly by users. ### Value For `add.scope` and `drop.scope` a character vector of terms labels. For `factor.scope`, a list with components `drop` and `add`, character vectors of terms labels. ### See Also `<add1>`, `[drop1](add1)`, `<aov>`, `<lm>` ### Examples ``` add.scope( ~ a + b + c + a:b, ~ (a + b + c)^3) # [1] "a:c" "b:c" drop.scope( ~ a + b + c + a:b) # [1] "c" "a:b" ``` r None `step` Choose a model by AIC in a Stepwise Algorithm ----------------------------------------------------- ### Description Select a formula-based model by AIC. ### Usage ``` step(object, scope, scale = 0, direction = c("both", "backward", "forward"), trace = 1, keep = NULL, steps = 1000, k = 2, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object representing a model of an appropriate class (mainly `"lm"` and `"glm"`). This is used as the initial model in the stepwise search. | | `scope` | defines the range of models examined in the stepwise search. This should be either a single formula, or a list containing components `upper` and `lower`, both formulae. 
See the details for how to specify the formulae and how they are used. | | `scale` | used in the definition of the AIC statistic for selecting the models, currently only for `<lm>`, `<aov>` and `<glm>` models. The default value, `0`, indicates the scale should be estimated: see `[extractAIC](extractaic)`. | | `direction` | the mode of stepwise search, can be one of `"both"`, `"backward"`, or `"forward"`, with a default of `"both"`. If the `scope` argument is missing the default for `direction` is `"backward"`. Values can be abbreviated. | | `trace` | if positive, information is printed during the running of `step`. Larger values may give more detailed information. | | `keep` | a filter function whose input is a fitted model object and the associated `AIC` statistic, and whose output is arbitrary. Typically `keep` will select a subset of the components of the object and return them. The default is not to keep anything. | | `steps` | the maximum number of steps to be considered. The default is 1000 (essentially as many as required). It is typically used to stop the process early. | | `k` | the multiple of the number of degrees of freedom used for the penalty. Only `k = 2` gives the genuine AIC: `k = log(n)` is sometimes referred to as BIC or SBC. | | `...` | any additional arguments to `[extractAIC](extractaic)`. | ### Details `step` uses `<add1>` and `[drop1](add1)` repeatedly; it will work for any method for which they work, and that is determined by having a valid method for `[extractAIC](extractaic)`. When the additive constant can be chosen so that AIC is equal to Mallows' *Cp*, this is done and the tables are labelled appropriately. The set of models searched is determined by the `scope` argument. The right-hand-side of its `lower` component is always included in the model, and right-hand-side of the model is included in the `upper` component. If `scope` is a single formula, it specifies the `upper` component, and the `lower` model is empty. If `scope` is missing, the initial model is used as the `upper` model. Models specified by `scope` can be templates to update `object` as used by `<update.formula>`. So using `.` in a `scope` formula means ‘what is already there’, with `.^2` indicating all interactions of existing terms. There is a potential problem in using `<glm>` fits with a variable `scale`, as in that case the deviance is not simply related to the maximized log-likelihood. The `"glm"` method for function `[extractAIC](extractaic)` makes the appropriate adjustment for a `gaussian` family, but may need to be amended for other cases. (The `binomial` and `poisson` families have fixed `scale` by default and do not correspond to a particular maximum-likelihood problem for variable `scale`.) ### Value the stepwise-selected model is returned, with up to two additional components. There is an `"anova"` component corresponding to the steps taken in the search, as well as a `"keep"` component if the `keep=` argument was supplied in the call. The `"Resid. Dev"` column of the analysis of deviance table refers to a constant minus twice the maximized log likelihood: it will be a deviance only in cases where a saturated model is well-defined (thus excluding `lm`, `aov` and `survreg` fits, for example). ### Warning The model fitting must apply the models to the same dataset. This may be a problem if there are missing values and **R**'s default of `na.action = na.omit` is used. We suggest you remove the missing values first. 
Calls to the function `<nobs>` are used to check that the number of observations involved in the fitting process remains unchanged. ### Note This function differs considerably from the function in S, which uses a number of approximations and does not in general compute the correct AIC. This is a minimal implementation. Use `[stepAIC](../../mass/html/stepaic)` in package [MASS](https://CRAN.R-project.org/package=MASS) for a wider range of object classes. ### Author(s) B. D. Ripley: `step` is a slightly simplified version of `[stepAIC](../../mass/html/stepaic)` in package [MASS](https://CRAN.R-project.org/package=MASS) (Venables & Ripley, 2002 and earlier editions). The idea of a `step` function follows that described in Hastie & Pregibon (1992); but the implementation in **R** is more general. ### References Hastie, T. J. and Pregibon, D. (1992) *Generalized linear models.* Chapter 6 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S.* New York: Springer (4th ed). ### See Also `[stepAIC](../../mass/html/stepaic)` in [MASS](https://CRAN.R-project.org/package=MASS), `<add1>`, `[drop1](add1)` ### Examples ``` ## following on from example(lm) step(lm.D9) summary(lm1 <- lm(Fertility ~ ., data = swiss)) slm1 <- step(lm1) summary(slm1) slm1$anova ``` r None `KalmanLike` Kalman Filtering ------------------------------ ### Description Use Kalman Filtering to find the (Gaussian) log-likelihood, or for forecasting or smoothing. ### Usage ``` KalmanLike(y, mod, nit = 0L, update = FALSE) KalmanRun(y, mod, nit = 0L, update = FALSE) KalmanSmooth(y, mod, nit = 0L) KalmanForecast(n.ahead = 10L, mod, update = FALSE) makeARIMA(phi, theta, Delta, kappa = 1e6, SSinit = c("Gardner1980", "Rossignol2011"), tol = .Machine$double.eps) ``` ### Arguments | | | | --- | --- | | `y` | a univariate time series. | | `mod` | a list describing the state-space model: see ‘Details’. | | `nit` | the time at which the initialization is computed. `nit = 0L` implies that the initialization is for a one-step prediction, so `Pn` should not be computed at the first step. | | `update` | if `TRUE` the updated `mod` object will be returned as attribute `"mod"` of the result. | | `n.ahead` | the number of steps ahead for which prediction is required. | | `phi, theta` | numeric vectors of length *≥ 0* giving AR and MA parameters. | | `Delta` | vector of differencing coefficients, so an ARMA model is fitted to `y[t] - Delta[1]*y[t-1] - ...`. | | `kappa` | the prior variance (as a multiple of the innovations variance) for the past observations in a differenced model. | | `SSinit` | a string specifying the algorithm to compute the `Pn` part of the state-space initialization; see ‘Details’. | | `tol` | tolerance eventually passed to `[solve.default](../../base/html/solve)` when `SSinit = "Rossignol2011"`. | ### Details These functions work with a general univariate state-space model with state vector a, transitions a <- T a + R e, *e ~ N(0, kappa Q)* and observation equation y = Z'a + eta, *eta ~ N(0, kappa h)*. The likelihood is a profile likelihood after estimation of *kappa*. The model is specified as a list with at least components: `T`, the transition matrix; `Z`, the observation coefficients; `h`, the observation variance; `V`, RQR'; `a`, the current state estimate; `P`, the current estimate of the state uncertainty matrix *Q*; and `Pn`, the estimate at time *t-1* of the state uncertainty matrix *Q* (not updated by `KalmanForecast`). 
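As a concrete illustration, the following minimal sketch builds such a `mod` list with `makeARIMA` (described below) and evaluates the Gaussian log-likelihood with `KalmanLike`; the choice of series and the ARMA(1,1) parameter values are illustrative only, not prescribed by this page.

```
## a minimal sketch: state-space form of an ARMA(1,1) model,
## illustrative parameters, no differencing (Delta = numeric())
y <- log(AirPassengers)
mod <- makeARIMA(phi = 0.5, theta = -0.3, Delta = numeric())
names(mod)         # includes "T", "Z", "h", "V", "a", "P" and "Pn"
KalmanLike(y, mod) # a list with components 'Lik' and 's2'
```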
`KalmanSmooth` is the workhorse function for `[tsSmooth](tssmooth)`. `makeARIMA` constructs the state-space model for an ARIMA model, see also `<arima>`. For years, the state-space initialization used only Gardner *et al*'s method (`SSinit = "Gardner1980"`). However, that method sometimes suffers from deficiencies when close to non-stationarity. For this reason, it may be replaced as the default in the future and only kept for reproducibility reasons. Explicit specification of `SSinit` is therefore recommended, notably also in `<arima>()`. The `"Rossignol2011"` method has been proposed and partly documented by Raphael Rossignol, Univ. Grenoble, on 2011-09-20 (see PR#14682, below), and was later ported to C by Matwey V. Kornilov. It computes the covariance matrix of *(X\_{t-1},...,X\_{t-p},Z\_t,...,Z\_{t-q})* by the method of difference equations (page 93 of Brockwell and Davis), apparently suggested by a referee of Gardner *et al* (see p.314 of their paper). ### Value For `KalmanLike`, a list with components `Lik` (the log-likelihood less some constants) and `s2`, the estimate of *kappa*. For `KalmanRun`, a list with components `values`, a vector of length 2 giving the output of `KalmanLike`, `resid` (the residuals) and `states`, the contemporaneous state estimates, a matrix with one row for each observation time. For `KalmanSmooth`, a list with two components. Component `smooth` is a `n` by `p` matrix of state estimates based on all the observations, with one row for each time. Component `var` is a `n` by `p` by `p` array of variance matrices. For `KalmanForecast`, a list with components `pred`, the predictions, and `var`, the unscaled variances of the prediction errors (to be multiplied by `s2`). For `makeARIMA`, a model list including components for its arguments. ### Warning These functions are designed to be called from other functions which check the validity of the arguments passed, so very little checking is done. ### References Durbin, J. and Koopman, S. J. (2001). *Time Series Analysis by State Space Methods*. Oxford University Press. Gardner, G., Harvey, A. C. and Phillips, G. D. A. (1980). Algorithm AS 154: An algorithm for exact maximum likelihood estimation of autoregressive-moving average models by means of Kalman filtering. *Applied Statistics*, **29**, 311–322. doi: [10.2307/2346910](https://doi.org/10.2307/2346910). R bug report PR#14682 (2011-2013) <https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=14682>. ### See Also `<arima>`, `[StructTS](structts)`. `[tsSmooth](tssmooth)`. ### Examples ``` ## an ARIMA fit fit3 <- arima(presidents, c(3, 0, 0)) predict(fit3, 12) ## reconstruct this pr <- KalmanForecast(12, fit3$model) pr$pred + fit3$coef[4] sqrt(pr$var * fit3$sigma2) ## and now do it year by year mod <- fit3$model for(y in 1:3) { pr <- KalmanForecast(4, mod, TRUE) print(list(pred = pr$pred + fit3$coef["intercept"], se = sqrt(pr$var * fit3$sigma2))) mod <- attr(pr, "mod") } ``` r None `se.contrast` Standard Errors for Contrasts in Model Terms ----------------------------------------------------------- ### Description Returns the standard errors for one or more contrasts in an `aov` object. ### Usage ``` se.contrast(object, ...) ## S3 method for class 'aov' se.contrast(object, contrast.obj, coef = contr.helmert(ncol(contrast))[, 1], data = NULL, ...) ``` ### Arguments | | | | --- | --- | | `object` | A suitable fit, usually from `aov`. | | `contrast.obj` | The contrasts for which standard errors are requested. This can be specified via a list or via a matrix. 
A single contrast can be specified by a list of logical vectors giving the cells to be contrasted. Multiple contrasts should be specified by a matrix, each column of which is a numerical contrast vector (summing to zero). | | `coef` | used when `contrast.obj` is a list; it should be a vector of the same length as the list with zero sum. The default value is the first Helmert contrast, which contrasts the first and second cell means specified by the list. | | `data` | The data frame used to evaluate `contrast.obj`. | | `...` | further arguments passed to or from other methods. | ### Details Contrasts are usually used to test if certain means are significantly different; it can be easier to use `se.contrast` than compute them directly from the coefficients. In multistratum models, the contrasts can appear in more than one stratum, in which case the standard errors are computed in the lowest stratum and adjusted for efficiencies and comparisons between strata. (See the comments in the note in the help for `<aov>` about using orthogonal contrasts.) Such standard errors are often conservative. Suitable matrices for use with `coef` can be found by calling `<contrasts>` and indexing the columns by a factor. ### Value A vector giving the standard errors for each contrast. ### See Also `<contrasts>`, `<model.tables>` ### Examples ``` ## From Venables and Ripley (2002) p.165. N <- c(0,1,0,1,1,1,0,0,0,1,1,0,1,1,0,0,1,0,1,0,1,1,0,0) P <- c(1,1,0,0,0,1,0,1,1,1,0,0,0,1,0,1,1,0,0,1,0,1,1,0) K <- c(1,0,0,1,0,1,1,0,0,1,0,1,0,1,1,0,0,0,1,1,1,0,1,0) yield <- c(49.5,62.8,46.8,57.0,59.8,58.5,55.5,56.0,62.8,55.8,69.5, 55.0, 62.0,48.8,45.5,44.2,52.0,51.5,49.8,48.8,57.2,59.0,53.2,56.0) npk <- data.frame(block = gl(6,4), N = factor(N), P = factor(P), K = factor(K), yield = yield) ## Set suitable contrasts. options(contrasts = c("contr.helmert", "contr.poly")) npk.aov1 <- aov(yield ~ block + N + K, data = npk) se.contrast(npk.aov1, list(N == "0", N == "1"), data = npk) # or via a matrix cont <- matrix(c(-1,1), 2, 1, dimnames = list(NULL, "N")) se.contrast(npk.aov1, cont[N, , drop = FALSE]/12, data = npk) ## test a multi-stratum model npk.aov2 <- aov(yield ~ N + K + Error(block/(N + K)), data = npk) se.contrast(npk.aov2, list(N == "0", N == "1")) ## an example looking at an interaction contrast ## Dataset from R.E. Kirk (1995) ## 'Experimental Design: procedures for the behavioral sciences' score <- c(12, 8,10, 6, 8, 4,10,12, 8, 6,10,14, 9, 7, 9, 5,11,12, 7,13, 9, 9, 5,11, 8, 7, 3, 8,12,10,13,14,19, 9,16,14) A <- gl(2, 18, labels = c("a1", "a2")) B <- rep(gl(3, 6, labels = c("b1", "b2", "b3")), 2) fit <- aov(score ~ A*B) cont <- c(1, -1)[A] * c(1, -1, 0)[B] sum(cont) # 0 sum(cont*score) # value of the contrast se.contrast(fit, as.matrix(cont)) (t.stat <- sum(cont*score)/se.contrast(fit, as.matrix(cont))) summary(fit, split = list(B = 1:2), expand.split = TRUE) ## t.stat^2 is the F value on the A:B: C1 line (with Helmert contrasts) ## Now look at all three interaction contrasts cont <- c(1, -1)[A] * cbind(c(1, -1, 0), c(1, 0, -1), c(0, 1, -1))[B,] se.contrast(fit, cont) # same, due to balance. 
rm(A, B, score) ## multi-stratum example where efficiencies play a role ## An example from Yates (1932), ## a 2^3 design in 2 blocks replicated 4 times Block <- gl(8, 4) A <- factor(c(0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1, 0,1,0,1,0,1,0,1,0,1,0,1)) B <- factor(c(0,0,1,1,0,0,1,1,0,1,0,1,1,0,1,0,0,0,1,1, 0,0,1,1,0,0,1,1,0,0,1,1)) C <- factor(c(0,1,1,0,1,0,0,1,0,0,1,1,0,0,1,1,0,1,0,1, 1,0,1,0,0,0,1,1,1,1,0,0)) Yield <- c(101, 373, 398, 291, 312, 106, 265, 450, 106, 306, 324, 449, 272, 89, 407, 338, 87, 324, 279, 471, 323, 128, 423, 334, 131, 103, 445, 437, 324, 361, 302, 272) aovdat <- data.frame(Block, A, B, C, Yield) fit <- aov(Yield ~ A + B * C + Error(Block), data = aovdat) cont1 <- c(-1, 1)[A]/32 # Helmert contrasts cont2 <- c(-1, 1)[B] * c(-1, 1)[C]/32 cont <- cbind(A = cont1, BC = cont2) colSums(cont*Yield) # values of the contrasts se.contrast(fit, as.matrix(cont)) # comparison with lme library(nlme) fit2 <- lme(Yield ~ A + B*C, random = ~1 | Block, data = aovdat) summary(fit2)$tTable # same estimates, similar (but smaller) se's. ``` r None `power` Create a Power Link Object ----------------------------------- ### Description Creates a link object based on the link function *η = μ ^ λ*. ### Usage ``` power(lambda = 1) ``` ### Arguments | | | | --- | --- | | `lambda` | a real number. | ### Details If `lambda` is non-positive, it is taken as zero, and the log link is obtained. The default `lambda = 1` gives the identity link. ### Value A list with components `linkfun`, `linkinv`, `mu.eta`, and `valideta`. See `<make.link>` for information on their meaning. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `<make.link>`, `<family>` To raise a number to a power, see `[Arithmetic](../../base/html/arithmetic)`. To calculate the power of a test, see various functions in the stats package, e.g., `<power.t.test>`. ### Examples ``` power() quasi(link = power(1/3))[c("linkfun", "linkinv")] ``` r None `biplot` Biplot of Multivariate Data ------------------------------------- ### Description Plot a biplot on the current graphics device. ### Usage ``` biplot(x, ...) ## Default S3 method: biplot(x, y, var.axes = TRUE, col, cex = rep(par("cex"), 2), xlabs = NULL, ylabs = NULL, expand = 1, xlim = NULL, ylim = NULL, arrow.len = 0.1, main = NULL, sub = NULL, xlab = NULL, ylab = NULL, ...) ``` ### Arguments | | | | --- | --- | | `x` | The `biplot`, a fitted object. For `biplot.default`, the first set of points (a two-column matrix), usually associated with observations. | | `y` | The second set of points (a two-column matrix), usually associated with variables. | | `var.axes` | If `TRUE` the second set of points have arrows representing them as (unscaled) axes. | | `col` | A vector of length 2 giving the colours for the first and second set of points respectively (and the corresponding axes). If a single colour is specified it will be used for both sets. If missing the default colour is looked for in the `[palette](../../grdevices/html/palette)`: if found there, it and the next colour are used, otherwise the first two colours of the palette are used. | | `cex` | The character expansion factor used for labelling the points. The labels can be of different sizes for the two sets by supplying a vector of length two. | | `xlabs` | A vector of character strings to label the first set of points: the default is to use the row dimname of `x`, or `1:n` if the dimname is `NULL`. 
| | `ylabs` | A vector of character strings to label the second set of points: the default is to use the row dimname of `y`, or `1:n` if the dimname is `NULL`. | | `expand` | An expansion factor to apply when plotting the second set of points relative to the first. This can be used to tweak the scaling of the two sets to a physically comparable scale. | | `arrow.len` | The length of the arrow heads on the axes plotted if `var.axes` is true. The arrow head can be suppressed by `arrow.len = 0`. | | `xlim, ylim` | Limits for the x and y axes in the units of the first set of variables. | | `main, sub, xlab, ylab, ...` | graphical parameters. | ### Details A biplot is a plot which aims to represent both the observations and variables of a matrix of multivariate data on the same plot. There are many variations on biplots (see the references) and perhaps the most widely used one is implemented by `<biplot.princomp>`. The function `biplot.default` merely provides the underlying code to plot two sets of variables on the same figure. Graphical parameters can also be given to `biplot`: the size of `xlabs` and `ylabs` is controlled by `cex`. ### Side Effects a plot is produced on the current graphics device. ### References K. R. Gabriel (1971). The biplot graphical display of matrices with application to principal component analysis. *Biometrika*, **58**, 453–467. doi: [10.2307/2334381](https://doi.org/10.2307/2334381). J.C. Gower and D. J. Hand (1996). *Biplots*. Chapman & Hall. ### See Also `<biplot.princomp>`, also for examples.
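### Examples

A minimal sketch of calling the default method directly (the `prcomp` fit is an illustrative choice, not prescribed by this page; see `<biplot.princomp>` for the usual interface). Two two-column matrices suffice:

```
## scores as the first set of points, loadings as the second
pc <- prcomp(USArrests, scale. = TRUE)
biplot(pc$x[, 1:2], pc$rotation[, 1:2],
       main = "biplot.default on a prcomp fit")
```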
r None `princomp` Principal Components Analysis ----------------------------------------- ### Description `princomp` performs a principal components analysis on the given numeric data matrix and returns the results as an object of class `princomp`. ### Usage ``` princomp(x, ...) ## S3 method for class 'formula' princomp(formula, data = NULL, subset, na.action, ...) ## Default S3 method: princomp(x, cor = FALSE, scores = TRUE, covmat = NULL, subset = rep_len(TRUE, nrow(as.matrix(x))), fix_sign = TRUE, ...) ## S3 method for class 'princomp' predict(object, newdata, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a formula with no response variable, referring only to numeric variables. | | `data` | an optional data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector used to select rows (observations) of the data matrix `x`. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is set by the `na.action` setting of `[options](../../base/html/options)`, and is `<na.fail>` if that is unset. The ‘factory-fresh’ default is `[na.omit](na.fail)`. | | `x` | a numeric matrix or data frame which provides the data for the principal components analysis. | | `cor` | a logical value indicating whether the calculation should use the correlation matrix or the covariance matrix. (The correlation matrix can only be used if there are no constant variables.) | | `scores` | a logical value indicating whether the score on each principal component should be calculated. | | `covmat` | a covariance matrix, or a covariance list as returned by `<cov.wt>` (and `[cov.mve](../../mass/html/cov.rob)` or `[cov.mcd](../../mass/html/cov.rob)` from package [MASS](https://CRAN.R-project.org/package=MASS)). If supplied, this is used rather than the covariance matrix of `x`. | | `fix_sign` | Should the signs of the loadings and scores be chosen so that the first element of each loading is non-negative? | | `...` | arguments passed to or from other methods. If `x` is a formula one might specify `cor` or `scores`. | | `object` | Object of class inheriting from `"princomp"`. | | `newdata` | An optional data frame or matrix in which to look for variables with which to predict. If omitted, the scores are used. If the original fit used a formula or a data frame or a matrix with column names, `newdata` must contain columns with the same names. Otherwise it must contain the same number of columns, to be used in the same order. | ### Details `princomp` is a generic function with `"formula"` and `"default"` methods. The calculation is done using `[eigen](../../base/html/eigen)` on the correlation or covariance matrix, as determined by `<cor>`. This is done for compatibility with the S-PLUS result. A preferred method of calculation is to use `[svd](../../base/html/svd)` on `x`, as is done in `prcomp`. Note that the default calculation uses divisor `N` for the covariance matrix. The `[print](../../base/html/print)` method for these objects prints the results in a nice format and the `[plot](../../graphics/html/plot.default)` method produces a scree plot (`<screeplot>`). There is also a `<biplot>` method. If `x` is a formula then the standard NA-handling is applied to the scores (if requested): see `[napredict](nafns)`. `princomp` only handles so-called R-mode PCA, that is feature extraction of variables. 
If a data matrix is supplied (possibly via a formula) it is required that there are at least as many units as variables. For Q-mode PCA use `<prcomp>`. ### Value `princomp` returns a list with class `"princomp"` containing the following components: | | | | --- | --- | | `sdev` | the standard deviations of the principal components. | | `loadings` | the matrix of variable loadings (i.e., a matrix whose columns contain the eigenvectors). This is of class `"loadings"`: see `<loadings>` for its `print` method. | | `center` | the means that were subtracted. | | `scale` | the scalings applied to each variable. | | `n.obs` | the number of observations. | | `scores` | if `scores = TRUE`, the scores of the supplied data on the principal components. These are non-null only if `x` was supplied, and, if `covmat` was also supplied, only if it was a covariance list. For the formula method, `[napredict](nafns)()` is applied to handle the treatment of values omitted by the `na.action`. | | `call` | the matched call. | | `na.action` | If relevant. | ### Note The signs of the columns of the loadings and scores are arbitrary, and so may differ between different programs for PCA, and even between different builds of **R**: `fix_sign = TRUE` alleviates that. ### References Mardia, K. V., J. T. Kent and J. M. Bibby (1979). *Multivariate Analysis*, London: Academic Press. Venables, W. N. and B. D. Ripley (2002). *Modern Applied Statistics with S*, Springer-Verlag. ### See Also `<summary.princomp>`, `<screeplot>`, `<biplot.princomp>`, `<prcomp>`, `<cor>`, `[cov](cor)`, `[eigen](../../base/html/eigen)`. ### Examples ``` require(graphics) ## The variances of the variables in the ## USArrests data vary by orders of magnitude, so scaling is appropriate (pc.cr <- princomp(USArrests)) # inappropriate princomp(USArrests, cor = TRUE) # =^= prcomp(USArrests, scale=TRUE) ## Similar, but different: ## The standard deviations differ by a factor of sqrt(49/50) summary(pc.cr <- princomp(USArrests, cor = TRUE)) loadings(pc.cr) # note that blank entries are small but not zero ## The signs of the columns of the loadings are arbitrary plot(pc.cr) # shows a screeplot. biplot(pc.cr) ## Formula interface princomp(~ ., data = USArrests, cor = TRUE) ## NA-handling USArrests[1, 2] <- NA pc.cr <- princomp(~ Murder + Assault + UrbanPop, data = USArrests, na.action = na.exclude, cor = TRUE) pc.cr$scores[1:5, ] ## (Simple) Robust PCA: ## Classical: (pc.cl <- princomp(stackloss)) ## Robust: (pc.rob <- princomp(stackloss, covmat = MASS::cov.rob(stackloss))) ``` r None `ts` Time-Series Objects ------------------------- ### Description The function `ts` is used to create time-series objects. `as.ts` and `is.ts` coerce an object to a time-series and test whether an object is a time series. ### Usage ``` ts(data = NA, start = 1, end = numeric(), frequency = 1, deltat = 1, ts.eps = getOption("ts.eps"), class = , names = ) as.ts(x, ...) is.ts(x) ``` ### Arguments | | | | --- | --- | | `data` | a vector or matrix of the observed time-series values. A data frame will be coerced to a numeric matrix via `data.matrix`. (See also ‘Details’.) | | `start` | the time of the first observation. Either a single number or a vector of two numbers (the second of which is an integer), which specify a natural time unit and a (1-based) number of samples into the time unit. See the examples for the use of the second form. | | `end` | the time of the last observation, specified in the same way as `start`. | | `frequency` | the number of observations per unit of time. 
| | `deltat` | the fraction of the sampling period between successive observations; e.g., 1/12 for monthly data. Only one of `frequency` or `deltat` should be provided. | | `ts.eps` | time series comparison tolerance. Frequencies are considered equal if their absolute difference is less than `ts.eps`. | | `class` | class to be given to the result, or none if `NULL` or `"none"`. The default is `"ts"` for a single series, `c("mts", "ts", "matrix")` for multiple series. | | `names` | a character vector of names for the series in a multiple series: defaults to the colnames of `data`, or `Series 1`, `Series 2`, .... | | `x` | an arbitrary **R** object. | | `...` | arguments passed to methods (unused for the default method). | ### Details The function `ts` is used to create time-series objects. These are vectors or matrices with class of `"ts"` (and additional attributes) which represent data which has been sampled at equispaced points in time. In the matrix case, each column of the matrix `data` is assumed to contain a single (univariate) time series. Time series must have at least one observation, and although they need not be numeric there is very limited support for non-numeric series. Class `"ts"` has a number of methods. In particular arithmetic will attempt to align time axes, and subsetting to extract subsets of series can be used (e.g., `EuStockMarkets[, "DAX"]`). However, subsetting the first (or only) dimension will return a matrix or vector, as will matrix subsetting. Subassignment can be used to replace values but not to extend a series (see `<window>`). There is a method for `[t](../../base/html/t)` that transposes the series as a matrix (a one-column matrix if a vector) and hence returns a result that does not inherit from class `"ts"`. Argument `frequency` indicates the sampling frequency of the time series, with the default value `1` indicating one sample in each unit time interval. For example, one could use a value of `7` for `frequency` when the data are sampled daily, and the natural time period is a week, or `12` when the data are sampled monthly and the natural time period is a year. Values of `4` and `12` are assumed in (e.g.) `print` methods to imply a quarterly and monthly series respectively. As from **R** 4.0.0, `frequency` need not be a whole number. For example, `frequency = 0.2` would imply sampling once every five time units. `as.ts` is generic. Its default method will use the `<tsp>` attribute of the object if it has one to set the start and end times and frequency. `is.ts` tests if an object is a time series. It is generic: you can write methods to handle specific classes of objects, see [InternalMethods](../../base/html/internalmethods). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<tsp>`, `[frequency](time)`, `<start>`, `[end](start)`, `<time>`, `<window>`; `<print.ts>`, the print method for time series objects; `<plot.ts>`, the plot method for time series objects. For other definitions of ‘time series’ (e.g., time-ordered observations) see the CRAN task view at <https://CRAN.R-project.org/view=TimeSeries>. ### Examples ``` require(graphics) ts(1:10, frequency = 4, start = c(1959, 2)) # 2nd Quarter of 1959 print( ts(1:10, frequency = 7, start = c(12, 2)), calendar = TRUE) # print.ts(.) 
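## 'frequency' need not be a whole number (from R 4.0.0), e.g.,
## one observation every five time units, as noted in the Details:
ts(1:5, frequency = 0.2)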
## Using July 1954 as start date: gnp <- ts(cumsum(1 + round(rnorm(100), 2)), start = c(1954, 7), frequency = 12) plot(gnp) # using 'plot.ts' for time-series plot ## Multivariate z <- ts(matrix(rnorm(300), 100, 3), start = c(1961, 1), frequency = 12) class(z) head(z) # as "matrix" plot(z) plot(z, plot.type = "single", lty = 1:3) ## A phase plot: plot(nhtemp, lag(nhtemp, 1), cex = .8, col = "blue", main = "Lag plot of New Haven temperatures") ``` r None `predict.nls` Predicting from Nonlinear Least Squares Fits ----------------------------------------------------------- ### Description `predict.nls` produces predicted values, obtained by evaluating the regression function in the frame `newdata`. If the logical `se.fit` is `TRUE`, standard errors of the predictions are calculated. If the numeric argument `scale` is set (with optional `df`), it is used as the residual standard deviation in the computation of the standard errors, otherwise this is extracted from the model fit. Setting `interval` specifies computation of confidence or prediction (tolerance) intervals at the specified `level`. At present `se.fit` and `interval` are ignored. ### Usage ``` ## S3 method for class 'nls' predict(object, newdata, se.fit = FALSE, scale = NULL, df = Inf, interval = c("none", "confidence", "prediction"), level = 0.95, ...) ``` ### Arguments | | | | --- | --- | | `object` | An object that inherits from class `nls`. | | `newdata` | A named list or data frame in which to look for variables with which to predict. If `newdata` is missing the fitted values at the original data points are returned. | | `se.fit` | A logical value indicating if the standard errors of the predictions should be calculated. Defaults to `FALSE`. At present this argument is ignored. | | `scale` | A numeric scalar. If it is set (with optional `df`), it is used as the residual standard deviation in the computation of the standard errors, otherwise this information is extracted from the model fit. At present this argument is ignored. | | `df` | A positive numeric scalar giving the number of degrees of freedom for the `scale` estimate. At present this argument is ignored. | | `interval` | A character string indicating if prediction intervals or a confidence interval on the mean responses are to be calculated. At present this argument is ignored. | | `level` | A numeric scalar between 0 and 1 giving the confidence level for the intervals (if any) to be calculated. At present this argument is ignored. | | `...` | Additional optional arguments. At present no optional arguments are used. | ### Value `predict.nls` produces a vector of predictions. When implemented, `interval` will produce a matrix of predictions and bounds with column names `fit`, `lwr`, and `upr`. When implemented, if `se.fit` is `TRUE`, a list with the following components will be returned: | | | | --- | --- | | `fit` | vector or matrix as above | | `se.fit` | standard error of predictions | | `residual.scale` | residual standard deviations | | `df` | degrees of freedom for residual | ### Note Variables are first looked for in `newdata` and then searched for in the usual way (which will include the environment of the formula used in the fit). A warning will be given if the variables found are not of the same length as those in `newdata` if it was supplied. ### See Also The model fitting function `<nls>`, `<predict>`. 
### Examples ``` require(graphics) fm <- nls(demand ~ SSasympOrig(Time, A, lrc), data = BOD) predict(fm) # fitted values at observed times ## Form data plot and smooth line for the predictions opar <- par(las = 1) plot(demand ~ Time, data = BOD, col = 4, main = "BOD data and fitted first-order curve", xlim = c(0,7), ylim = c(0, 20) ) tt <- seq(0, 8, length.out = 101) lines(tt, predict(fm, list(Time = tt))) par(opar) ``` r None `NLSstLfAsymptote` Horizontal Asymptote on the Left Side --------------------------------------------------------- ### Description Provide an initial guess at the horizontal asymptote on the left side (i.e., small values of `x`) of the graph of `y` versus `x` from the `xy` object. Primarily used within `initial` functions for self-starting nonlinear regression models. ### Usage ``` NLSstLfAsymptote(xy) ``` ### Arguments | | | | --- | --- | | `xy` | a `sortedXyData` object | ### Value A single numeric value estimating the horizontal asymptote for small `x`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `[sortedXyData](sortedxydata)`, `[NLSstClosestX](nlsstclosestx)`, `[NLSstRtAsymptote](nlsstrtasymptote)`, `[selfStart](selfstart)` ### Examples ``` DNase.2 <- DNase[ DNase$Run == "2", ] DN.srt <- sortedXyData( expression(log(conc)), expression(density), DNase.2 ) NLSstLfAsymptote( DN.srt ) ``` r None `SSweibull` Self-Starting Nls Weibull Growth Curve Model --------------------------------------------------------- ### Description This `selfStart` model evaluates the Weibull model for growth curve data and its gradient. It has an `initial` attribute that will evaluate initial estimates of the parameters `Asym`, `Drop`, `lrc`, and `pwr` for a given set of data. ### Usage ``` SSweibull(x, Asym, Drop, lrc, pwr) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of values at which to evaluate the model. | | `Asym` | a numeric parameter representing the horizontal asymptote on the right side (very large values of `x`). | | `Drop` | a numeric parameter representing the change from `Asym` to the `y` intercept. | | `lrc` | a numeric parameter representing the natural logarithm of the rate constant. | | `pwr` | a numeric parameter representing the power to which `x` is raised. | ### Details This model is a generalization of the `[SSasymp](ssasymp)` model in that it reduces to `SSasymp` when `pwr` is unity. ### Value a numeric vector of the same length as `x`. It is the value of the expression `Asym-Drop*exp(-exp(lrc)*x^pwr)`. If all of the arguments `Asym`, `Drop`, `lrc`, and `pwr` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) Douglas Bates ### References Ratkowsky, David A. (1983), *Nonlinear Regression Modeling*, Dekker. 
(section 4.4.5) ### See Also `<nls>`, `[selfStart](selfstart)`, `[SSasymp](ssasymp)` ### Examples ``` Chick.6 <- subset(ChickWeight, (Chick == 6) & (Time > 0)) SSweibull(Chick.6$Time, 160, 115, -5.5, 2.5) # response only local({ Asym <- 160; Drop <- 115; lrc <- -5.5; pwr <- 2.5 SSweibull(Chick.6$Time, Asym, Drop, lrc, pwr) # response _and_ gradient }) ## IGNORE_RDIFF_BEGIN getInitial(weight ~ SSweibull(Time, Asym, Drop, lrc, pwr), data = Chick.6) ## IGNORE_RDIFF_END ## Initial values are in fact the converged values fm1 <- nls(weight ~ SSweibull(Time, Asym, Drop, lrc, pwr), data = Chick.6) summary(fm1) ## Data and Fit: plot(weight ~ Time, Chick.6, xlim = c(0, 21), main = "SSweibull() fit to Chick.6") ux <- par("usr")[1:2]; x <- seq(ux[1], ux[2], length.out=250) lines(x, do.call(SSweibull, c(list(x=x), coef(fm1))), col = "red", lwd=2) As <- coef(fm1)[["Asym"]]; abline(v = 0, h = c(As, As - coef(fm1)[["Drop"]]), lty = 3) ``` r None `case.names` Case and Variable Names of Fitted Models ------------------------------------------------------ ### Description Simple utilities returning (non-missing) case names, and (non-eliminated) variable names. ### Usage ``` case.names(object, ...) ## S3 method for class 'lm' case.names(object, full = FALSE, ...) variable.names(object, ...) ## S3 method for class 'lm' variable.names(object, full = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | an **R** object, typically a fitted model. | | `full` | logical; if `TRUE`, all names (including zero weights, ...) are returned. | | `...` | further arguments passed to or from other methods. | ### Value A character vector. ### See Also `<lm>`; further, `[all.names](../../base/html/allnames)`, `[all.vars](../../base/html/allnames)` for functions with a similar name but only slightly related purpose. ### Examples ``` x <- 1:20 y <- setNames(x + (x/4 - 2)^3 + rnorm(20, sd = 3), paste("O", x, sep = ".")) ww <- rep(1, 20); ww[13] <- 0 summary(lmxy <- lm(y ~ x + I(x^2)+I(x^3) + I((x-10)^2), weights = ww), correlation = TRUE) variable.names(lmxy) variable.names(lmxy, full = TRUE) # includes the last case.names(lmxy) case.names(lmxy, full = TRUE) # includes the 0-weight case ``` r None `ftable` Flat Contingency Tables --------------------------------- ### Description Create ‘flat’ contingency tables. ### Usage ``` ftable(x, ...) ## Default S3 method: ftable(..., exclude = c(NA, NaN), row.vars = NULL, col.vars = NULL) ``` ### Arguments | | | | --- | --- | | `x, ...` | **R** objects which can be interpreted as factors (including character strings), or a list (or data frame) whose components can be so interpreted, or a contingency table object of class `"table"` or `"ftable"`. | | `exclude` | values to use in the exclude argument of `factor` when interpreting non-factor objects. | | `row.vars` | a vector of integers giving the numbers of the variables, or a character vector giving the names of the variables to be used for the rows of the flat contingency table. | | `col.vars` | a vector of integers giving the numbers of the variables, or a character vector giving the names of the variables to be used for the columns of the flat contingency table. | ### Details `ftable` creates ‘flat’ contingency tables. Similar to the usual contingency tables, these contain the counts of each combination of the levels of the variables (factors) involved. 
This information is then re-arranged as a matrix whose rows and columns correspond to unique combinations of the levels of the row and column variables (as specified by `row.vars` and `col.vars`, respectively). The combinations are created by looping over the variables in reverse order (so that the levels of the left-most variable vary the slowest). Displaying a contingency table in this flat matrix form (via `print.ftable`, the print method for objects of class `"ftable"`) is often preferable to showing it as a higher-dimensional array. `ftable` is a generic function. Its default method, `ftable.default`, first creates a contingency table in array form from all arguments except `row.vars` and `col.vars`. If the first argument is of class `"table"`, it represents a contingency table and is used as is; if it is a flat table of class `"ftable"`, the information it contains is converted to the usual array representation using `as.table`. Otherwise, the arguments should be **R** objects which can be interpreted as factors (including character strings), or a list (or data frame) whose components can be so interpreted, which are cross-tabulated using `[table](../../base/html/table)`. Then, the arguments `row.vars` and `col.vars` are used to collapse the contingency table into flat form. If neither of these two is given, the last variable is used for the columns. If both are given and their union is a proper subset of all variables involved, the other variables are summed out. When the arguments are **R** expressions interpreted as factors, additional arguments will be passed to `table` to control how the variable names are displayed; see the last example below. Function `<ftable.formula>` provides a formula method for creating flat contingency tables. There are methods for `[as.table](../../base/html/table)`, `[as.matrix](../../base/html/matrix)` and `[as.data.frame](../../base/html/as.data.frame)`. ### Value `ftable` returns an object of class `"ftable"`, which is a matrix with counts of each combination of the levels of variables with information on the names and levels of the (row and columns) variables stored as attributes `"row.vars"` and `"col.vars"`. ### See Also `<ftable.formula>` for the formula interface (which allows a `data = .` argument); `<read.ftable>` for information on reading, writing and coercing flat contingency tables; `[table](../../base/html/table)` for ordinary cross-tabulation; `<xtabs>` for formula-based cross-tabulation. ### Examples ``` ## Start with a contingency table. ftable(Titanic, row.vars = 1:3) ftable(Titanic, row.vars = 1:2, col.vars = "Survived") ftable(Titanic, row.vars = 2:1, col.vars = "Survived") ## Start with a data frame. x <- ftable(mtcars[c("cyl", "vs", "am", "gear")]) x ftable(x, row.vars = c(2, 4)) ## Start with expressions, use table()'s "dnn" to change labels ftable(mtcars$cyl, mtcars$vs, mtcars$am, mtcars$gear, row.vars = c(2, 4), dnn = c("Cylinders", "V/S", "Transmission", "Gears")) ```
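The coercion methods mentioned in the ‘Details’ can be used to move back from the flat representation; a minimal sketch, reusing the `Titanic` table from the examples above:

```
ft <- ftable(Titanic, row.vars = 1:3)
as.table(ft)            # back to the array representation
head(as.data.frame(ft)) # one row per cell, counts in column 'Freq'
```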
r None `heatmap` Draw a Heat Map -------------------------- ### Description A heat map is a false color image (basically `[image](../../graphics/html/image)(t(x))`) with a dendrogram added to the left side and to the top. Typically, reordering of the rows and columns according to some set of values (row or column means) within the restrictions imposed by the dendrogram is carried out. ### Usage ``` heatmap(x, Rowv = NULL, Colv = if(symm)"Rowv" else NULL, distfun = dist, hclustfun = hclust, reorderfun = function(d, w) reorder(d, w), add.expr, symm = FALSE, revC = identical(Colv, "Rowv"), scale = c("row", "column", "none"), na.rm = TRUE, margins = c(5, 5), ColSideColors, RowSideColors, cexRow = 0.2 + 1/log10(nr), cexCol = 0.2 + 1/log10(nc), labRow = NULL, labCol = NULL, main = NULL, xlab = NULL, ylab = NULL, keep.dendro = FALSE, verbose = getOption("verbose"), ...) ``` ### Arguments | | | | --- | --- | | `x` | numeric matrix of the values to be plotted. | | `Rowv` | determines if and how the *row* dendrogram should be computed and reordered. Either a `<dendrogram>` or a vector of values used to reorder the row dendrogram or `[NA](../../base/html/na)` to suppress any row dendrogram (and reordering) or by default, `[NULL](../../base/html/null)`, see ‘Details’ below. | | `Colv` | determines if and how the *column* dendrogram should be reordered. Has the same options as the `Rowv` argument above and *additionally* when `x` is a square matrix, `Colv = "Rowv"` means that columns should be treated identically to the rows (and so if there is to be no row dendrogram there will not be a column one either). | | `distfun` | function used to compute the distance (dissimilarity) between both rows and columns. Defaults to `<dist>`. | | `hclustfun` | function used to compute the hierarchical clustering when `Rowv` or `Colv` are not dendrograms. Defaults to `<hclust>`. Should take as argument a result of `distfun` and return an object to which `[as.dendrogram](dendrogram)` can be applied. | | `reorderfun` | `function(d, w)` of dendrogram and weights for reordering the row and column dendrograms. The default uses `<reorder.dendrogram>`. | | `add.expr` | expression that will be evaluated after the call to `image`. Can be used to add components to the plot. | | `symm` | logical indicating if `x` should be treated **symm**etrically; can only be true when `x` is a square matrix. | | `revC` | logical indicating if the column order should be `[rev](../../base/html/rev)`ersed for plotting, such that e.g., for the symmetric case, the symmetry axis is as usual. | | `scale` | character indicating if the values should be centered and scaled in either the row direction or the column direction, or none. The default is `"row"` if `symm` is false, and `"none"` otherwise. | | `na.rm` | logical indicating whether `NA`'s should be removed. | | `margins` | numeric vector of length 2 containing the margins (see `[par](../../graphics/html/par)(mar = *)`) for column and row names, respectively. | | `ColSideColors` | (optional) character vector of length `ncol(x)` containing the color names for a horizontal side bar that may be used to annotate the columns of `x`. | | `RowSideColors` | (optional) character vector of length `nrow(x)` containing the color names for a vertical side bar that may be used to annotate the rows of `x`. | | `cexRow, cexCol` | positive numbers, used as `cex.axis` for the row or column axis labeling. The defaults currently only use number of rows or columns, respectively. 
| | `labRow, labCol` | character vectors with row and column labels to use; these default to `rownames(x)` or `colnames(x)`, respectively. | | `main, xlab, ylab` | main, x- and y-axis titles; defaults to none. | | `keep.dendro` | logical indicating if the dendrogram(s) should be kept as part of the result (when `Rowv` and/or `Colv` are not NA). | | `verbose` | logical indicating if information should be printed. | | `...` | additional arguments passed on to `[image](../../graphics/html/image)`, e.g., `col` specifying the colors. | ### Details If either `Rowv` or `Colv` are dendrograms they are honored (and not reordered). Otherwise, dendrograms are computed as `dd <- as.dendrogram(hclustfun(distfun(X)))` where `X` is either `x` or `t(x)`. If either is a vector (of ‘weights’) then the appropriate dendrogram is reordered according to the supplied values subject to the constraints imposed by the dendrogram, by `[reorder](reorder.factor)(dd, Rowv)`, in the row case. If either is missing, as by default, then the ordering of the corresponding dendrogram is by the mean value of the rows/columns, i.e., in the case of rows, `Rowv <- rowMeans(x, na.rm = na.rm)`. If either is `[NA](../../base/html/na)`, *no reordering* will be done for the corresponding side. By default (`scale = "row"`) the rows are scaled to have mean zero and standard deviation one. There is some empirical evidence from genomic plotting that this is useful. The default colors are not pretty. Consider using enhancements such as the [RColorBrewer](https://CRAN.R-project.org/package=RColorBrewer) package. ### Value Invisibly, a list with components | | | | --- | --- | | `rowInd` | **r**ow index permutation vector as returned by `<order.dendrogram>`. | | `colInd` | **c**olumn index permutation vector. | | `Rowv` | the row dendrogram; only if input `Rowv` was not NA and `keep.dendro` is true. | | `Colv` | the column dendrogram; only if input `Colv` was not NA and `keep.dendro` is true. | ### Note Unless `Rowv = NA` (or `Colv = NA`), the original rows and columns are reordered *in any case* to match the dendrogram, e.g., the rows by `<order.dendrogram>(Rowv)` where `Rowv` is the (possibly `[reorder](reorder.factor)()`ed) row dendrogram. `heatmap()` uses `[layout](../../graphics/html/layout)` and draws the `[image](../../graphics/html/image)` in the lower right corner of a 2x2 layout. Consequently, it can **not** be used in a multi column/row layout, i.e., when `[par](../../graphics/html/par)(mfrow = *)` or `(mfcol = *)` has been called. ### Author(s) Andy Liaw, original; R. Gentleman, M. Maechler, W. Huber, revisions. 
### See Also `[image](../../graphics/html/image)`, `<hclust>` ### Examples ``` require(graphics); require(grDevices) x <- as.matrix(mtcars) rc <- rainbow(nrow(x), start = 0, end = .3) cc <- rainbow(ncol(x), start = 0, end = .3) hv <- heatmap(x, col = cm.colors(256), scale = "column", RowSideColors = rc, ColSideColors = cc, margins = c(5,10), xlab = "specification variables", ylab = "Car Models", main = "heatmap(<Mtcars data>, ..., scale = \"column\")") utils::str(hv) # the two re-ordering index vectors ## no column dendrogram (nor reordering) at all: heatmap(x, Colv = NA, col = cm.colors(256), scale = "column", RowSideColors = rc, margins = c(5,10), xlab = "specification variables", ylab = "Car Models", main = "heatmap(<Mtcars data>, ..., scale = \"column\")") ## "no nothing" heatmap(x, Rowv = NA, Colv = NA, scale = "column", main = "heatmap(*, NA, NA) ~= image(t(x))") round(Ca <- cor(attitude), 2) symnum(Ca) # simple graphic heatmap(Ca, symm = TRUE, margins = c(6,6)) # with reorder() heatmap(Ca, Rowv = FALSE, symm = TRUE, margins = c(6,6)) # _NO_ reorder() ## slightly artificial with color bar, without and with ordering: cc <- rainbow(nrow(Ca)) heatmap(Ca, Rowv = FALSE, symm = TRUE, RowSideColors = cc, ColSideColors = cc, margins = c(6,6)) heatmap(Ca, symm = TRUE, RowSideColors = cc, ColSideColors = cc, margins = c(6,6)) ## For variable clustering, rather use distance based on cor(): symnum( cU <- cor(USJudgeRatings) ) hU <- heatmap(cU, Rowv = FALSE, symm = TRUE, col = topo.colors(16), distfun = function(c) as.dist(1 - c), keep.dendro = TRUE) ## The Correlation matrix with same reordering: round(100 * cU[hU[[1]], hU[[2]]]) ## The column dendrogram: utils::str(hU$Colv) ``` r None `TDist` The Student t Distribution ----------------------------------- ### Description Density, distribution function, quantile function and random generation for the t distribution with `df` degrees of freedom (and optional non-centrality parameter `ncp`). ### Usage ``` dt(x, df, ncp, log = FALSE) pt(q, df, ncp, lower.tail = TRUE, log.p = FALSE) qt(p, df, ncp, lower.tail = TRUE, log.p = FALSE) rt(n, df, ncp) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `df` | degrees of freedom (*> 0*, maybe non-integer). `df = Inf` is allowed. | | `ncp` | non-centrality parameter *delta*; currently except for `rt()`, only for `abs(ncp) <= 37.62`. If omitted, use the central t distribution. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The *t* distribution with `df` *= n* degrees of freedom has density *f(x) = Γ((n+1)/2) / (√(n π) Γ(n/2)) (1 + x^2/n)^-((n+1)/2)* for all real *x*. It has mean *0* (for *n > 1*) and variance *n/(n-2)* (for *n > 2*). The general *non-central* *t* with parameters *(df, Del)* `= (df, ncp)` is defined as the distribution of *T(df, Del) := (U + Del) / √(V/df)* where *U* and *V* are independent random variables, *U ~ N(0,1)* and *V ~ χ^2(df)* (see [Chisquare](chisquare)). The most used applications are power calculations for *t*-tests: Let *T= (mX - m0) / (S/sqrt(n))* where *mX* is the `[mean](../../base/html/mean)` and *S* the sample standard deviation (`<sd>`) of *X\_1, X\_2, …, X\_n* which are i.i.d. 
*N(μ, σ^2)*. Then *T* is distributed as non-central *t* with `df`*= n - 1* degrees of freedom and **n**on-**c**entrality **p**arameter `ncp` *= (μ - m0) \* sqrt(n)/σ*. ### Value `dt` gives the density, `pt` gives the distribution function, `qt` gives the quantile function, and `rt` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rt`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note Supplying `ncp = 0` uses the algorithm for the non-central distribution, which is not the same algorithm used if `ncp` is omitted. This is to give consistent behaviour in extreme cases with values of `ncp` very near zero. The code for non-zero `ncp` is principally intended to be used for moderate values of `ncp`: it will not be highly accurate, especially in the tails, for large values. ### Source The central `dt` is computed via an accurate formula provided by Catherine Loader (see the reference in `[dbinom](family)`). For the non-central case of `dt`, C code contributed by Claus Ekstrøm based on the relationship (for *x != 0*) to the cumulative distribution. For the central case of `pt`, a normal approximation in the tails, otherwise via `[pbeta](beta)`. For the non-central case of `pt` based on a C translation of Lenth, R. V. (1989). *Algorithm AS 243* — Cumulative distribution function of the non-central *t* distribution, *Applied Statistics* **38**, 185–189. This computes the lower tail only, so the upper tail suffers from cancellation and a warning will be given when this is likely to be significant. For central `qt`, a C translation of Hill, G. W. (1970) Algorithm 396: Student's t-quantiles. *Communications of the ACM*, **13(10)**, 619–620, altered to take account of Hill, G. W. (1981) Remark on Algorithm 396, *ACM Transactions on Mathematical Software*, **7**, 250–1. The non-central case is done by inversion. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. (Except non-central versions.) Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 2, chapters 28 and 31. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions, including `[df](fdist)` for the F distribution. ### Examples ``` require(graphics) 1 - pt(1:5, df = 1) qt(.975, df = c(1:10,20,50,100,1000)) tt <- seq(0, 10, length.out = 21) ncp <- seq(0, 6, length.out = 31) ptn <- outer(tt, ncp, function(t, d) pt(t, df = 3, ncp = d)) t.tit <- "Non-central t - Probabilities" image(tt, ncp, ptn, zlim = c(0,1), main = t.tit) persp(tt, ncp, ptn, zlim = 0:1, r = 2, phi = 20, theta = 200, main = t.tit, xlab = "t", ylab = "non-centrality parameter", zlab = "Pr(T <= t)") plot(function(x) dt(x, df = 3, ncp = 2), -3, 11, ylim = c(0, 0.32), main = "Non-central t - Density", yaxs = "i") ``` r None `prop.trend.test` Test for trend in proportions ------------------------------------------------ ### Description Performs the chi-squared test for trend in proportions, i.e., a test asymptotically optimal for local alternatives where the log odds vary in proportion with `score`. By default, `score` is chosen as the group numbers. 
### Usage ``` prop.trend.test(x, n, score = seq_along(x)) ``` ### Arguments | | | | --- | --- | | `x` | Number of events | | `n` | Number of trials | | `score` | Group score | ### Value An object of class `"htest"` with title, test statistic, p-value, etc. ### Note This really should get integrated with `prop.test` ### Author(s) Peter Dalgaard ### See Also `<prop.test>` ### Examples ``` smokers <- c( 83, 90, 129, 70 ) patients <- c( 86, 93, 136, 82 ) prop.test(smokers, patients) prop.trend.test(smokers, patients) prop.trend.test(smokers, patients, c(0,0,0,1)) ``` r None `relevel` Reorder Levels of Factor ----------------------------------- ### Description The levels of a factor are re-ordered so that the level specified by `ref` is first and the others are moved down. This is useful for `contr.treatment` contrasts which take the first level as the reference. ### Usage ``` relevel(x, ref, ...) ``` ### Arguments | | | | --- | --- | | `x` | an unordered factor. | | `ref` | the reference level, typically a string. | | `...` | additional arguments for future methods. | ### Details This, as `[reorder](reorder.factor)()`, is a special case of simply calling `[factor](../../base/html/factor)(x, levels = levels(x)[....])`. ### Value A factor of the same length as `x`. ### See Also `[factor](../../base/html/factor)`, `[contr.treatment](contrast)`, `[levels](../../base/html/levels)`, `[reorder](reorder.factor)`. ### Examples ``` warpbreaks$tension <- relevel(warpbreaks$tension, ref = "M") summary(lm(breaks ~ wool + tension, data = warpbreaks)) ``` r None `Normal` The Normal Distribution --------------------------------- ### Description Density, distribution function, quantile function and random generation for the normal distribution with mean equal to `mean` and standard deviation equal to `sd`. ### Usage ``` dnorm(x, mean = 0, sd = 1, log = FALSE) pnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) qnorm(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE) rnorm(n, mean = 0, sd = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `mean` | vector of means. | | `sd` | vector of standard deviations. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]* otherwise, *P[X > x]*. | ### Details If `mean` or `sd` are not specified they assume the default values of `0` and `1`, respectively. The normal distribution has density *f(x) = 1/(√(2 π) σ) e^-((x - μ)^2/(2 σ^2))* where *μ* is the mean of the distribution and *σ* the standard deviation. ### Value `dnorm` gives the density, `pnorm` gives the distribution function, `qnorm` gives the quantile function, and `rnorm` generates random deviates. The length of the result is determined by `n` for `rnorm`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. For `sd = 0` this gives the limit as `sd` decreases to 0, a point mass at `mu`. `sd < 0` is an error and returns `NaN`. ### Source For `pnorm`, based on Cody, W. D. (1993) Algorithm 715: SPECFUN – A portable FORTRAN package of special function routines and test drivers. *ACM Transactions on Mathematical Software* **19**, 22–32. 
For `qnorm`, the code is a C translation of Wichura, M. J. (1988) Algorithm AS 241: The percentage points of the normal distribution. *Applied Statistics*, **37**, 477–484, which provides precise results up to about 16 digits. For `rnorm`, see [RNG](../../base/html/random) for how to select the algorithm and for references to the supplied methods. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 1, chapter 13. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions, including `[dlnorm](lognormal)` for the *Log*normal distribution. ### Examples ``` require(graphics) dnorm(0) == 1/sqrt(2*pi) dnorm(1) == exp(-1/2)/sqrt(2*pi) dnorm(1) == 1/sqrt(2*pi*exp(1)) ## Using "log = TRUE" for an extended range : par(mfrow = c(2,1)) plot(function(x) dnorm(x, log = TRUE), -60, 50, main = "log { Normal density }") curve(log(dnorm(x)), add = TRUE, col = "red", lwd = 2) mtext("dnorm(x, log=TRUE)", adj = 0) mtext("log(dnorm(x))", col = "red", adj = 1) plot(function(x) pnorm(x, log.p = TRUE), -50, 10, main = "log { Normal Cumulative }") curve(log(pnorm(x)), add = TRUE, col = "red", lwd = 2) mtext("pnorm(x, log=TRUE)", adj = 0) mtext("log(pnorm(x))", col = "red", adj = 1) ## if you want the so-called 'error function' erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1 ## (see Abramowitz and Stegun 29.2.29) ## and the so-called 'complementary error function' erfc <- function(x) 2 * pnorm(x * sqrt(2), lower = FALSE) ## and the inverses erfinv <- function (x) qnorm((1 + x)/2)/sqrt(2) erfcinv <- function (x) qnorm(x/2, lower = FALSE)/sqrt(2) ``` r None `ansari.test` Ansari-Bradley Test ---------------------------------- ### Description Performs the Ansari-Bradley two-sample test for a difference in scale parameters. ### Usage ``` ansari.test(x, ...) ## Default S3 method: ansari.test(x, y, alternative = c("two.sided", "less", "greater"), exact = NULL, conf.int = FALSE, conf.level = 0.95, ...) ## S3 method for class 'formula' ansari.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector of data values. | | `y` | numeric vector of data values. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"`, `"greater"` or `"less"`. You can specify just the initial letter. | | `exact` | a logical indicating whether an exact p-value should be computed. | | `conf.int` | a logical indicating whether a confidence interval should be computed. | | `conf.level` | confidence level of the interval. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` is a numeric variable giving the data values and `rhs` a factor with two levels giving the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. 
| ### Details Suppose that `x` and `y` are independent samples from distributions with densities *f((t-m)/s)/s* and *f(t-m)*, respectively, where *m* is an unknown nuisance parameter and *s*, the ratio of scales, is the parameter of interest. The Ansari-Bradley test is used for testing the null that *s* equals 1, the two-sided alternative being that *s != 1* (the distributions differ only in variance), and the one-sided alternatives being *s > 1* (the distribution underlying `x` has a larger variance, `"greater"`) or *s < 1* (`"less"`). By default (if `exact` is not specified), an exact p-value is computed if both samples contain fewer than 50 finite values and there are no ties. Otherwise, a normal approximation is used. Optionally, a nonparametric confidence interval and an estimator for *s* are computed. If exact p-values are available, an exact confidence interval is obtained by the algorithm described in Bauer (1972), and the Hodges-Lehmann estimator is employed. Otherwise, the returned confidence interval and point estimate are based on normal approximations. Note that mid-ranks are used in the case of ties rather than average scores as employed in Hollander & Wolfe (1973). See, e.g., Hajek, Sidak and Sen (1999), pages 131ff, for more information. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the Ansari-Bradley test statistic. | | `p.value` | the p-value of the test. | | `null.value` | the ratio of scales *s* under the null, 1. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | the string `"Ansari-Bradley test"`. | | `data.name` | a character string giving the names of the data. | | `conf.int` | a confidence interval for the scale parameter. (Only present if argument `conf.int = TRUE`.) | | `estimate` | an estimate of the ratio of scales. (Only present if argument `conf.int = TRUE`.) | ### Note To compare results of the Ansari-Bradley test to those of the F test to compare two variances (under the assumption of normality), observe that *s* is the ratio of scales and hence *s^2* is the ratio of variances (provided they exist), whereas for the F test the ratio of variances itself is the parameter of interest. In particular, confidence intervals are for *s* in the Ansari-Bradley test but for *s^2* in the F test. ### References David F. Bauer (1972). Constructing confidence sets using rank statistics. *Journal of the American Statistical Association*, **67**, 687–690. doi: [10.1080/01621459.1972.10481279](https://doi.org/10.1080/01621459.1972.10481279). Jaroslav Hajek, Zbynek Sidak and Pranab K. Sen (1999). *Theory of Rank Tests*. San Diego, London: Academic Press. Myles Hollander and Douglas A. Wolfe (1973). *Nonparametric Statistical Methods*. New York: John Wiley & Sons. Pages 83–92. ### See Also `<fligner.test>` for a rank-based (nonparametric) *k*-sample test for homogeneity of variances; `<mood.test>` for another rank-based two-sample test for a difference in scale parameters; `<var.test>` and `<bartlett.test>` for parametric tests for the homogeneity in variance. `[ansari_test](../../coin/html/scaletests)` in package [coin](https://CRAN.R-project.org/package=coin) for exact and approximate *conditional* p-values for the Ansari-Bradley test, as well as different methods for handling ties. ### Examples ``` ## Hollander & Wolfe (1973, p.
86f): ## Serum iron determination using Hyland control sera ramsay <- c(111, 107, 100, 99, 102, 106, 109, 108, 104, 99, 101, 96, 97, 102, 107, 113, 116, 113, 110, 98) jung.parekh <- c(107, 108, 106, 98, 105, 103, 110, 105, 104, 100, 96, 108, 103, 104, 114, 114, 113, 108, 106, 99) ansari.test(ramsay, jung.parekh) ansari.test(rnorm(10), rnorm(10, 0, 2), conf.int = TRUE) ## try more points - failed in 2.4.1 ansari.test(rnorm(100), rnorm(100, 0, 2), conf.int = TRUE) ```
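To make the Note above concrete, here is a minimal sketch (simulated data, so the exact numbers vary with the RNG state): `ansari.test()` returns a confidence interval for the ratio of scales *s*, while `<var.test>` returns one for the variance ratio *s^2*, so taking square roots of the F-test interval puts the two on roughly comparable footing.

```
## sketch: Ansari-Bradley CI (for s) vs. square root of the F-test CI (for s^2)
set.seed(123)
x <- rnorm(30)          # scale 1
y <- rnorm(30, sd = 2)  # scale 2, so s = 0.5 (scale of x relative to y)
ansari.test(x, y, conf.int = TRUE)$conf.int  # interval for s
sqrt(var.test(x, y)$conf.int)                # interval for s via the F test
```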
r None `summary.nls` Summarizing Non-Linear Least-Squares Model Fits -------------------------------------------------------------- ### Description `summary` method for class `"nls"`. ### Usage ``` ## S3 method for class 'nls' summary(object, correlation = FALSE, symbolic.cor = FALSE, ...) ## S3 method for class 'summary.nls' print(x, digits = max(3, getOption("digits") - 3), symbolic.cor = x$symbolic.cor, signif.stars = getOption("show.signif.stars"), ...) ``` ### Arguments | | | | --- | --- | | `object` | an object of class `"nls"`. | | `x` | an object of class `"summary.nls"`, usually the result of a call to `summary.nls`. | | `correlation` | logical; if `TRUE`, the correlation matrix of the estimated parameters is returned and printed. | | `digits` | the number of significant digits to use when printing. | | `symbolic.cor` | logical. If `TRUE`, print the correlations in a symbolic form (see `<symnum>`) rather than as numbers. | | `signif.stars` | logical. If `TRUE`, ‘significance stars’ are printed for each coefficient. | | `...` | further arguments passed to or from other methods. | ### Details The distribution theory used to find the distribution of the standard errors and of the residual standard error (for t ratios) is based on linearization and is approximate, maybe very approximate. `print.summary.nls` tries to be smart about formatting the coefficients, standard errors, etc. and additionally gives ‘significance stars’ if `signif.stars` is `TRUE`. Correlations are printed to two decimal places (or symbolically): to see the actual correlations print `summary(object)$correlation` directly. ### Value The function `summary.nls` computes and returns a list of summary statistics of the fitted model given in `object`, using the component `"formula"` from its argument, plus | | | | --- | --- | | `residuals` | the *weighted* residuals, the usual residuals rescaled by the square root of the weights specified in the call to `nls`. | | `coefficients` | a *p x 4* matrix with columns for the estimated coefficient, its standard error, t-statistic and corresponding (two-sided) p-value. | | `sigma` | the square root of the estimated variance of the random error *σ^2 = 1/(n-p) Sum(R[i]^2),* where *R[i]* is the *i*-th weighted residual. | | `df` | degrees of freedom, a 2-vector *(p, n-p)*. (Here and elsewhere *n* omits observations with zero weights.) | | `cov.unscaled` | a *p x p* matrix of (unscaled) covariances of the parameter estimates. | | `correlation` | the correlation matrix corresponding to the above `cov.unscaled`, if `correlation = TRUE` is specified and there are a non-zero number of residual degrees of freedom. | | `symbolic.cor` | (only if `correlation` is true.) The value of the argument `symbolic.cor`. | ### See Also The model fitting function `<nls>`, `[summary](../../base/html/summary)`. Function `<coef>` will extract the matrix of coefficients with standard errors, t-statistics and p-values. r None `prcomp` Principal Components Analysis --------------------------------------- ### Description Performs a principal components analysis on the given data matrix and returns the results as an object of class `prcomp`. ### Usage ``` prcomp(x, ...) ## S3 method for class 'formula' prcomp(formula, data = NULL, subset, na.action, ...) ## Default S3 method: prcomp(x, retx = TRUE, center = TRUE, scale. = FALSE, tol = NULL, rank. = NULL, ...) ## S3 method for class 'prcomp' predict(object, newdata, ...) 
``` ### Arguments | | | | --- | --- | | `formula` | a formula with no response variable, referring only to numeric variables. | | `data` | an optional data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector used to select rows (observations) of the data matrix `x`. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is set by the `na.action` setting of `[options](../../base/html/options)`, and is `<na.fail>` if that is unset. The ‘factory-fresh’ default is `[na.omit](na.fail)`. | | `...` | arguments passed to or from other methods. If `x` is a formula one might specify `scale.` or `tol`. | | `x` | a numeric or complex matrix (or data frame) which provides the data for the principal components analysis. | | `retx` | a logical value indicating whether the rotated variables should be returned. | | `center` | a logical value indicating whether the variables should be shifted to be zero centered. Alternatively, a vector of length equal to the number of columns of `x` can be supplied. The value is passed to `scale`. | | `scale.` | a logical value indicating whether the variables should be scaled to have unit variance before the analysis takes place. The default is `FALSE` for consistency with S, but in general scaling is advisable. Alternatively, a vector of length equal to the number of columns of `x` can be supplied. The value is passed to `[scale](../../base/html/scale)`. | | `tol` | a value indicating the magnitude below which components should be omitted. (Components are omitted if their standard deviations are less than or equal to `tol` times the standard deviation of the first component.) With the default null setting, no components are omitted (unless `rank.` is specified to be less than `min(dim(x))`). Other settings for tol could be `tol = 0` or `tol = sqrt(.Machine$double.eps)`, which would omit essentially constant components. | | `rank.` | optionally, a number specifying the maximal rank, i.e., maximal number of principal components to be used. Can be set as alternative or in addition to `tol`, useful notably when the desired rank is considerably smaller than the dimensions of the matrix. | | `object` | object of class inheriting from `"prcomp"` | | `newdata` | An optional data frame or matrix in which to look for variables with which to predict. If omitted, the scores are used. If the original fit used a formula or a data frame or a matrix with column names, `newdata` must contain columns with the same names. Otherwise it must contain the same number of columns, to be used in the same order. | ### Details The calculation is done by a singular value decomposition of the (centered and possibly scaled) data matrix, not by using `eigen` on the covariance matrix. This is generally the preferred method for numerical accuracy. The `print` method for these objects prints the results in a nice format and the `plot` method produces a scree plot. Unlike `<princomp>`, variances are computed with the usual divisor *N - 1*. Note that `scale = TRUE` cannot be used if there are zero or constant (for `center = TRUE`) variables.
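As a minimal sketch of the SVD relation just described (illustrative, not the internal code): for a column-centred data matrix X with singular value decomposition X = U D V', the loadings are the columns of V and the standard deviations are the singular values divided by *sqrt(N - 1)*.

```
X <- scale(USArrests, center = TRUE, scale = FALSE)  # centred data matrix
s <- svd(X)
p <- prcomp(USArrests)
stopifnot(all.equal(p$sdev, s$d / sqrt(nrow(X) - 1)),
          all.equal(abs(unname(p$rotation)), abs(s$v)))  # signs are arbitrary
```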
### Value `prcomp` returns a list with class `"prcomp"` containing the following components: | | | | --- | --- | | `sdev` | the standard deviations of the principal components (i.e., the square roots of the eigenvalues of the covariance/correlation matrix, though the calculation is actually done with the singular values of the data matrix). | | `rotation` | the matrix of variable loadings (i.e., a matrix whose columns contain the eigenvectors). The function `princomp` returns this in the element `loadings`. | | `x` | if `retx` is true the value of the rotated data (the centred (and scaled if requested) data multiplied by the `rotation` matrix) is returned. Hence, `cov(x)` is the diagonal matrix `diag(sdev^2)`. For the formula method, `[napredict](nafns)()` is applied to handle the treatment of values omitted by the `na.action`. | | `center, scale` | the centering and scaling used, or `FALSE`. | ### Note The signs of the columns of the rotation matrix are arbitrary, and so may differ between different programs for PCA, and even between different builds of **R**. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Mardia, K. V., J. T. Kent, and J. M. Bibby (1979) *Multivariate Analysis*, London: Academic Press. Venables, W. N. and B. D. Ripley (2002) *Modern Applied Statistics with S*, Springer-Verlag. ### See Also `[biplot.prcomp](biplot.princomp)`, `<screeplot>`, `<princomp>`, `<cor>`, `[cov](cor)`, `[svd](../../base/html/svd)`, `[eigen](../../base/html/eigen)`. ### Examples ``` C <- chol(S <- toeplitz(.9 ^ (0:31))) # Cov.matrix and its root all.equal(S, crossprod(C)) set.seed(17) X <- matrix(rnorm(32000), 1000, 32) Z <- X %*% C ## ==> cov(Z) ~= C'C = S all.equal(cov(Z), S, tolerance = 0.08) pZ <- prcomp(Z, tol = 0.1) summary(pZ) # only ~14 PCs (out of 32) ## or choose only 3 PCs more directly: pz3 <- prcomp(Z, rank. = 3) summary(pz3) # same numbers as the first 3 above stopifnot(ncol(pZ$rotation) == 14, ncol(pz3$rotation) == 3, all.equal(pz3$sdev, pZ$sdev, tolerance = 1e-15)) # exactly equal typically ## signs are random require(graphics) ## the variances of the variables in the ## USArrests data vary by orders of magnitude, so scaling is appropriate prcomp(USArrests) # inappropriate prcomp(USArrests, scale. = TRUE) prcomp(~ Murder + Assault + Rape, data = USArrests, scale. = TRUE) plot(prcomp(USArrests)) summary(prcomp(USArrests, scale. = TRUE)) biplot(prcomp(USArrests, scale. = TRUE)) ``` r None `power.prop.test` Power Calculations for Two-Sample Test for Proportions ------------------------------------------------------------------------- ### Description Compute the power of the two-sample test for proportions, or determine parameters to obtain a target power. ### Usage ``` power.prop.test(n = NULL, p1 = NULL, p2 = NULL, sig.level = 0.05, power = NULL, alternative = c("two.sided", "one.sided"), strict = FALSE, tol = .Machine$double.eps^0.25) ``` ### Arguments | | | | --- | --- | | `n` | number of observations (per group) | | `p1` | probability in one group | | `p2` | probability in other group | | `sig.level` | significance level (Type I error probability) | | `power` | power of test (1 minus Type II error probability) | | `alternative` | one- or two-sided test. Can be abbreviated. | | `strict` | use strict interpretation in two-sided case | | `tol` | numerical tolerance used in root finding, the default providing (at least) four significant digits. 
| ### Details Exactly one of the parameters `n`, `p1`, `p2`, `power`, and `sig.level` must be passed as NULL, and that parameter is determined from the others. Notice that `sig.level` has a non-NULL default so `NULL` must be explicitly passed if you want it computed. If `strict = TRUE` is used, the power will include the probability of rejection in the opposite direction of the true effect, in the two-sided case. Without this the power will be half the significance level if the true difference is zero. Note that not all conditions can be satisfied, e.g., for ``` power.prop.test(n=30, p1=0.90, p2=NULL, power=0.8, strict=TRUE) ``` there is no proportion `p2` between `p1 = 0.9` and 1, as you'd need a sample size of at least *n = 74* to yield the desired power for *(p1,p2) = (0.9, 1)*. For these impossible conditions, currently a warning (`[warning](../../base/html/warning)`) is signalled which may become an error (`[stop](../../base/html/stop)`) in the future. ### Value Object of class `"power.htest"`, a list of the arguments (including the computed one) augmented with `method` and `note` elements. ### Note `<uniroot>` is used to solve power equation for unknowns, so you may see errors from it, notably about inability to bracket the root when invalid arguments are given. If one of `p1` and `p2` is computed, then *p1 < p2* is assumed and will hold, but if you specify both, *p2 <= p1* is allowed. ### Author(s) Peter Dalgaard. Based on previous work by Claus Ekstrøm ### See Also `<prop.test>`, `<uniroot>` ### Examples ``` power.prop.test(n = 50, p1 = .50, p2 = .75) ## => power = 0.740 power.prop.test(p1 = .50, p2 = .75, power = .90) ## => n = 76.7 power.prop.test(n = 50, p1 = .5, power = .90) ## => p2 = 0.8026 power.prop.test(n = 50, p1 = .5, p2 = 0.9, power = .90, sig.level=NULL) ## => sig.l = 0.00131 power.prop.test(p1 = .5, p2 = 0.501, sig.level=.001, power=0.90) ## => n = 10451937 try( power.prop.test(n=30, p1=0.90, p2=NULL, power=0.8) ) # a warning (which may become an error) ## Reason: power.prop.test( p1=0.90, p2= 1.0, power=0.8) ##-> n = 73.37 ``` r None `symnum` Symbolic Number Coding -------------------------------- ### Description Symbolically encode a given numeric or logical vector or array. Particularly useful for visualization of structured matrices, e.g., correlation, sparse, or logical ones. ### Usage ``` symnum(x, cutpoints = c(0.3, 0.6, 0.8, 0.9, 0.95), symbols = if(numeric.x) c(" ", ".", ",", "+", "*", "B") else c(".", "|"), legend = length(symbols) >= 3, na = "?", eps = 1e-5, numeric.x = is.numeric(x), corr = missing(cutpoints) && numeric.x, show.max = if(corr) "1", show.min = NULL, abbr.colnames = has.colnames, lower.triangular = corr && is.numeric(x) && is.matrix(x), diag.lower.tri = corr && !is.null(show.max)) ``` ### Arguments | | | | --- | --- | | `x` | numeric or logical vector or array. | | `cutpoints` | numeric vector whose values `cutpoints[j]` *== c[j]* (*after* augmentation, see `corr` below) are used for intervals. | | `symbols` | character vector, one shorter than (the *augmented*, see `corr` below) `cutpoints`. `symbols[j]` *== s[j]* are used as ‘code’ for the (half open) interval *(c[j], c[j+1]]*. When `numeric.x` is `FALSE`, i.e., by default when argument `x` is `logical`, the default is `c(".","|")` (graphical 0 / 1 s). | | `legend` | logical indicating if a `"legend"` attribute is desired. | | `na` | character or logical. How `[NA](../../base/html/na)s` are coded. 
If `na == FALSE`, `NA`s are coded invisibly, *including* the `"legend"` attribute below, which otherwise mentions NA coding. | | `eps` | absolute precision to be used at left and right boundary. | | `numeric.x` | logical indicating if `x` should be treated as numbers, otherwise as logical. | | `corr` | logical. If `TRUE`, `x` contains correlations. The cutpoints are augmented by `0` and `1` and `abs(x)` is coded. | | `show.max` | if `TRUE`, or of mode `character`, the maximal cutpoint is coded especially. | | `show.min` | if `TRUE`, or of mode `character`, the minimal cutpoint is coded especially. | | `abbr.colnames` | logical, integer or `NULL` indicating how column names should be abbreviated (if they are); if `NULL` (or `FALSE` and `x` has no column names), the column names will all be empty, i.e., `""`; otherwise if `abbr.colnames` is false, they are left unchanged. If `TRUE` or integer, existing column names will be abbreviated to `[abbreviate](../../base/html/abbreviate)(*, minlength = abbr.colnames)`. | | `lower.triangular` | logical. If `TRUE` and `x` is a matrix, only the *lower triangular* part of the matrix is coded as non-blank. | | `diag.lower.tri` | logical. If `lower.triangular` *and* this are `TRUE`, the *diagonal* part of the matrix is shown. | ### Value An atomic character object of class `[noquote](../../base/html/noquote)` and the same dimensions as `x`. If `legend` is `TRUE` (as by default when there are more than two classes), the result has an attribute `"legend"` containing a legend of the returned character codes, in the form *c[1] ‘s[1]’ c[2] ‘s[2]’ … ‘s[n]’ c[n+1]* where *c[j]* `= cutpoints[j]` and *s[j]* `= symbols[j]`. ### Note The optional (mostly logical) arguments all try to use smart defaults. Specifying them explicitly may lead to considerably improved output in many cases. ### Author(s) Martin Maechler [[email protected]](mailto:[email protected]) ### See Also `[as.character](../../base/html/character)`; `[image](../../graphics/html/image)` ### Examples ``` ii <- setNames(0:8, 0:8) symnum(ii, cutpoints = 2*(0:4), symbols = c(".", "-", "+", "$")) symnum(ii, cutpoints = 2*(0:4), symbols = c(".", "-", "+", "$"), show.max = TRUE) symnum(1:12 %% 3 == 0) # --> "|" = TRUE, "." = FALSE for logical ## Pascal's Triangle modulo 2 -- odd and even numbers: N <- 38 pascal <- t(sapply(0:N, function(n) round(choose(n, 0:N - (N-n)%/%2)))) rownames(pascal) <- rep("", 1+N) # <-- to improve "graphic" symnum(pascal %% 2, symbols = c(" ", "A"), numeric.x = FALSE) ##-- Symbolic correlation matrices: symnum(cor(attitude), diag.lower.tri = FALSE) symnum(cor(attitude), abbr.colnames = NULL) symnum(cor(attitude), abbr.colnames = FALSE) symnum(cor(attitude), abbr.colnames = 2) symnum(cor(rbind(1, rnorm(25), rnorm(25)^2))) symnum(cor(matrix(rexp(30, 1), 5, 18))) # <<-- PATTERN !
-- symnum(cm1 <- cor(matrix(rnorm(90) , 5, 18))) # < White Noise SMALL n symnum(cm1, diag.lower.tri = FALSE) symnum(cm2 <- cor(matrix(rnorm(900), 50, 18))) # < White Noise "BIG" n symnum(cm2, lower.triangular = FALSE) ## NA's: Cm <- cor(matrix(rnorm(60), 10, 6)); Cm[c(3,6), 2] <- NA symnum(Cm, show.max = NULL) ## Graphical P-values (aka "significance stars"): pval <- rev(sort(c(outer(1:6, 10^-(1:3))))) symp <- symnum(pval, corr = FALSE, cutpoints = c(0, .001,.01,.05, .1, 1), symbols = c("***","**","*","."," ")) noquote(cbind(P.val = format(pval), Signif = symp)) ``` r None `SSbiexp` Self-Starting Nls Biexponential model ------------------------------------------------ ### Description This `selfStart` model evaluates the biexponential model function and its gradient. It has an `initial` attribute that creates initial estimates of the parameters `A1`, `lrc1`, `A2`, and `lrc2`. ### Usage ``` SSbiexp(input, A1, lrc1, A2, lrc2) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `A1` | a numeric parameter representing the multiplier of the first exponential. | | `lrc1` | a numeric parameter representing the natural logarithm of the rate constant of the first exponential. | | `A2` | a numeric parameter representing the multiplier of the second exponential. | | `lrc2` | a numeric parameter representing the natural logarithm of the rate constant of the second exponential. | ### Value a numeric vector of the same length as `input`. It is the value of the expression `A1*exp(-exp(lrc1)*input)+A2*exp(-exp(lrc2)*input)`. If all of the arguments `A1`, `lrc1`, `A2`, and `lrc2` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)` ### Examples ``` Indo.1 <- Indometh[Indometh$Subject == 1, ] SSbiexp( Indo.1$time, 3, 1, 0.6, -1.3 ) # response only A1 <- 3; lrc1 <- 1; A2 <- 0.6; lrc2 <- -1.3 SSbiexp( Indo.1$time, A1, lrc1, A2, lrc2 ) # response and gradient print(getInitial(conc ~ SSbiexp(time, A1, lrc1, A2, lrc2), data = Indo.1), digits = 5) ## Initial values are in fact the converged values fm1 <- nls(conc ~ SSbiexp(time, A1, lrc1, A2, lrc2), data = Indo.1) summary(fm1) ## Show the model components visually require(graphics) xx <- seq(0, 5, length.out = 101) y1 <- 3.5 * exp(-4*xx) y2 <- 1.5 * exp(-xx) plot(xx, y1 + y2, type = "l", lwd=2, ylim = c(-0.2,6), xlim = c(0, 5), main = "Components of the SSbiexp model") lines(xx, y1, lty = 2, col="tomato"); abline(v=0, h=0, col="gray40") lines(xx, y2, lty = 3, col="blue2" ) legend("topright", c("y1+y2", "y1 = 3.5 * exp(-4*x)", "y2 = 1.5 * exp(-x)"), lty=1:3, col=c("black","tomato","blue2"), bty="n") axis(2, pos=0, at = c(3.5, 1.5), labels = c("A1","A2"), las=2) ## and how you could have got their sum via SSbiexp(): ySS <- SSbiexp(xx, 3.5, log(4), 1.5, log(1)) ## --- --- stopifnot(all.equal(y1+y2, ySS, tolerance = 1e-15)) ## Show a no-noise example datN <- data.frame(time = (0:600)/64) datN$conc <- predict(fm1, newdata=datN) plot(conc ~ time, data=datN) # perfect, no noise ## IGNORE_RDIFF_BEGIN ## Fails by default (scaleOffset=0): ## Not run: nls(conc ~ SSbiexp(time, A1, lrc1, A2, lrc2), data = datN, trace=TRUE) ## End(Not run) fmX1 <- nls(conc ~ SSbiexp(time, A1, lrc1, A2, lrc2), data = datN, control = list(scaleOffset=1)) fmX <- nls(conc ~ SSbiexp(time, A1, lrc1, A2, lrc2), data = datN, control = list(scaleOffset=1, printEval=TRUE, tol=1e-11, 
nDcentral=TRUE), trace=TRUE) all.equal(coef(fm1), coef(fmX1), tolerance=0) # ... rel.diff.: 1.57e-6 all.equal(coef(fm1), coef(fmX), tolerance=0) # ... rel.diff.: 1.03e-12 ## IGNORE_RDIFF_END stopifnot(all.equal(coef(fm1), coef(fmX1), tolerance = 6e-6), all.equal(coef(fm1), coef(fmX ), tolerance = 1e-11)) ```
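As a small follow-up to the ‘Value’ section above, a sketch showing how to inspect the `gradient` attribute that is attached when the parameters are supplied as names:

```
A1 <- 3; lrc1 <- 1; A2 <- 0.6; lrc2 <- -1.3
val <- SSbiexp(Indometh$time[1:5], A1, lrc1, A2, lrc2)
attr(val, "gradient")  # 5 x 4 matrix with columns A1, lrc1, A2, lrc2
```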
r None `integrate` Integration of One-Dimensional Functions ----------------------------------------------------- ### Description Adaptive quadrature of functions of one variable over a finite or infinite interval. ### Usage ``` integrate(f, lower, upper, ..., subdivisions = 100L, rel.tol = .Machine$double.eps^0.25, abs.tol = rel.tol, stop.on.error = TRUE, keep.xy = FALSE, aux = NULL) ``` ### Arguments | | | | --- | --- | | `f` | an **R** function taking a numeric first argument and returning a numeric vector of the same length. Returning a non-finite element will generate an error. | | `lower, upper` | the limits of integration. Can be infinite. | | `...` | additional arguments to be passed to `f`. | | `subdivisions` | the maximum number of subintervals. | | `rel.tol` | relative accuracy requested. | | `abs.tol` | absolute accuracy requested. | | `stop.on.error` | logical. If true (the default) an error stops the function. If false some errors will give a result with a warning in the `message` component. | | `keep.xy` | unused. For compatibility with S. | | `aux` | unused. For compatibility with S. | ### Details Note that arguments after `...` must be matched exactly. If one or both limits are infinite, the infinite range is mapped onto a finite interval. For a finite interval, globally adaptive interval subdivision is used in connection with extrapolation by Wynn's Epsilon algorithm, with the basic step being Gauss–Kronrod quadrature. `rel.tol` cannot be less than `max(50*.Machine$double.eps, 0.5e-28)` if `abs.tol <= 0`. Note that the comments in the C source code in ‘<R>/src/appl/integrate.c’ give more details, particularly about reasons for failure (internal error code `ier >= 1`). In **R** versions *<=* 3.2.x, the first entries of `lower` and `upper` were used whereas an error is signalled now if they are not of length one. ### Value A list of class `"integrate"` with components | | | | --- | --- | | `value` | the final estimate of the integral. | | `abs.error` | estimate of the modulus of the absolute error. | | `subdivisions` | the number of subintervals produced in the subdivision process. | | `message` | `"OK"` or a character string giving the error message. | | `call` | the matched call. | ### Note Like all numerical integration routines, these evaluate the function on a finite set of points. If the function is approximately constant (in particular, zero) over nearly all its range it is possible that the result and error estimate may be seriously wrong. When integrating over infinite intervals do so explicitly, rather than just using a large number as the endpoint. This increases the chance of a correct answer – any function whose integral over an infinite interval is finite must be near zero for most of that interval. For values at a finite set of points to be a fair reflection of the behaviour of the function elsewhere, the function needs to be well-behaved, for example differentiable except perhaps for a small number of jumps or integrable singularities. `f` must accept a vector of inputs and produce a vector of function evaluations at those points. The `[Vectorize](../../base/html/vectorize)` function may be helpful to convert `f` to this form. ### Source Based on QUADPACK routines `dqags` and `dqagi` by R. Piessens and E. deDoncker–Kapenga, available from Netlib. ### References R. Piessens, E. deDoncker–Kapenga, C. Uberhuber, D. Kahaner (1983) *Quadpack: a Subroutine Package for Automatic Integration*; Springer Verlag. 
### Examples ``` integrate(dnorm, -1.96, 1.96) integrate(dnorm, -Inf, Inf) ## a slowly-convergent integral integrand <- function(x) {1/((x+1)*sqrt(x))} integrate(integrand, lower = 0, upper = Inf) ## don't do this if you really want the integral from 0 to Inf integrate(integrand, lower = 0, upper = 10) integrate(integrand, lower = 0, upper = 100000) integrate(integrand, lower = 0, upper = 1000000, stop.on.error = FALSE) ## some functions do not handle vector input properly f <- function(x) 2.0 try(integrate(f, 0, 1)) integrate(Vectorize(f), 0, 1) ## correct integrate(function(x) rep(2.0, length(x)), 0, 1) ## correct ## integrate can fail if misused integrate(dnorm, 0, 2) integrate(dnorm, 0, 20) integrate(dnorm, 0, 200) integrate(dnorm, 0, 2000) integrate(dnorm, 0, 20000) ## fails on many systems integrate(dnorm, 0, Inf) ## works integrate(dnorm, 0:1, 20) #-> error! ## "silently" gave integrate(dnorm, 0, 20) in earlier versions of R ``` r None `medpolish` Median Polish (Robust Twoway Decomposition) of a Matrix -------------------------------------------------------------------- ### Description Fits an additive model (twoway decomposition) using Tukey's *median polish* procedure. ### Usage ``` medpolish(x, eps = 0.01, maxiter = 10, trace.iter = TRUE, na.rm = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a numeric matrix. | | `eps` | real number greater than 0. A tolerance for convergence: see ‘Details’. | | `maxiter` | the maximum number of iterations. | | `trace.iter` | logical. Should progress in convergence be reported? | | `na.rm` | logical. Should missing values be removed? | ### Details The model fitted is additive (constant + rows + columns). The algorithm works by alternately removing the row and column medians, and continues until the proportional reduction in the sum of absolute residuals is less than `eps` or until there have been `maxiter` iterations. The sum of absolute residuals is printed at each iteration of the fitting process, if `trace.iter` is `TRUE`. If `na.rm` is `FALSE` the presence of any `NA` value in `x` will cause an error, otherwise `NA` values are ignored. `medpolish` returns an object of class `medpolish` (see below). There are printing and plotting methods for this class, which are invoked via the generics `[print](../../base/html/print)` and `[plot](../../graphics/html/plot.default)`. ### Value An object of class `medpolish` with the following named components: | | | | --- | --- | | `overall` | the fitted constant term. | | `row` | the fitted row effects. | | `col` | the fitted column effects. | | `residuals` | the residuals. | | `name` | the name of the dataset. | ### References Tukey, J. W. (1977). *Exploratory Data Analysis*, Reading Massachusetts: Addison-Wesley. ### See Also `<median>`; `<aov>` for a *mean* instead of *median* decomposition. ### Examples ``` require(graphics) ## Deaths from sport parachuting; from ABC of EDA, p.224: deaths <- rbind(c(14,15,14), c( 7, 4, 7), c( 8, 2,10), c(15, 9,10), c( 0, 2, 0)) dimnames(deaths) <- list(c("1-24", "25-74", "75-199", "200++", "NA"), paste(1973:1975)) deaths (med.d <- medpolish(deaths)) plot(med.d) ## Check decomposition: all(deaths == med.d$overall + outer(med.d$row,med.d$col, "+") + med.d$residuals) ``` r None `loglin` Fitting Log-Linear Models ----------------------------------- ### Description `loglin` is used to fit log-linear models to multidimensional contingency tables by Iterative Proportional Fitting.
### Usage ``` loglin(table, margin, start = rep(1, length(table)), fit = FALSE, eps = 0.1, iter = 20, param = FALSE, print = TRUE) ``` ### Arguments | | | | --- | --- | | `table` | a contingency table to be fit, typically the output from `table`. | | `margin` | a list of vectors with the marginal totals to be fit. (Hierarchical) log-linear models can be specified in terms of these marginal totals which give the ‘maximal’ factor subsets contained in the model. For example, in a three-factor model, `list(c(1, 2), c(1, 3))` specifies a model which contains parameters for the grand mean, each factor, and the 1-2 and 1-3 interactions, respectively (but no 2-3 or 1-2-3 interaction), i.e., a model where factors 2 and 3 are independent conditional on factor 1 (sometimes represented as ‘[12][13]’). The names of factors (i.e., `names(dimnames(table))`) may be used rather than numeric indices. | | `start` | a starting estimate for the fitted table. This optional argument is important for incomplete tables with structural zeros in `table` which should be preserved in the fit. In this case, the corresponding entries in `start` should be zero and the others can be taken as one. | | `fit` | a logical indicating whether the fitted values should be returned. | | `eps` | maximum deviation allowed between observed and fitted margins. | | `iter` | maximum number of iterations. | | `param` | a logical indicating whether the parameter values should be returned. | | `print` | a logical. If `TRUE`, the number of iterations and the final deviation are printed. | ### Details The Iterative Proportional Fitting algorithm as presented in Haberman (1972) is used for fitting the model. At most `iter` iterations are performed; convergence is taken to occur when the maximum deviation between observed and fitted margins is less than `eps`. All internal computations are done in double precision; there is no limit on the number of factors (the dimension of the table) in the model. Assuming that there are no structural zeros, both the Likelihood Ratio Test and Pearson test statistics have an asymptotic chi-squared distribution with `df` degrees of freedom. Note that the IPF steps are applied to the factors in the order given in `margin`. Hence if the model is decomposable and the order given in `margin` is a running intersection property ordering then IPF will converge in one iteration. Package [MASS](https://CRAN.R-project.org/package=MASS) contains `loglm`, a front-end to `loglin` which allows the log-linear model to be specified and fitted in a formula-based manner similar to that of other fitting functions such as `lm` or `glm`. ### Value A list with the following components. | | | | --- | --- | | `lrt` | the Likelihood Ratio Test statistic. | | `pearson` | the Pearson test statistic (X-squared). | | `df` | the degrees of freedom for the fitted model. There is no adjustment for structural zeros. | | `margin` | list of the margins that were fit. Basically the same as the input `margin`, but with numbers replaced by names where possible. | | `fit` | An array like `table` containing the fitted values. Only returned if `fit` is `TRUE`. | | `param` | A list containing the estimated parameters of the model. The ‘standard’ constraints of zero marginal sums (e.g., zero row and column sums for a two factor parameter) are employed. Only returned if `param` is `TRUE`. | ### Author(s) Kurt Hornik ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988). *The New S Language*. Wadsworth & Brooks/Cole. Haberman, S. J.
(1972). Algorithm AS 51: Log-linear fit for contingency tables. *Applied Statistics*, **21**, 218–225. doi: [10.2307/2346506](https://doi.org/10.2307/2346506). Agresti, A. (1990). *Categorical data analysis*. New York: Wiley. ### See Also `[table](../../base/html/table)`. `[loglm](../../mass/html/loglm)` in package [MASS](https://CRAN.R-project.org/package=MASS) for a user-friendly wrapper. `<glm>` for another way to fit log-linear models. ### Examples ``` ## Model of joint independence of sex from hair and eye color. fm <- loglin(HairEyeColor, list(c(1, 2), c(1, 3), c(2, 3))) fm 1 - pchisq(fm$lrt, fm$df) ## Model with no three-factor interactions fits well. ``` r None `sortedXyData` Create a sortedXyData Object -------------------------------------------- ### Description This is a constructor function for the class of `sortedXyData` objects. These objects are mostly used in the `initial` function for a self-starting nonlinear regression model, which will be of the `selfStart` class. ### Usage ``` sortedXyData(x, y, data) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector or an expression that will evaluate in `data` to a numeric vector | | `y` | a numeric vector or an expression that will evaluate in `data` to a numeric vector | | `data` | an optional data frame in which to evaluate expressions for `x` and `y`, if they are given as expressions | ### Value A `sortedXyData` object. This is a data frame with exactly two numeric columns, named `x` and `y`. The rows are sorted so the `x` column is in increasing order. Duplicate `x` values are eliminated by averaging the corresponding `y` values. ### Author(s) José Pinheiro and Douglas Bates ### See Also `[selfStart](selfstart)`, `[NLSstClosestX](nlsstclosestx)`, `[NLSstLfAsymptote](nlsstlfasymptote)`, `[NLSstRtAsymptote](nlsstrtasymptote)` ### Examples ``` DNase.2 <- DNase[ DNase$Run == "2", ] sortedXyData( expression(log(conc)), expression(density), DNase.2 ) ``` r None `na.action` NA Action ---------------------- ### Description Extract information on the NA action used to create an object. ### Usage ``` na.action(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | any object whose `[NA](../../base/html/na)` action is given. | | `...` | further arguments special methods could require. | ### Details `na.action` is a generic function, and `na.action.default` its default method. The latter extracts the `"na.action"` component of a list if present, otherwise the `"na.action"` attribute. When `<model.frame>` is called, it records any information on `NA` handling in a `"na.action"` attribute. Most model-fitting functions return this as a component of their result. ### Value Information from the action which was applied to `object` if `NA`s were handled specially, or `NULL`. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S.* Wadsworth & Brooks/Cole. ### See Also `[options](../../base/html/options)("na.action")`, `[na.omit](na.fail)`, `<na.fail>`, also for `na.exclude`, `na.pass`. ### Examples ``` na.action(na.omit(c(1, NA))) ``` r None `arima` ARIMA Modelling of Time Series --------------------------------------- ### Description Fit an ARIMA model to a univariate time series. 
### Usage ``` arima(x, order = c(0L, 0L, 0L), seasonal = list(order = c(0L, 0L, 0L), period = NA), xreg = NULL, include.mean = TRUE, transform.pars = TRUE, fixed = NULL, init = NULL, method = c("CSS-ML", "ML", "CSS"), n.cond, SSinit = c("Gardner1980", "Rossignol2011"), optim.method = "BFGS", optim.control = list(), kappa = 1e6) ``` ### Arguments | | | | --- | --- | | `x` | a univariate time series | | `order` | A specification of the non-seasonal part of the ARIMA model: the three integer components *(p, d, q)* are the AR order, the degree of differencing, and the MA order. | | `seasonal` | A specification of the seasonal part of the ARIMA model, plus the period (which defaults to `frequency(x)`). This may be a list with components `order` and `period`, or just a numeric vector of length 3 which specifies the seasonal `order`. In the latter case the default period is used. | | `xreg` | Optionally, a vector or matrix of external regressors, which must have the same number of rows as `x`. | | `include.mean` | Should the ARMA model include a mean/intercept term? The default is `TRUE` for undifferenced series, and it is ignored for ARIMA models with differencing. | | `transform.pars` | logical; if true, the AR parameters are transformed to ensure that they remain in the region of stationarity. Not used for `method = "CSS"`. For `method = "ML"`, it has been advantageous to set `transform.pars = FALSE` in some cases, see also `fixed`. | | `fixed` | optional numeric vector of the same length as the total number of coefficients to be estimated. It should be of the form *(phi[1], …, phi[p], theta[1], …, theta[q], Phi[1], …, Phi[P], Theta[1], …, Theta[Q], mu),* where *phi[i]* are the AR coefficients, *theta[i]* are the MA coefficients, *Phi[i]* are the seasonal AR coefficients, *Theta[i]* are the seasonal MA coefficients and *mu* is the intercept term. Note that the *mu* entry is required if and only if `include.mean` is `TRUE`. In particular it should not be present if the model is an ARIMA model with differencing. The entries of the `fixed` vector should consist of the values at which the user wishes to “fix” the corresponding coefficient, or `NA` if that coefficient should *not* be fixed, but estimated. The argument `transform.pars` will be set to `FALSE` if any AR parameters are fixed. A warning will be given if `transform.pars` is set to (or left at its default) `TRUE`. It may be wise to set `transform.pars = FALSE` even when fixing MA parameters, especially at values that cause the model to be nearly non-invertible. | | `init` | optional numeric vector of initial parameter values. Missing values will be filled in, by zeroes except for regression coefficients. Values already specified in `fixed` will be ignored. | | `method` | fitting method: maximum likelihood or minimization of the conditional sum-of-squares. The default (unless there are missing values) is to use conditional-sum-of-squares to find starting values, then maximum likelihood. Can be abbreviated. | | `n.cond` | only used if fitting by conditional-sum-of-squares: the number of initial observations to ignore. It will be ignored if less than the maximum lag of an AR term. | | `SSinit` | a string specifying the algorithm to compute the state-space initialization of the likelihood; see `[KalmanLike](kalmanlike)` for details. Can be abbreviated. | | `optim.method` | The value passed as the `method` argument to `<optim>`. | | `optim.control` | List of control parameters for `<optim>`.
| | `kappa` | the prior variance (as a multiple of the innovations variance) for the past observations in a differenced model. Do not reduce this. | ### Details Different definitions of ARMA models have different signs for the AR and/or MA coefficients. The definition used here has *X[t] = a[1]X[t-1] + … + a[p]X[t-p] + e[t] + b[1]e[t-1] + … + b[q]e[t-q]* and so the MA coefficients differ in sign from those of S-PLUS. Further, if `include.mean` is true (the default for an ARMA model), this formula applies to *X - m* rather than *X*. For ARIMA models with differencing, the differenced series follows a zero-mean ARMA model. If an `xreg` term is included, a linear regression (with a constant term if `include.mean` is true and there is no differencing) is fitted with an ARMA model for the error term. The variance matrix of the estimates is found from the Hessian of the log-likelihood, and so may only be a rough guide. Optimization is done by `<optim>`. It will work best if the columns in `xreg` are roughly scaled to zero mean and unit variance, but does attempt to estimate suitable scalings. ### Value A list of class `"Arima"` with components: | | | | --- | --- | | `coef` | a vector of AR, MA and regression coefficients, which can be extracted by the `<coef>` method. | | `sigma2` | the MLE of the innovations variance. | | `var.coef` | the estimated variance matrix of the coefficients `coef`, which can be extracted by the `<vcov>` method. | | `loglik` | the maximized log-likelihood (of the differenced data), or the approximation to it used. | | `arma` | A compact form of the specification, as a vector giving the number of AR, MA, seasonal AR and seasonal MA coefficients, plus the period and the number of non-seasonal and seasonal differences. | | `aic` | the AIC value corresponding to the log-likelihood. Only valid for `method = "ML"` fits. | | `residuals` | the fitted innovations. | | `call` | the matched call. | | `series` | the name of the series `x`. | | `code` | the convergence value returned by `<optim>`. | | `n.cond` | the number of initial observations not used in the fitting. | | `nobs` | the number of “used” observations for the fitting, can also be extracted via `<nobs>()` and is used by `[BIC](aic)`. | | `model` | A list representing the Kalman Filter used in the fitting. See `[KalmanLike](kalmanlike)`. | ### Fitting methods The exact likelihood is computed via a state-space representation of the ARIMA process, and the innovations and their variance found by a Kalman filter. The initialization of the differenced ARMA process uses stationarity and is based on Gardner *et al* (1980). For a differenced process the non-stationary components are given a diffuse prior (controlled by `kappa`). Observations which are still controlled by the diffuse prior (determined by having a Kalman gain of at least `1e4`) are excluded from the likelihood calculations. (This gives comparable results to `<arima0>` in the absence of missing values, when the observations excluded are precisely those dropped by the differencing.) Missing values are allowed, and are handled exactly in method `"ML"`. If `transform.pars` is true, the optimization is done using an alternative parametrization which is a variation on that suggested by Jones (1980) and ensures that the model is stationary. For an AR(p) model the parametrization is via the inverse tanh of the partial autocorrelations: the same procedure is applied (separately) to the AR and seasonal AR terms. 
The MA terms are not constrained to be invertible during optimization, but they will be converted to invertible form after optimization if `transform.pars` is true. Conditional sum-of-squares is provided mainly for expositional purposes. This computes the sum of squares of the fitted innovations from observation `n.cond` on (where `n.cond` is at least the maximum lag of an AR term), treating all earlier innovations as zero. Argument `n.cond` can be used to allow comparability between different fits. The ‘part log-likelihood’ is the first term, half the log of the estimated mean square. Missing values are allowed, but will cause many of the innovations to be missing. When regressors are specified, they are orthogonalized prior to fitting unless any of the coefficients is fixed. It can be helpful to roughly scale the regressors to zero mean and unit variance. ### Note The results are likely to be different from S-PLUS's `arima.mle`, which computes a conditional likelihood and does not include a mean in the model. Further, the convention used by `arima.mle` reverses the signs of the MA coefficients. `arima` is very similar to `<arima0>` for ARMA models or for differenced models without missing values, but handles differenced models with missing values exactly. It is somewhat slower than `arima0`, particularly for seasonally differenced models. ### References Brockwell, P. J. and Davis, R. A. (1996). *Introduction to Time Series and Forecasting*. Springer, New York. Sections 3.3 and 8.3. Durbin, J. and Koopman, S. J. (2001). *Time Series Analysis by State Space Methods*. Oxford University Press. Gardner, G., Harvey, A. C. and Phillips, G. D. A. (1980). Algorithm AS 154: An algorithm for exact maximum likelihood estimation of autoregressive-moving average models by means of Kalman filtering. *Applied Statistics*, **29**, 311–322. doi: [10.2307/2346910](https://doi.org/10.2307/2346910). Harvey, A. C. (1993). *Time Series Models*. 2nd Edition. Harvester Wheatsheaf. Sections 3.3 and 4.4. Jones, R. H. (1980). Maximum likelihood fitting of ARMA models to time series with missing observations. *Technometrics*, **22**, 389–395. doi: [10.2307/1268324](https://doi.org/10.2307/1268324). Ripley, B. D. (2002). “Time series in **R** 1.5.0”. *R News*, **2**(2), 2–7. <https://www.r-project.org/doc/Rnews/Rnews_2002-2.pdf> ### See Also `[predict.Arima](predict.arima)`, `<arima.sim>` for simulating from an ARIMA model, `<tsdiag>`, `<arima0>`, `<ar>` ### Examples ``` arima(lh, order = c(1,0,0)) arima(lh, order = c(3,0,0)) arima(lh, order = c(1,0,1)) arima(lh, order = c(3,0,0), method = "CSS") arima(USAccDeaths, order = c(0,1,1), seasonal = list(order = c(0,1,1))) arima(USAccDeaths, order = c(0,1,1), seasonal = list(order = c(0,1,1)), method = "CSS") # drops first 13 observations. # for a model with as few years as this, we want full ML arima(LakeHuron, order = c(2,0,0), xreg = time(LakeHuron) - 1920) ## presidents contains NAs ## graphs in example(acf) suggest order 1 or 3 require(graphics) (fit1 <- arima(presidents, c(1, 0, 0))) nobs(fit1) tsdiag(fit1) (fit3 <- arima(presidents, c(3, 0, 0))) # smaller AIC tsdiag(fit3) BIC(fit1, fit3) ## compare a whole set of models; BIC() would choose the smallest AIC(fit1, arima(presidents, c(2,0,0)), arima(presidents, c(2,0,1)), # <- chosen (barely) by AIC fit3, arima(presidents, c(3,0,1))) ## An example of using the 'fixed' argument: ## Note that the period of the seasonal component is taken to be ## frequency(presidents), i.e. 4.
(fitSfx <- arima(presidents, order=c(2,0,1), seasonal=c(1,0,0), fixed=c(NA, NA, 0.5, -0.1, 50), transform.pars=FALSE)) ## The partly-fixed & smaller model seems better (as we "knew too much"): AIC(fitSfx, arima(presidents, order=c(2,0,1), seasonal=c(1,0,0))) ## An example of ARIMA forecasting: predict(fit3, 3) ```
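Echoing the sign convention spelled out in ‘Details’, a minimal sketch with simulated data (`<arima.sim>` uses the same parameterization, so the MA coefficient should be recovered with its sign intact, unlike under the S-PLUS convention):

```
set.seed(1)
y <- arima.sim(model = list(ma = 0.7), n = 500)
coef(arima(y, order = c(0, 0, 1)))  # 'ma1' estimated near +0.7, not -0.7
```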
r None `Logistic` The Logistic Distribution ------------------------------------- ### Description Density, distribution function, quantile function and random generation for the logistic distribution with parameters `location` and `scale`. ### Usage ``` dlogis(x, location = 0, scale = 1, log = FALSE) plogis(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) qlogis(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE) rlogis(n, location = 0, scale = 1) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `location, scale` | location and scale parameters. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details If `location` or `scale` are omitted, they assume the default values of `0` and `1` respectively. The Logistic distribution with `location` *= m* and `scale` *= s* has distribution function *F(x) = 1 / (1 + exp(-(x-m)/s))* and density *f(x) = 1/s exp((x-m)/s) (1 + exp((x-m)/s))^-2.* It is a long-tailed distribution with mean *m* and variance *π^2 /3 s^2*. ### Value `dlogis` gives the density, `plogis` gives the distribution function, `qlogis` gives the quantile function, and `rlogis` generates random deviates. The length of the result is determined by `n` for `rlogis`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note `qlogis(p)` is the same as the well known ‘*logit*’ function, *logit(p) = log(p/(1-p))*, and `plogis(x)` has consequently been called the ‘inverse logit’. The distribution function is a rescaled hyperbolic tangent, `plogis(x) == (1+ [tanh](../../base/html/hyperbolic)(x/2))/2`, and it is called a *sigmoid function* in contexts such as neural networks. ### Source `[dpq]logis` are calculated directly from the definitions. `rlogis` uses inversion. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, volume 2, chapter 23. Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions. ### Examples ``` var(rlogis(4000, 0, scale = 5)) # approximately (+/- 3) pi^2/3 * 5^2 ``` r None `lag.plot` Time Series Lag Plots --------------------------------- ### Description Plot time series against lagged versions of themselves. Helps visualize ‘auto-dependence’ even when auto-correlations vanish. ### Usage ``` lag.plot(x, lags = 1, layout = NULL, set.lags = 1:lags, main = NULL, asp = 1, diag = TRUE, diag.col = "gray", type = "p", oma = NULL, ask = NULL, do.lines = (n <= 150), labels = do.lines, ...) ``` ### Arguments | | | | --- | --- | | `x` | time-series (univariate or multivariate) | | `lags` | number of lag plots desired, see arg `set.lags`. | | `layout` | the layout of multiple plots, basically the `mfrow` `[par](../../graphics/html/par)()` argument. The default uses about a square layout (see `[n2mfrow](../../grdevices/html/n2mfrow)`) such that all plots are on one page. | | `set.lags` | vector of positive integers allowing specification of the set of lags used; defaults to `1:lags`.
| | `main` | character with a main header title to be done on the top of each page. | | `asp` | Aspect ratio to be fixed, see `[plot.default](../../graphics/html/plot.default)`. | | `diag` | logical indicating if the x=y diagonal should be drawn. | | `diag.col` | color to be used for the diagonal `if(diag)`. | | `type` | plot type to be used, but see `<plot.ts>` about its restricted meaning. | | `oma` | outer margins, see `[par](../../graphics/html/par)`. | | `ask` | logical or `NULL`; if true, the user is asked to confirm before a new page is started. | | `do.lines` | logical indicating if lines should be drawn. | | `labels` | logical indicating if labels should be used. | | `...` | Further arguments to `<plot.ts>`. Several graphical parameters are set in this function and so cannot be changed: these include `xlab`, `ylab`, `mgp`, `col.lab` and `font.lab`: this also applies to the arguments `xy.labels` and `xy.lines`. | ### Details If just one plot is produced, this is a conventional plot. If more than one plot is to be produced, `par(mfrow)` and several other graphics parameters will be set, so it is not (easily) possible to mix such lag plots with other plots on the same page. If `ask = NULL`, `par(ask = TRUE)` will be called if more than one page of plots is to be produced and the device is interactive. ### Note It is more flexible and has different default behaviour than the S version. We use `main =` instead of `head =` for internal consistency. ### Author(s) Martin Maechler ### See Also `<plot.ts>` which is the basic work horse. ### Examples ``` require(graphics) lag.plot(nhtemp, 8, diag.col = "forest green") lag.plot(nhtemp, 5, main = "Average Temperatures in New Haven") ## ask defaults to TRUE when we have more than one page: lag.plot(nhtemp, 6, layout = c(2,1), asp = NA, main = "New Haven Temperatures", col.main = "blue") ## Multivariate (but non-stationary! ...) lag.plot(freeny.x, lags = 3) ## no lines for long series : lag.plot(sqrt(sunspots), set.lags = c(1:4, 9:12), pch = ".", col = "gold") ``` r None `selfStart` Construct Self-starting Nonlinear Models ----------------------------------------------------- ### Description Construct self-starting nonlinear models to be used in `<nls>`, etc. Via the function `initial`, which computes approximate parameter values from the data, such models are “self-starting”, i.e., they do not need a `start` argument in, e.g., `<nls>()`. ### Usage ``` selfStart(model, initial, parameters, template) ``` ### Arguments | | | | --- | --- | | `model` | a function object defining a nonlinear model or a nonlinear `<formula>` object of the form `~ expression`. | | `initial` | a function object, taking arguments `mCall`, `data`, and `LHS` (and `...`), representing, respectively, a matched call to the function `model`, a data frame in which to interpret the variables in `mCall`, and the expression from the left-hand side of the model formula in the call to `nls`. This function should return initial values for the parameters in `model`. The `...` is used by `<nls>()` to pass its `control` and `trace` arguments for the cases where `initial()` itself calls `nls()` as it does for the ten self-starting nonlinear models in **R**'s stats package. | | `parameters` | a character vector specifying the terms on the right hand side of `model` for which initial estimates should be calculated. Passed as the `namevec` argument to the `deriv` function.
| | `template` | an optional prototype for the calling sequence of the returned object, passed as the `function.arg` argument to the `deriv` function. By default, a template is generated with the covariates in `model` coming first and the parameters in `model` coming last in the calling sequence. | ### Details `<nls>()` calls `[getInitial](getinitial)` and the `initial` function for these self-starting models. This function is generic; methods functions can be written to handle specific classes of objects. ### Value a `[function](../../base/html/function)` object of class `"selfStart"`, for the `formula` method obtained by applying `<deriv>` to the right hand side of the `model` formula. An `initial` attribute (defined by the `initial` argument) is added to the function to calculate starting estimates for the parameters in the model automatically. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[getInitial](getinitial)`. Each of the following are `"selfStart"` models (with examples) `[SSasymp](ssasymp)`, `[SSasympOff](ssasympoff)`, `[SSasympOrig](ssasymporig)`, `[SSbiexp](ssbiexp)`, `[SSfol](ssfol)`, `[SSfpl](ssfpl)`, `[SSgompertz](ssgompertz)`, `[SSlogis](sslogis)`, `[SSmicmen](ssmicmen)`, `[SSweibull](ssweibull)`. Further, package [nlme](https://CRAN.R-project.org/package=nlme)'s `[nlsList](../../nlme/html/nlslist)`. ### Examples ``` ## self-starting logistic model ## The "initializer" (finds initial values for parameters from data): initLogis <- function(mCall, data, LHS) { xy <- data.frame(sortedXyData(mCall[["input"]], LHS, data)) if(nrow(xy) < 4) stop("too few distinct input values to fit a logistic model") z <- xy[["y"]] ## transform to proportion, i.e. in (0,1) : rng <- range(z); dz <- diff(rng) z <- (z - rng[1L] + 0.05 * dz)/(1.1 * dz) xy[["z"]] <- log(z/(1 - z)) # logit transformation aux <- coef(lm(x ~ z, xy)) pars <- coef(nls(y ~ 1/(1 + exp((xmid - x)/scal)), data = xy, start = list(xmid = aux[[1L]], scal = aux[[2L]]), algorithm = "plinear")) setNames(pars[c(".lin", "xmid", "scal")], nm = mCall[c("Asym", "xmid", "scal")]) } SSlogis <- selfStart(~ Asym/(1 + exp((xmid - x)/scal)), initial = initLogis, parameters = c("Asym", "xmid", "scal")) # 'first.order.log.model' is a function object defining a first order # compartment model # 'first.order.log.initial' is a function object which calculates initial # values for the parameters in 'first.order.log.model' # # self-starting first order compartment model ## Not run: SSfol <- selfStart(first.order.log.model, first.order.log.initial) ## End(Not run) ## Explore the self-starting models already available in R's "stats": pos.st <- which("package:stats" == search()) mSS <- apropos("^SS..", where = TRUE, ignore.case = FALSE) (mSS <- unname(mSS[names(mSS) == pos.st])) fSS <- sapply(mSS, get, pos = pos.st, mode = "function") all(sapply(fSS, inherits, "selfStart")) # -> TRUE ## Show the argument list of each self-starting function: str(fSS, give.attr = FALSE) ``` r None `reshape` Reshape Grouped Data ------------------------------- ### Description This function reshapes a data frame between ‘wide’ format (with repeated measurements in separate columns of the same row) and ‘long’ format (with the repeated measurements in separate rows). 
### Usage ``` reshape(data, varying = NULL, v.names = NULL, timevar = "time", idvar = "id", ids = 1:NROW(data), times = seq_along(varying[[1]]), drop = NULL, direction, new.row.names = NULL, sep = ".", split = if (sep == "") { list(regexp = "[A-Za-z][0-9]", include = TRUE) } else { list(regexp = sep, include = FALSE, fixed = TRUE)} ) ### Typical usage for converting from long to wide format: # reshape(data, direction = "wide", # idvar = "___", timevar = "___", # mandatory # v.names = c(___), # time-varying variables # varying = list(___)) # auto-generated if missing ### Typical usage for converting from wide to long format: ### If names of wide-format variables are in a 'nice' format # reshape(data, direction = "long", # varying = c(___), # vector # sep) # to help guess 'v.names' and 'times' ### To specify long-format variable names explicitly # reshape(data, direction = "long", # varying = ___, # list / matrix / vector (use with care) # v.names = ___, # vector of variable names in long format # timevar, times, # name / values of constructed time variable # idvar, ids) # name / values of constructed id variable ``` ### Arguments | | | | --- | --- | | `data` | a data frame | | `varying` | names of sets of variables in the wide format that correspond to single variables in long format (‘time-varying’). This is canonically a list of vectors of variable names, but it can optionally be a matrix of names, or a single vector of names. In each case, when `direction = "long"`, the names can be replaced by indices which are interpreted as referring to `names(data)`. See ‘Details’ for more details and options. | | `v.names` | names of variables in the long format that correspond to multiple variables in the wide format. See ‘Details’. | | `timevar` | the variable in long format that differentiates multiple records from the same group or individual. If more than one record matches, the first will be taken (with a warning). | | `idvar` | Names of one or more variables in long format that identify multiple records from the same group/individual. These variables may also be present in wide format. | | `ids` | the values to use for a newly created `idvar` variable in long format. | | `times` | the values to use for a newly created `timevar` variable in long format. See ‘Details’. | | `drop` | a vector of names of variables to drop before reshaping. | | `direction` | character string, partially matched to either `"wide"` to reshape to wide format, or `"long"` to reshape to long format. | | `new.row.names` | character or `NULL`: a non-null value will be used for the row names of the result. | | `sep` | A character vector of length 1, indicating a separating character in the variable names in the wide format. This is used for guessing `v.names` and `times` arguments based on the names in `varying`. If `sep == ""`, the split is just before the first numeral that follows an alphabetic character. This is also used to create variable names when reshaping to wide format. | | `split` | A list with three components, `regexp`, `include`, and (optionally) `fixed`. This allows an extended interface to variable name splitting. See ‘Details’. | ### Details Although `reshape()` can be used in a variety of contexts, the motivating application is data from longitudinal studies, and the arguments of this function are named and described in those terms. 
A longitudinal study is characterized by repeated measurements of the same variable(s), e.g., height and weight, on each unit being studied (e.g., individual persons) at different time points (which are assumed to be the same for all units). These variables are called time-varying variables. The study may include other variables that are measured only once for each unit and do not vary with time (e.g., gender and race); these are called time-constant variables. A ‘wide’ format representation of a longitudinal dataset will have one record (row) for each unit, typically with some time-constant variables that occupy single columns, and some time-varying variables that occupy multiple columns (one column for each time point). A ‘long’ format representation of the same dataset will have multiple records (rows) for each individual, with the time-constant variables being constant across these records and the time-varying variables varying across the records. The ‘long’ format dataset will have two additional variables: a ‘time’ variable identifying which time point each record comes from, and an ‘id’ variable showing which records refer to the same unit. The type of conversion (long to wide or wide to long) is determined by the `direction` argument, which is mandatory unless the `data` argument is the result of a previous call to `reshape`. In that case, the operation can be reversed simply using `reshape(data)` (the other arguments are stored as attributes on the data frame). Conversion from long to wide format with `direction = "wide"` is the simpler operation, and is mainly useful in the context of multivariate analysis where data is often expected as a wide-format matrix. In this case, the time variable `timevar` and id variable `idvar` must be specified. All other variables are assumed to be time-varying, unless the time-varying variables are explicitly specified via the `v.names` argument. A warning is issued if time-constant variables are not actually constant. Each time-varying variable is expanded into multiple variables in the wide format. The names of these expanded variables are generated automatically, unless they are specified as the `varying` argument in the form of a list (or matrix) with one component (or row) for each time-varying variable. If `varying` is a vector of names, it is implicitly converted into a matrix, with one row for each time-varying variable. Use this option with care if there are multiple time-varying variables, as the ordering (by column, the default in the `[matrix](../../base/html/matrix)` constructor) may be unintuitive, whereas the explicit list or matrix form is unambiguous. Conversion from wide to long with `direction = "long"` is the more common operation as most (univariate) statistical modeling functions expect data in the long format. In the simpler case where there is only one time-varying variable, the corresponding columns in the wide format input can be specified as the `varying` argument, which can be either a vector of column names or the corresponding column indices. The name of the corresponding variable in the long format output combining these columns can be optionally specified as the `v.names` argument, and the name of the time variables as the `timevar` argument. The values to use as the time values corresponding to the different columns in the wide format can be specified as the `times` argument. 
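As a minimal sketch of this simple case (the data frame `dfw` and its column names are made up for illustration, not taken from the original page):

```
dfw <- data.frame(id = 1:3, w.1 = c(10, 11, 12), w.2 = c(13, 14, 15))
reshape(dfw, direction = "long", varying = c("w.1", "w.2"),
        v.names = "w", timevar = "week", times = 1:2, idvar = "id")
```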
If `v.names` is unspecified, the function will attempt to guess `v.names` and `times` from `varying` (an explicitly specified `times` argument is unused in that case). The default expects variable names like `x.1`, `x.2`, where `sep = "."` specifies to split at the dot and drop it from the name. To have alphabetic followed by numeric times use `sep = ""`. Multiple time-varying variables can be specified in two ways, either with `varying` as an atomic vector as above, or as a list (or a matrix). The first form is useful (and mandatory) if the automatic variable name splitting as described above is used; this requires the names of all time-varying variables to be suitably formatted in the same manner, and `v.names` to be unspecified. If `varying` is a list (with one component for each time-varying variable) or a matrix (one row for each time-varying variable), variable name splitting is not attempted, and `v.names` and `times` will generally need to be specified, although they will default to, respectively, the first variable name in each set, and sequential times. Also, guessing is not attempted if `v.names` is given explicitly, even if `varying` is an atomic vector. In that case, the number of time-varying variables is taken to be the length of `v.names`, and `varying` is implicitly converted into a matrix, with one row for each time-varying variable. As in the case of long to wide conversion, the matrix is filled up by column, so careful attention needs to be paid to the order of variable names (or indices) in `varying`, which is taken to be like `x.1`, `y.1`, `x.2`, `y.2` (i.e., variables corresponding to the same time point need to be grouped together). The `split` argument should not usually be necessary. The `split$regexp` component is passed to either `[strsplit](../../base/html/strsplit)` or `[regexpr](../../base/html/grep)`, where the latter is used if `split$include` is `TRUE`, in which case the splitting occurs after the first character of the matched string. In the `[strsplit](../../base/html/strsplit)` case, the separator is not included in the result, and it is possible to specify fixed-string matching using `split$fixed`. ### Value The reshaped data frame with added attributes to simplify reshaping back to the original form. ### See Also `[stack](../../utils/html/stack)`, `[aperm](../../base/html/aperm)`; `[relist](../../utils/html/relist)` for reshaping the result of `[unlist](../../base/html/unlist)`. `<xtabs>` and `[as.data.frame.table](../../base/html/table)` for creating contingency tables and converting them back to data frames. 
### Examples ``` summary(Indometh) # data in long format ## long to wide (direction = "wide") requires idvar and timevar at a minimum reshape(Indometh, direction = "wide", idvar = "Subject", timevar = "time") ## can also explicitly specify name of combined variable wide <- reshape(Indometh, direction = "wide", idvar = "Subject", timevar = "time", v.names = "conc", sep= "_") wide ## reverse transformation reshape(wide, direction = "long") reshape(wide, idvar = "Subject", varying = list(2:12), v.names = "conc", direction = "long") ## times need not be numeric df <- data.frame(id = rep(1:4, rep(2,4)), visit = I(rep(c("Before","After"), 4)), x = rnorm(4), y = runif(4)) df reshape(df, timevar = "visit", idvar = "id", direction = "wide") ## warns that y is really varying reshape(df, timevar = "visit", idvar = "id", direction = "wide", v.names = "x") ## unbalanced 'long' data leads to NA fill in 'wide' form df2 <- df[1:7, ] df2 reshape(df2, timevar = "visit", idvar = "id", direction = "wide") ## Alternative regular expressions for guessing names df3 <- data.frame(id = 1:4, age = c(40,50,60,50), dose1 = c(1,2,1,2), dose2 = c(2,1,2,1), dose4 = c(3,3,3,3)) reshape(df3, direction = "long", varying = 3:5, sep = "") ## an example that isn't longitudinal data state.x77 <- as.data.frame(state.x77) long <- reshape(state.x77, idvar = "state", ids = row.names(state.x77), times = names(state.x77), timevar = "Characteristic", varying = list(names(state.x77)), direction = "long") reshape(long, direction = "wide") reshape(long, direction = "wide", new.row.names = unique(long$state)) ## multiple id variables df3 <- data.frame(school = rep(1:3, each = 4), class = rep(9:10, 6), time = rep(c(1,1,2,2), 3), score = rnorm(12)) wide <- reshape(df3, idvar = c("school", "class"), direction = "wide") wide ## transform back reshape(wide) ```
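A further illustration, not from the original page (the data frame `dfm` is made up), of how multiple time-varying variables are grouped as described in ‘Details’:

```
dfm <- data.frame(id = 1:2, x.1 = c(1, 2), y.1 = c(10, 20),
                  x.2 = c(3, 4), y.2 = c(30, 40))
## atomic 'varying' with automatic name splitting (default sep = "."):
reshape(dfm, direction = "long", varying = c("x.1", "y.1", "x.2", "y.2"),
        idvar = "id")
## the equivalent explicit list form, with 'v.names' and 'times' given:
reshape(dfm, direction = "long",
        varying = list(c("x.1", "x.2"), c("y.1", "y.2")),
        v.names = c("x", "y"), times = 1:2, idvar = "id")
```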
r None `SSasympOff` Self-Starting Nls Asymptotic Regression Model with an Offset -------------------------------------------------------------------------- ### Description This `selfStart` model evaluates an alternative parametrization of the asymptotic regression function and the gradient with respect to those parameters. It has an `initial` attribute that creates initial estimates of the parameters `Asym`, `lrc`, and `c0`. ### Usage ``` SSasympOff(input, Asym, lrc, c0) ``` ### Arguments | | | | --- | --- | | `input` | a numeric vector of values at which to evaluate the model. | | `Asym` | a numeric parameter representing the horizontal asymptote on the right side (very large values of `input`). | | `lrc` | a numeric parameter representing the natural logarithm of the rate constant. | | `c0` | a numeric parameter representing the `input` for which the response is zero. | ### Value a numeric vector of the same length as `input`. It is the value of the expression `Asym*(1 - exp(-exp(lrc)*(input - c0)))`. If all of the arguments `Asym`, `lrc`, and `c0` are names of objects, the gradient matrix with respect to these names is attached as an attribute named `gradient`. ### Author(s) José Pinheiro and Douglas Bates ### See Also `<nls>`, `[selfStart](selfstart)`; `example(SSasympOff)` gives graph showing the `SSasympOff` parametrization. ### Examples ``` CO2.Qn1 <- CO2[CO2$Plant == "Qn1", ] SSasympOff(CO2.Qn1$conc, 32, -4, 43) # response only local({ Asym <- 32; lrc <- -4; c0 <- 43 SSasympOff(CO2.Qn1$conc, Asym, lrc, c0) # response and gradient }) getInitial(uptake ~ SSasympOff(conc, Asym, lrc, c0), data = CO2.Qn1) ## Initial values are in fact the converged values fm1 <- nls(uptake ~ SSasympOff(conc, Asym, lrc, c0), data = CO2.Qn1) summary(fm1) ## Visualize the SSasympOff() model parametrization : xx <- seq(0.25, 8, by=1/16) yy <- 5 * (1 - exp(-(xx - 3/4)*0.4)) stopifnot( all.equal(yy, SSasympOff(xx, Asym = 5, lrc = log(0.4), c0 = 3/4)) ) require(graphics) op <- par(mar = c(0, 0, 4.0, 0)) plot(xx, yy, type = "l", axes = FALSE, ylim = c(-.5,6), xlim = c(-1, 8), xlab = "", ylab = "", lwd = 2, main = "Parameters in the SSasympOff model") mtext(quote(list(phi[1] == "Asym", phi[2] == "lrc", phi[3] == "c0"))) usr <- par("usr") arrows(usr[1], 0, usr[2], 0, length = 0.1, angle = 25) arrows(0, usr[3], 0, usr[4], length = 0.1, angle = 25) text(usr[2] - 0.2, 0.1, "x", adj = c(1, 0)) text( -0.1, usr[4], "y", adj = c(1, 1)) abline(h = 5, lty = 3) arrows(-0.8, c(2.1, 2.9), -0.8, c(0 , 5 ), length = 0.1, angle = 25) text (-0.8, 2.5, quote(phi[1])) segments(3/4, -.2, 3/4, 1.6, lty = 2) text (3/4, c(-.3, 1.7), quote(phi[3])) arrows(c(1.1, 1.4), -.15, c(3/4, 7/4), -.15, length = 0.07, angle = 25) text (3/4 + 1/2, -.15, quote(1)) segments(c(3/4, 7/4, 7/4), c(0, 0, 2), # 5 * exp(log(0.4)) = 2 c(7/4, 7/4, 3/4), c(0, 2, 0), lty = 2, lwd = 2) text( 7/4 +.1, 2./2, quote(phi[1]*e^phi[2]), adj = c(0, .5)) par(op) ``` r None `Geometric` The Geometric Distribution --------------------------------------- ### Description Density, distribution function, quantile function and random generation for the geometric distribution with parameter `prob`. ### Usage ``` dgeom(x, prob, log = FALSE) pgeom(q, prob, lower.tail = TRUE, log.p = FALSE) qgeom(p, prob, lower.tail = TRUE, log.p = FALSE) rgeom(n, prob) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles representing the number of failures in a sequence of Bernoulli trials before success occurs. | | `p` | vector of probabilities. | | `n` | number of observations. 
If `length(n) > 1`, the length is taken to be the number required. | | `prob` | probability of success in each trial. `0 < prob <= 1`. | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The geometric distribution with `prob` *= p* has density *p(x) = p (1-p)^x* for *x = 0, 1, 2, …*, *0 < p ≤ 1*. If an element of `x` is not an integer, the result of `dgeom` is zero, with a warning. The quantile is defined as the smallest value *x* such that *F(x) ≥ p*, where *F* is the distribution function. ### Value `dgeom` gives the density, `pgeom` gives the distribution function, `qgeom` gives the quantile function, and `rgeom` generates random deviates. Invalid `prob` will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rgeom`, and is the maximum of the lengths of the numerical arguments for the other functions. The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. `rgeom` returns a vector of type [integer](../../base/html/integer) unless generated values exceed the maximum representable integer, in which case `[double](../../base/html/double)` values are returned (since R version 4.0.0). ### Source `dgeom` computes via `dbinom`, using code contributed by Catherine Loader (see `[dbinom](family)`). `pgeom` and `qgeom` are based on the closed-form formulae. `rgeom` uses the derivation as an exponential mixture of Poissons, see Devroye, L. (1986) *Non-Uniform Random Variate Generation.* Springer-Verlag, New York. Page 480. ### See Also [Distributions](distributions) for other standard distributions, including `[dnbinom](negbinomial)` for the negative binomial which generalizes the geometric distribution. ### Examples ``` qgeom((1:9)/10, prob = .2) Ni <- rgeom(20, prob = 1/4); table(factor(Ni, 0:max(Ni))) ``` r None `cutree` Cut a Tree into Groups of Data ---------------------------------------- ### Description Cuts a tree, e.g., as resulting from `<hclust>`, into several groups either by specifying the desired number(s) of groups or the cut height(s). ### Usage ``` cutree(tree, k = NULL, h = NULL) ``` ### Arguments | | | | --- | --- | | `tree` | a tree as produced by `<hclust>`. `cutree()` only expects a list with components `merge`, `height`, and `labels`, of appropriate content each. | | `k` | an integer scalar or vector with the desired number of groups. | | `h` | numeric scalar or vector with heights where the tree should be cut. | At least one of `k` or `h` must be specified; `k` overrides `h` if both are given. ### Details Cutting trees at a given height is only possible for ultrametric trees (with monotone clustering heights). ### Value `cutree` returns a vector with group memberships if `k` or `h` are scalar, otherwise a matrix with group memberships is returned where each column corresponds to the elements of `k` or `h`, respectively (which are also used as column names). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<hclust>`, `<dendrogram>` for cutting trees themselves.
### Examples ``` hc <- hclust(dist(USArrests)) cutree(hc, k = 1:5) # k = 1 is trivial cutree(hc, h = 250) ## Compare the 2 and 4 groupings: g24 <- cutree(hc, k = c(2,4)) table(grp2 = g24[,"2"], grp4 = g24[,"4"]) ``` r None `formula` Model Formulae ------------------------- ### Description The generic function `formula` and its specific methods provide a way of extracting formulae which have been included in other objects. `as.formula` is almost identical, additionally preserving attributes when `object` already inherits from `"formula"`. ### Usage ``` formula(x, ...) DF2formula(x, env = parent.frame()) as.formula(object, env = parent.frame()) ## S3 method for class 'formula' print(x, showEnv = !identical(e, .GlobalEnv), ...) ``` ### Arguments | | | | --- | --- | | `x, object` | **R** object, for `DF2formula()` a `[data.frame](../../base/html/data.frame)`. | | `...` | further arguments passed to or from other methods. | | `env` | the environment to associate with the result, if not already a formula. | | `showEnv` | logical indicating if the environment should be printed as well. | ### Details The models fit by, e.g., the `<lm>` and `<glm>` functions are specified in a compact symbolic form. The `~` operator is basic in the formation of such models. An expression of the form `y ~ model` is interpreted as a specification that the response `y` is modelled by a linear predictor specified symbolically by `model`. Such a model consists of a series of terms separated by `+` operators. The terms themselves consist of variable and factor names separated by `:` operators. Such a term is interpreted as the interaction of all the variables and factors appearing in the term. In addition to `+` and `:`, a number of other operators are useful in model formulae. The `*` operator denotes factor crossing: `a*b` is interpreted as `a+b+a:b`. The `^` operator indicates crossing to the specified degree. For example `(a+b+c)^2` is identical to `(a+b+c)*(a+b+c)`, which in turn expands to a formula containing the main effects for `a`, `b` and `c` together with their second-order interactions. The `%in%` operator indicates that the terms on its left are nested within those on the right. For example `a + b %in% a` expands to the formula `a + a:b`. The `-` operator removes the specified terms, so that `(a+b+c)^2 - a:b` is identical to `a + b + c + b:c + a:c`. It can also be used to remove the intercept term: when fitting a linear model `y ~ x - 1` specifies a line through the origin. A model with no intercept can also be specified as `y ~ x + 0` or `y ~ 0 + x`. While formulae usually involve just variable and factor names, they can also involve arithmetic expressions. The formula `log(y) ~ a + log(x)` is quite legal. When such arithmetic expressions involve operators which are also used symbolically in model formulae, there can be confusion between arithmetic and symbolic operator use. To avoid this confusion, the function `[I](../../base/html/asis)()` can be used to bracket those portions of a model formula where the operators are used in their arithmetic sense. For example, in the formula `y ~ a + I(b+c)`, the term `b+c` is to be interpreted as the sum of `b` and `c`. Variable names can be quoted by backticks ``like this`` in formulae, although there is no guarantee that all code using formulae will accept such non-syntactic names. Most model-fitting functions accept formulae with right-hand-side including the function `<offset>` to indicate terms with a fixed coefficient of one.
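As a quick illustration of these expansions (a small sketch, not part of the original page), `terms()` can be used to inspect the terms a formula generates:

```
attr(terms(y ~ a * b), "term.labels") # "a" "b" "a:b"
attr(terms(y ~ (a + b + c)^2), "term.labels") # main effects plus 2nd-order interactions
attr(terms(y ~ (a + b + c)^2 - a:b), "term.labels") # as above, without "a:b"
attr(terms(y ~ a + I(b + c)), "term.labels") # "a" "I(b + c)"
```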
Some functions accept other ‘specials’ such as `strata` or `cluster` (see the `specials` argument of `<terms.formula>`). There are two special interpretations of `.` in a formula. The usual one is in the context of a `data` argument of model fitting functions and means ‘all columns not otherwise in the formula’: see `<terms.formula>`. In the context of `<update.formula>`, **only**, it means ‘what was previously in this part of the formula’. When `formula` is called on a fitted model object, either a specific method is used (such as that for class `"nls"`) or the default method. The default first looks for a `"formula"` component of the object (and evaluates it), then a `"terms"` component, then a `formula` parameter of the call (and evaluates its value) and finally a `"formula"` attribute. There is a `formula` method for data frames. When there is a `"terms"` attribute containing a formula, e.g., for a `<model.frame>()`, that formula is returned. If you'd like the previous (**R** *<=* 3.5.x) behavior, use the auxiliary `DF2formula()`, which does not consider a `"terms"` attribute. Otherwise, if there is only one column this forms the RHS with an empty LHS. For more columns, the first column is the LHS of the formula and the remaining columns separated by `+` form the RHS. ### Value All the functions above produce an object of class `"formula"` which contains a symbolic model formula. ### Environments A formula object has an associated environment, and this environment (rather than the parent environment) is used by `<model.frame>` to evaluate variables that are not found in the supplied `data` argument. Formulas created with the `~` operator use the environment in which they were created. Formulas created with `as.formula` will use the `env` argument for their environment. ### Note In **R** versions up to 3.6.0, `[character](../../base/html/character)` `x` of length more than one were parsed as separate lines of **R** code and the first complete expression was evaluated into a formula when possible. This silently truncated such character vectors, inefficiently and to some extent inconsistently, as this behaviour had been undocumented. For this reason, such use has been deprecated. If you must work via character `x`, do use a string, i.e., a character vector of length one. E.g., `eval(call("~", quote(foo + bar)))` has been an order of magnitude more efficient than `formula(c("~", "foo + bar"))`. Further, character “expressions” needing an `[eval](../../base/html/eval)()` to return a formula are now deprecated. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical models.* Chapter 2 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `[I](../../base/html/asis)`, `<offset>`. For formula manipulation: `<terms>`, and `[all.vars](../../base/html/allnames)`; for typical use: `<lm>`, `<glm>`, and `[coplot](../../graphics/html/coplot)`. ### Examples ``` class(fo <- y ~ x1*x2) # "formula" fo typeof(fo) # R internal : "language" terms(fo) environment(fo) environment(as.formula("y ~ x")) environment(as.formula("y ~ x", env = new.env())) ## Create a formula for a model with a large number of variables: xnam <- paste0("x", 1:25) (fmla <- as.formula(paste("y ~ ", paste(xnam, collapse= "+")))) ``` r None `fitted.values` Extract Model Fitted Values -------------------------------------------- ### Description `fitted` is a generic function which extracts fitted values from objects returned by modeling functions. `fitted.values` is an alias for it.
All object classes which are returned by model fitting functions should provide a `fitted` method. (Note that the generic is `fitted` and not `fitted.values`.) Methods can make use of `[napredict](nafns)` methods to compensate for the omission of missing values. The default and `<nls>` methods do. ### Usage ``` fitted(object, ...) fitted.values(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object for which the extraction of model fitted values is meaningful. | | `...` | other arguments. | ### Value Fitted values extracted from the object `object`. ### References Chambers, J. M. and Hastie, T. J. (1992) *Statistical Models in S*. Wadsworth & Brooks/Cole. ### See Also `[coefficients](coef)`, `<glm>`, `<lm>`, `<residuals>`. r None `Chisquare` The (non-central) Chi-Squared Distribution ------------------------------------------------------- ### Description Density, distribution function, quantile function and random generation for the chi-squared (*chi^2*) distribution with `df` degrees of freedom and optional non-centrality parameter `ncp`. ### Usage ``` dchisq(x, df, ncp = 0, log = FALSE) pchisq(q, df, ncp = 0, lower.tail = TRUE, log.p = FALSE) qchisq(p, df, ncp = 0, lower.tail = TRUE, log.p = FALSE) rchisq(n, df, ncp = 0) ``` ### Arguments | | | | --- | --- | | `x, q` | vector of quantiles. | | `p` | vector of probabilities. | | `n` | number of observations. If `length(n) > 1`, the length is taken to be the number required. | | `df` | degrees of freedom (non-negative, but can be non-integer). | | `ncp` | non-centrality parameter (non-negative). | | `log, log.p` | logical; if TRUE, probabilities p are given as log(p). | | `lower.tail` | logical; if TRUE (default), probabilities are *P[X ≤ x]*, otherwise, *P[X > x]*. | ### Details The chi-squared distribution with `df`*= n ≥ 0* degrees of freedom has density *f\_n(x) = 1 / (2^(n/2) Γ(n/2)) x^(n/2-1) e^(-x/2)* for *x > 0*, where *f\_0(x) := \lim\_{n \to 0} f\_n(x) = δ\_0(x)*, a point mass at zero, is not a density function proper, but a “*δ* distribution”. The mean and variance are *n* and *2n*. The non-central chi-squared distribution with `df`*= n* degrees of freedom and non-centrality parameter `ncp` *= λ* has density *f(x) = exp(-λ/2) SUM\_{r=0}^∞ ((λ/2)^r / r!) dchisq(x, df + 2r)* for *x ≥ 0*. For integer *n*, this is the distribution of the sum of squares of *n* normals each with variance one, *λ* being the sum of squares of the normal means; further, *E(X) = n + λ*, *Var(X) = 2(n + 2\*λ)*, and *E((X - E(X))^3) = 8(n + 3\*λ)*. Note that the degrees of freedom `df`*= n*, can be non-integer, and also *n = 0* which is relevant for non-centrality *λ > 0*, see Johnson *et al* (1995, chapter 29). In that (noncentral, zero df) case, the distribution is a mixture of a point mass at *x = 0* (of size `pchisq(0, df=0, ncp=ncp)`) and a continuous part, and `dchisq()` is *not* a density with respect to that mixture measure but rather the limit of the density for *df -> 0*. Note that `ncp` values larger than about 1e5 (and even smaller) may give inaccurate results with many warnings for `pchisq` and `qchisq`. ### Value `dchisq` gives the density, `pchisq` gives the distribution function, `qchisq` gives the quantile function, and `rchisq` generates random deviates. Invalid arguments will result in return value `NaN`, with a warning. The length of the result is determined by `n` for `rchisq`, and is the maximum of the lengths of the numerical arguments for the other functions. 
The numerical arguments other than `n` are recycled to the length of the result. Only the first elements of the logical arguments are used. ### Note Supplying `ncp = 0` uses the algorithm for the non-central distribution, which is not the same algorithm used if `ncp` is omitted. This is to give consistent behaviour in extreme cases with values of `ncp` very near zero. The code for non-zero `ncp` is principally intended to be used for moderate values of `ncp`: it will not be highly accurate, especially in the tails, for large values. ### Source The central cases are computed via the gamma distribution. The non-central `dchisq` and `rchisq` are computed as a Poisson mixture of central chi-squares (Johnson *et al*, 1995, p.436). The non-central `pchisq` is for `ncp < 80` computed from the Poisson mixture of central chi-squares and for larger `ncp` *via* a C translation of Ding, C. G. (1992) Algorithm AS275: Computing the non-central chi-squared distribution function. *Appl.Statist.*, **41** 478–482. which computes the lower tail only (so the upper tail suffers from cancellation and a warning will be given when this is likely to be significant). The non-central `qchisq` is based on inversion of `pchisq`. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) *Continuous Univariate Distributions*, chapters 18 (volume 1) and 29 (volume 2). Wiley, New York. ### See Also [Distributions](distributions) for other standard distributions. A central chi-squared distribution with *n* degrees of freedom is the same as a Gamma distribution with `shape` *a = n/2* and `scale` *s = 2*. Hence, see `[dgamma](gammadist)` for the Gamma distribution. The central chi-squared distribution with 2 d.f. is identical to the exponential distribution with rate 1/2: *χ^2\_2 = Exp(1/2)*, see `[dexp](exponential)`. ### Examples ``` require(graphics) dchisq(1, df = 1:3) pchisq(1, df = 3) pchisq(1, df = 3, ncp = 0:4) # includes the above x <- 1:10 ## Chi-squared(df = 2) is a special exponential distribution all.equal(dchisq(x, df = 2), dexp(x, 1/2)) all.equal(pchisq(x, df = 2), pexp(x, 1/2)) ## non-central RNG -- df = 0 with ncp > 0: Z0 has point mass at 0! Z0 <- rchisq(100, df = 0, ncp = 2.) graphics::stem(Z0) ## visual testing ## do P-P plots for 1000 points at various degrees of freedom L <- 1.2; n <- 1000; pp <- ppoints(n) op <- par(mfrow = c(3,3), mar = c(3,3,1,1)+.1, mgp = c(1.5,.6,0), oma = c(0,0,3,0)) for(df in 2^(4*rnorm(9))) { plot(pp, sort(pchisq(rr <- rchisq(n, df = df, ncp = L), df = df, ncp = L)), ylab = "pchisq(rchisq(.),.)", pch = ".") mtext(paste("df = ", formatC(df, digits = 4)), line = -2, adj = 0.05) abline(0, 1, col = 2) } mtext(expression("P-P plots : Noncentral "* chi^2 *"(n=1000, df=X, ncp= 1.2)"), cex = 1.5, font = 2, outer = TRUE) par(op) ## "analytical" test lam <- seq(0, 100, by = .25) p00 <- pchisq(0, df = 0, ncp = lam) p.0 <- pchisq(1e-300, df = 0, ncp = lam) stopifnot(all.equal(p00, exp(-lam/2)), all.equal(p.0, exp(-lam/2))) ```
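To illustrate the Gamma relationship noted under ‘See Also’ (a small check, not part of the original examples):

```
x <- (1:10)/2
## chi-squared(df = n) is Gamma(shape = n/2, scale = 2), here with n = 5:
all.equal(dchisq(x, df = 5), dgamma(x, shape = 5/2, scale = 2)) # TRUE
all.equal(pchisq(x, df = 5), pgamma(x, shape = 5/2, scale = 2)) # TRUE
```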
r None `embed` Embedding a Time Series -------------------------------- ### Description Embeds the time series `x` into a low-dimensional Euclidean space. ### Usage ``` embed (x, dimension = 1) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector, matrix, or time series. | | `dimension` | a scalar representing the embedding dimension. | ### Details Each row of the resulting matrix consists of sequences `x[t]`, `x[t-1]`, ..., `x[t-dimension+1]`, where `t` is the original index of `x`. If `x` is a matrix, i.e., `x` contains more than one variable, then `x[t]` consists of the `t`th observation on each variable. ### Value A matrix containing the embedded time series `x`. ### Author(s) A. Trapletti, B.D. Ripley ### Examples ``` x <- 1:10 embed (x, 3) ``` r None `fligner.test` Fligner-Killeen Test of Homogeneity of Variances ---------------------------------------------------------------- ### Description Performs a Fligner-Killeen (median) test of the null that the variances in each of the groups (samples) are the same. ### Usage ``` fligner.test(x, ...) ## Default S3 method: fligner.test(x, g, ...) ## S3 method for class 'formula' fligner.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector of data values, or a list of numeric data vectors. | | `g` | a vector or factor object giving the group for the corresponding elements of `x`. Ignored if `x` is a list. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` gives the data values and `rhs` the corresponding groups. | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details If `x` is a list, its elements are taken as the samples to be compared for homogeneity of variances, and hence have to be numeric data vectors. In this case, `g` is ignored, and one can simply use `fligner.test(x)` to perform the test. If the samples are not yet contained in a list, use `fligner.test(list(x, ...))`. Otherwise, `x` must be a numeric data vector, and `g` must be a vector or factor object of the same length as `x` giving the group for the corresponding elements of `x`. The Fligner-Killeen (median) test has been determined in a simulation study as one of the many tests for homogeneity of variances which is most robust against departures from normality, see Conover, Johnson & Johnson (1981). It is a *k*-sample simple linear rank test which uses the ranks of the absolute values of the centered samples and weights *a(i) = qnorm((1 + i/(n+1))/2)*. The version implemented here uses median centering in each of the samples (F-K:med *X^2* in the reference). ### Value A list of class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the Fligner-Killeen:med *X^2* test statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic. | | `p.value` | the p-value of the test. | | `method` | the character string `"Fligner-Killeen test of homogeneity of variances"`. | | `data.name` | a character string giving the names of the data. | ### References William J. Conover, Mark E. Johnson and Myrle M.
Johnson (1981). A comparative study of tests for homogeneity of variances, with applications to the outer continental shelf bidding data. *Technometrics*, **23**, 351–361. doi: [10.2307/1268225](https://doi.org/10.2307/1268225). ### See Also `<ansari.test>` and `<mood.test>` for rank-based two-sample test for a difference in scale parameters; `<var.test>` and `<bartlett.test>` for parametric tests for the homogeneity of variances. ### Examples ``` require(graphics) plot(count ~ spray, data = InsectSprays) fligner.test(InsectSprays$count, InsectSprays$spray) fligner.test(count ~ spray, data = InsectSprays) ## Compare this to bartlett.test() ``` r None `acf2AR` Compute an AR Process Exactly Fitting an ACF ------------------------------------------------------ ### Description Compute an AR process exactly fitting an autocorrelation function. ### Usage ``` acf2AR(acf) ``` ### Arguments | | | | --- | --- | | `acf` | An autocorrelation or autocovariance sequence. | ### Value A matrix, with one row for the computed AR(p) coefficients for `1 <= p <= length(acf)`. ### See Also `[ARMAacf](armaacf)`, `[ar.yw](ar)` which does this from an empirical ACF. ### Examples ``` (Acf <- ARMAacf(c(0.6, 0.3, -0.2))) acf2AR(Acf) ``` r None `approxfun` Interpolation Functions ------------------------------------ ### Description Return a list of points which linearly interpolate given data points, or a function performing the linear (or constant) interpolation. ### Usage ``` approx (x, y = NULL, xout, method = "linear", n = 50, yleft, yright, rule = 1, f = 0, ties = mean, na.rm = TRUE) approxfun(x, y = NULL, method = "linear", yleft, yright, rule = 1, f = 0, ties = mean, na.rm = TRUE) ``` ### Arguments | | | | --- | --- | | `x, y` | numeric vectors giving the coordinates of the points to be interpolated. Alternatively a single plotting structure can be specified: see `[xy.coords](../../grdevices/html/xy.coords)`. | | `xout` | an optional set of numeric values specifying where interpolation is to take place. | | `method` | specifies the interpolation method to be used. Choices are `"linear"` or `"constant"`. | | `n` | If `xout` is not specified, interpolation takes place at `n` equally spaced points spanning the interval [`min(x)`, `max(x)`]. | | `yleft` | the value to be returned when input `x` values are less than `min(x)`. The default is defined by the value of `rule` given below. | | `yright` | the value to be returned when input `x` values are greater than `max(x)`. The default is defined by the value of `rule` given below. | | `rule` | an integer (of length 1 or 2) describing how interpolation is to take place outside the interval [`min(x)`, `max(x)`]. If `rule` is `1` then `NA`s are returned for such points and if it is `2`, the value at the closest data extreme is used. Use, e.g., `rule = 2:1`, if the left and right side extrapolation should differ. | | `f` | for `method = "constant"` a number between 0 and 1 inclusive, indicating a compromise between left- and right-continuous step functions. If `y0` and `y1` are the values to the left and right of the point then the value is `y0` if `f == 0`, `y1` if `f == 1`, and `y0*(1-f)+y1*f` for intermediate values. In this way the result is right-continuous for `f == 0` and left-continuous for `f == 1`, even for non-finite `y` values. | | `ties` | handling of tied `x` values. 
The string `"ordered"` or a function (or the name of a function) taking a single vector argument and returning a single number or a `[list](../../base/html/list)` of both, e.g., `list("ordered", mean)`, see ‘Details’. | | `na.rm` | logical specifying how missing values (`[NA](../../base/html/na)`'s) should be handled. Setting `na.rm=FALSE` will propagate `NA`'s in `y` to the interpolated values, also depending on the `rule` set. Note that in this case, `NA`'s in `x` are invalid, see also the examples. | ### Details The inputs can contain missing values which are deleted (if `na.rm` is true, i.e., by default), so at least two complete `(x, y)` pairs are required (for `method = "linear"`, one otherwise). If there are duplicated (tied) `x` values and `ties` contains a function it is applied to the `y` values for each distinct `x` value to produce `(x,y)` pairs with unique `x`. Useful functions in this context include `[mean](../../base/html/mean)`, `[min](../../base/html/extremes)`, and `[max](../../base/html/extremes)`. If `ties = "ordered"` the `x` values are assumed to be already ordered (and unique) and ties are *not* checked but kept if present. This is the fastest option for large `length(x)`. If `ties` is a `[list](../../base/html/list)` of length two, `ties[[2]]` must be a function to be applied to ties, see above, but if `ties[[1]]` is identical to `"ordered"`, the `x` values are assumed to be sorted and are only checked for ties. Consequently, `ties = list("ordered", mean)` will be slightly more efficient than the default `ties = mean` in such a case. The first `y` value will be used for interpolation to the left and the last one for interpolation to the right. ### Value `approx` returns a list with components `x` and `y`, containing `n` coordinates which interpolate the given data points according to the `method` (and `rule`) desired. The function `approxfun` returns a function performing (linear or constant) interpolation of the given data points. For a given set of `x` values, this function will return the corresponding interpolated values. It uses data stored in its environment when it was created, the details of which are subject to change. ### Warning The value returned by `approxfun` contains references to the code in the current version of **R**: it is not intended to be saved and loaded into a different **R** session. This is safer for **R** >= 3.0.0. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `[spline](splinefun)` and `<splinefun>` for spline interpolation. ### Examples ``` require(graphics) x <- 1:10 y <- rnorm(10) par(mfrow = c(2,1)) plot(x, y, main = "approx(.) 
and approxfun(.)") points(approx(x, y), col = 2, pch = "*") points(approx(x, y, method = "constant"), col = 4, pch = "*") f <- approxfun(x, y) curve(f(x), 0, 11, col = "green2") points(x, y) is.function(fc <- approxfun(x, y, method = "const")) # TRUE curve(fc(x), 0, 10, col = "darkblue", add = TRUE) ## different extrapolation on left and right side : plot(approxfun(x, y, rule = 2:1), 0, 11, col = "tomato", add = TRUE, lty = 3, lwd = 2) ### Treatment of 'NA's -- are kept if na.rm=FALSE : xn <- 1:4 yn <- c(1,NA,3:4) xout <- (1:9)/2 ## Default behavior (na.rm = TRUE): NA's omitted; extrapolation gives NA data.frame(approx(xn,yn, xout)) data.frame(approx(xn,yn, xout, rule = 2))# -> *constant* extrapolation ## New (2019-2020) na.rm = FALSE: NA's are "kept" data.frame(approx(xn,yn, xout, na.rm=FALSE, rule = 2)) data.frame(approx(xn,yn, xout, na.rm=FALSE, rule = 2, method="constant")) ## NA's in x[] are not allowed: stopifnot(inherits( try( approx(yn,yn, na.rm=FALSE) ), "try-error")) ## Give a nice overview of all possibilities rule * method * na.rm : ## ----------------------------- ==== ====== ===== ## extrapolations "N":= NA; "C":= Constant : rules <- list(N=1, C=2, NC=1:2, CN=2:1) methods <- c("constant","linear") ry <- sapply(rules, function(R) { sapply(methods, function(M) sapply(setNames(,c(TRUE,FALSE)), function(na.) approx(xn, yn, xout=xout, method=M, rule=R, na.rm=na.)$y), simplify="array") }, simplify="array") names(dimnames(ry)) <- c("x = ", "na.rm", "method", "rule") dimnames(ry)[[1]] <- format(xout) ftable(aperm(ry, 4:1)) # --> (4 * 2 * 2) x length(xout) = 16 x 9 matrix ## Show treatment of 'ties' : x <- c(2,2:4,4,4,5,5,7,7,7) y <- c(1:6, 5:4, 3:1) (amy <- approx(x, y, xout = x)$y) # warning, can be avoided by specifying 'ties=': op <- options(warn=2) # warnings would be error stopifnot(identical(amy, approx(x, y, xout = x, ties=mean)$y)) (ay <- approx(x, y, xout = x, ties = "ordered")$y) stopifnot(amy == c(1.5,1.5, 3, 5,5,5, 4.5,4.5, 2,2,2), ay == c(2, 2, 3, 6,6,6, 4, 4, 1,1,1)) approx(x, y, xout = x, ties = min)$y approx(x, y, xout = x, ties = max)$y options(op) # revert 'warn'ing level ``` r None `nls` Nonlinear Least Squares ------------------------------ ### Description Determine the nonlinear (weighted) least-squares estimates of the parameters of a nonlinear model. ### Usage ``` nls(formula, data, start, control, algorithm, trace, subset, weights, na.action, model, lower, upper, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a nonlinear model <formula> including variables and parameters. Will be coerced to a formula if necessary. | | `data` | an optional data frame in which to evaluate the variables in `formula` and `weights`. Can also be a list or an environment, but not a matrix. | | `start` | a named list or named numeric vector of starting estimates. When `start` is missing (and `formula` is not a self-starting model, see `[selfStart](selfstart)`), a very cheap guess for `start` is tried (if `algorithm != "plinear"`). | | `control` | an optional `[list](../../base/html/list)` of control settings. See `<nls.control>` for the names of the settable control values and their effect. | | `algorithm` | character string specifying the algorithm to use. The default algorithm is a Gauss-Newton algorithm. Other possible values are `"plinear"` for the Golub-Pereyra algorithm for partially linear least-squares models and `"port"` for the ‘nl2sol’ algorithm from the Port library – see the references. Can be abbreviated. 
| | `trace` | logical value indicating if a trace of the iteration progress should be printed. Default is `FALSE`. If `TRUE` the residual (weighted) sum-of-squares, the convergence criterion and the parameter values are printed at the conclusion of each iteration. Note that `[format](../../base/html/format)()` is used, so these mostly depend on `[getOption](../../base/html/options)("digits")`. When the `"plinear"` algorithm is used, the conditional estimates of the linear parameters are printed after the nonlinear parameters. When the `"port"` algorithm is used the objective function value printed is half the residual (weighted) sum-of-squares. | | `subset` | an optional vector specifying a subset of observations to be used in the fitting process. | | `weights` | an optional numeric vector of (fixed) weights. When present, the objective function is weighted least squares. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is set by the `na.action` setting of `[options](../../base/html/options)`, and is `<na.fail>` if that is unset. The ‘factory-fresh’ default is `[na.omit](na.fail)`. Value `[na.exclude](na.fail)` can be useful. | | `model` | logical. If true, the model frame is returned as part of the object. Default is `FALSE`. | | `lower, upper` | vectors of lower and upper bounds, replicated to be as long as `start`. If unspecified, all parameters are assumed to be unconstrained. Bounds can only be used with the `"port"` algorithm. They are ignored, with a warning, if given for other algorithms. | | `...` | Additional optional arguments. None are used at present. | ### Details An `nls` object is a type of fitted model object. It has methods for the generic functions `<anova>`, `<coef>`, `<confint>`, `<deviance>`, `<df.residual>`, `[fitted](fitted.values)`, `<formula>`, `[logLik](loglik)`, `<predict>`, `[print](../../base/html/print)`, `<profile>`, `<residuals>`, `[summary](../../base/html/summary)`, `<vcov>` and `<weights>`. Variables in `formula` (and `weights` if not missing) are looked for first in `data`, then the environment of `formula` and finally along the search path. Functions in `formula` are searched for first in the environment of `formula` and then along the search path. Arguments `subset` and `na.action` are supported only when all the variables in the formula taken from `data` are of the same length: other cases give a warning. Note that the `<anova>` method does not check that the models are nested: this cannot easily be done automatically, so use with care. ### Value A list of | | | | --- | --- | | `m` | an `nlsModel` object incorporating the model. | | `data` | the expression that was passed to `nls` as the data argument. The actual data values are present in the `[environment](../../base/html/environment)` of the `m` components, e.g., `environment(m$conv)`. | | `call` | the matched call with several components, notably `algorithm`. | | `na.action` | the `"na.action"` attribute (if any) of the model frame. | | `dataClasses` | the `"dataClasses"` attribute (if any) of the `"terms"` attribute of the model frame. | | `model` | if `model = TRUE`, the model frame. | | `weights` | if `weights` is supplied, the weights. | | `convInfo` | a list with convergence information. | | `control` | the control `list` used, see the `control` argument. | | `convergence, message` | for an `algorithm = "port"` fit only, a convergence code (`0` for convergence) and message. 
To use these is *deprecated*, as they are available from `convInfo` now. | ### Warning **The default settings of `nls` generally fail on artificial “zero-residual” data problems.** The `nls` function uses a relative-offset convergence criterion that compares the numerical imprecision at the current parameter estimates to the residual sum-of-squares. This performs well on data of the form *y = f(x, θ) + eps* (with `var(eps) > 0`). It fails to indicate convergence on data of the form *y = f(x, θ)* because the criterion amounts to comparing two components of the round-off error. To avoid a zero-divide in computing the convergence testing value, a positive constant `scaleOffset` should be added to the denominator sum-of-squares; it is set in `control`, as in the example below; this does not yet apply to `algorithm = "port"`. The `algorithm = "port"` code appears unfinished, and does not even check that the starting value is within the bounds. Use with caution, especially where bounds are supplied. ### Note Setting `warnOnly = TRUE` in the `control` argument (see `<nls.control>`) returns a non-converged object (since **R** version 2.5.0) which might be useful for further convergence analysis, *but **not** for inference*. ### Author(s) Douglas M. Bates and Saikat DebRoy: David M. Gay for the Fortran code used by `algorithm = "port"`. ### References Bates, D. M. and Watts, D. G. (1988) *Nonlinear Regression Analysis and Its Applications*, Wiley Bates, D. M. and Chambers, J. M. (1992) *Nonlinear models.* Chapter 10 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. <https://www.netlib.org/port/> for the Port library documentation. ### See Also `<summary.nls>`, `<predict.nls>`, `<profile.nls>`. Self starting models (with ‘automatic initial values’): `[selfStart](selfstart)`. ### Examples ``` require(graphics) DNase1 <- subset(DNase, Run == 1) ## using a selfStart model fm1DNase1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1) summary(fm1DNase1) ## the coefficients only: coef(fm1DNase1) ## including their SE, etc: coef(summary(fm1DNase1)) ## using conditional linearity fm2DNase1 <- nls(density ~ 1/(1 + exp((xmid - log(conc))/scal)), data = DNase1, start = list(xmid = 0, scal = 1), algorithm = "plinear") summary(fm2DNase1) ## without conditional linearity fm3DNase1 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)), data = DNase1, start = list(Asym = 3, xmid = 0, scal = 1)) summary(fm3DNase1) ## using Port's nl2sol algorithm fm4DNase1 <- nls(density ~ Asym/(1 + exp((xmid - log(conc))/scal)), data = DNase1, start = list(Asym = 3, xmid = 0, scal = 1), algorithm = "port") summary(fm4DNase1) ## weighted nonlinear regression Treated <- Puromycin[Puromycin$state == "treated", ] weighted.MM <- function(resp, conc, Vm, K) { ## Purpose: exactly as white book p. 
451 -- RHS for nls() ## Weighted version of Michaelis-Menten model ## ---------------------------------------------------------- ## Arguments: 'y', 'x' and the two parameters (see book) ## ---------------------------------------------------------- ## Author: Martin Maechler, Date: 23 Mar 2001 pred <- (Vm * conc)/(K + conc) (resp - pred) / sqrt(pred) } Pur.wt <- nls( ~ weighted.MM(rate, conc, Vm, K), data = Treated, start = list(Vm = 200, K = 0.1)) summary(Pur.wt) ## Passing arguments using a list that can not be coerced to a data.frame lisTreat <- with(Treated, list(conc1 = conc[1], conc.1 = conc[-1], rate = rate)) weighted.MM1 <- function(resp, conc1, conc.1, Vm, K) { conc <- c(conc1, conc.1) pred <- (Vm * conc)/(K + conc) (resp - pred) / sqrt(pred) } Pur.wt1 <- nls( ~ weighted.MM1(rate, conc1, conc.1, Vm, K), data = lisTreat, start = list(Vm = 200, K = 0.1)) stopifnot(all.equal(coef(Pur.wt), coef(Pur.wt1))) ## Chambers and Hastie (1992) Statistical Models in S (p. 537): ## If the value of the right side [of formula] has an attribute called ## 'gradient' this should be a matrix with the number of rows equal ## to the length of the response and one column for each parameter. weighted.MM.grad <- function(resp, conc1, conc.1, Vm, K) { conc <- c(conc1, conc.1) K.conc <- K+conc dy.dV <- conc/K.conc dy.dK <- -Vm*dy.dV/K.conc pred <- Vm*dy.dV pred.5 <- sqrt(pred) dev <- (resp - pred) / pred.5 Ddev <- -0.5*(resp+pred)/(pred.5*pred) attr(dev, "gradient") <- Ddev * cbind(Vm = dy.dV, K = dy.dK) dev } Pur.wt.grad <- nls( ~ weighted.MM.grad(rate, conc1, conc.1, Vm, K), data = lisTreat, start = list(Vm = 200, K = 0.1)) rbind(coef(Pur.wt), coef(Pur.wt1), coef(Pur.wt.grad)) ## In this example, there seems no advantage to providing the gradient. ## In other cases, there might be. ## The two examples below show that you can fit a model to ## artificial data with noise but not to artificial data ## without noise. x <- 1:10 y <- 2*x + 3 # perfect fit ## terminates in an error, because convergence cannot be confirmed: try(nls(y ~ a + b*x, start = list(a = 0.12345, b = 0.54321))) ## adjusting the convergence test by adding 'scaleOffset' to its denominator RSS: nls(y ~ a + b*x, start = list(a = 0.12345, b = 0.54321), control = list(scaleOffset = 1, printEval=TRUE)) ## Alternatively jittering the "too exact" values, slightly: set.seed(27) yeps <- y + rnorm(length(y), sd = 0.01) # added noise nls(yeps ~ a + b*x, start = list(a = 0.12345, b = 0.54321)) ## the nls() internal cheap guess for starting values can be sufficient: x <- -(1:100)/10 y <- 100 + 10 * exp(x / 2) + rnorm(x)/10 nlmod <- nls(y ~ Const + A * exp(B * x)) plot(x,y, main = "nls(*), data, true function and fit, n=100") curve(100 + 10 * exp(x / 2), col = 4, add = TRUE) lines(x, predict(nlmod), col = 2) ## Here, requiring close convergence, you need to use more accurate numerical ## differentiation; this gives Error: "step factor .. reduced below 'minFactor' .." options(digits = 10) # more accuracy for 'trace' ## IGNORE_RDIFF_BEGIN try(nlm1 <- update(nlmod, control = list(tol = 1e-7))) # where central diff. work here: (nlm2 <- update(nlmod, control = list(tol = 8e-8, nDcentral=TRUE), trace=TRUE)) ## --> convergence tolerance 4.997e-8 (in 11 iter.) ## IGNORE_RDIFF_END ## The muscle dataset in MASS is from an experiment on muscle ## contraction on 21 animals. The observed variables are Strip ## (identifier of muscle), Conc (Cacl concentration) and Length ## (resulting length of muscle section). 
utils::data(muscle, package = "MASS") ## The non linear model considered is ## Length = alpha + beta*exp(-Conc/theta) + error ## where theta is constant but alpha and beta may vary with Strip. with(muscle, table(Strip)) # 2, 3 or 4 obs per strip ## We first use the plinear algorithm to fit an overall model, ## ignoring that alpha and beta might vary with Strip. musc.1 <- nls(Length ~ cbind(1, exp(-Conc/th)), muscle, start = list(th = 1), algorithm = "plinear") ## IGNORE_RDIFF_BEGIN summary(musc.1) ## IGNORE_RDIFF_END ## Then we use nls' indexing feature for parameters in non-linear ## models to use the conventional algorithm to fit a model in which ## alpha and beta vary with Strip. The starting values are provided ## by the previously fitted model. ## Note that with indexed parameters, the starting values must be ## given in a list (with names): b <- coef(musc.1) musc.2 <- nls(Length ~ a[Strip] + b[Strip]*exp(-Conc/th), muscle, start = list(a = rep(b[2], 21), b = rep(b[3], 21), th = b[1])) ## IGNORE_RDIFF_BEGIN summary(musc.2) ## IGNORE_RDIFF_END ```
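A small sketch (with made-up data, not from the original examples) of supplying bounds via `algorithm = "port"`; the starting values are deliberately chosen inside the bounds since, as warned above, the `"port"` code does not check this:

```
set.seed(1)
x <- 1:20
y <- 5 * exp(-x/10) + rnorm(20, sd = 0.05)
fm <- nls(y ~ A * exp(-x/tau), start = list(A = 4, tau = 8),
          algorithm = "port",
          lower = c(A = 0.1, tau = 1), upper = c(A = 10, tau = 50))
coef(fm)
```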
r None `SSD` SSD Matrix and Estimated Variance Matrix in Multivariate Models ---------------------------------------------------------------------- ### Description Functions to compute matrix of residual sums of squares and products, or the estimated variance matrix for multivariate linear models. ### Usage ``` # S3 method for class 'mlm' SSD(object, ...) # S3 methods for class 'SSD' and 'mlm' estVar(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | `object` of class `"mlm"`, or `"SSD"` in the case of `estVar`. | | `...` | Unused | ### Value `SSD()` returns a list of class `"SSD"` containing the following components | | | | --- | --- | | `SSD` | The residual sums of squares and products matrix | | `df` | Degrees of freedom | | `call` | Copied from `object` | `estVar` returns a matrix with the estimated variances and covariances. ### See Also `<mauchly.test>`, `<anova.mlm>` ### Examples ``` # Lifted from Baron+Li: # "Notes on the use of R for psychology experiments and questionnaires" # Maxwell and Delaney, p. 497 reacttime <- matrix(c( 420, 420, 480, 480, 600, 780, 420, 480, 480, 360, 480, 600, 480, 480, 540, 660, 780, 780, 420, 540, 540, 480, 780, 900, 540, 660, 540, 480, 660, 720, 360, 420, 360, 360, 480, 540, 480, 480, 600, 540, 720, 840, 480, 600, 660, 540, 720, 900, 540, 600, 540, 480, 720, 780, 480, 420, 540, 540, 660, 780), ncol = 6, byrow = TRUE, dimnames = list(subj = 1:10, cond = c("deg0NA", "deg4NA", "deg8NA", "deg0NP", "deg4NP", "deg8NP"))) mlmfit <- lm(reacttime ~ 1) SSD(mlmfit) estVar(mlmfit) ``` r None `plot.density` Plot Method for Kernel Density Estimation --------------------------------------------------------- ### Description The `plot` method for density objects. ### Usage ``` ## S3 method for class 'density' plot(x, main = NULL, xlab = NULL, ylab = "Density", type = "l", zero.line = TRUE, ...) ``` ### Arguments | | | | --- | --- | | `x` | a `"density"` object. | | `main, xlab, ylab, type` | plotting parameters with useful defaults. | | `...` | further plotting parameters. | | `zero.line` | logical; if `TRUE`, add a base line at *y = 0* | ### Value None. ### See Also `<density>`. r None `median` Median Value ---------------------- ### Description Compute the sample median. ### Usage ``` median(x, na.rm = FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object for which a method has been defined, or a numeric vector containing the values whose median is to be computed. | | `na.rm` | a logical value indicating whether `NA` values should be stripped before the computation proceeds. | | `...` | potentially further arguments for methods; not used in the default method. | ### Details This is a generic function for which methods can be written. However, the default method makes use of `is.na`, `sort` and `mean` from package base all of which are generic, and so the default method will work for most classes (e.g., `"[Date](../../base/html/dates)"`) for which a median is a reasonable concept. ### Value The default method returns a length-one object of the same type as `x`, except when `x` is logical or integer of even length, when the result will be double. If there are no values or if `na.rm = FALSE` and there are `NA` values the result is `NA` of the same type as `x` (or more generally the result of `x[FALSE][NA]`). ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) *The New S Language*. Wadsworth & Brooks/Cole. ### See Also `<quantile>` for general quantiles. 
### Examples ``` median(1:4) # = 2.5 [even number] median(c(1:3, 100, 1000)) # = 3 [odd, robust] ``` r None `ave` Group Averages Over Level Combinations of Factors -------------------------------------------------------- ### Description Subsets of `x[]` are averaged, where each subset consists of those observations with the same factor levels. ### Usage ``` ave(x, ..., FUN = mean) ``` ### Arguments | | | | --- | --- | | `x` | A numeric. | | `...` | Grouping variables, typically factors, all of the same `length` as `x`. | | `FUN` | Function to apply for each factor level combination. | ### Value A numeric vector, say `y`, of length `length(x)`. If `...` is `g1, g2`, e.g., `y[i]` is equal to `FUN(x[j])` taken over all `j` with `g1[j] == g1[i]` and `g2[j] == g2[i]`. ### See Also `[mean](../../base/html/mean)`, `<median>`. ### Examples ``` require(graphics) ave(1:3) # no grouping -> grand mean attach(warpbreaks) ave(breaks, wool) ave(breaks, tension) ave(breaks, tension, FUN = function(x) mean(x, trim = 0.1)) plot(breaks, main = "ave( Warpbreaks ) for wool x tension combinations") lines(ave(breaks, wool, tension ), type = "s", col = "blue") lines(ave(breaks, wool, tension, FUN = median), type = "s", col = "green") legend(40, 70, c("mean", "median"), lty = 1, col = c("blue","green"), bg = "gray90") detach() ``` r None `smooth` Tukey's (Running Median) Smoothing -------------------------------------------- ### Description Tukey's smoothers, *3RS3R*, *3RSS*, *3R*, etc. ### Usage ``` smooth(x, kind = c("3RS3R", "3RSS", "3RSR", "3R", "3", "S"), twiceit = FALSE, endrule = c("Tukey", "copy"), do.ends = FALSE) ``` ### Arguments | | | | --- | --- | | `x` | a vector or time series | | `kind` | a character string indicating the kind of smoother required; defaults to `"3RS3R"`. | | `twiceit` | logical, indicating if the result should be ‘twiced’. Twicing a smoother *S(y)* means *S(y) + S(y - S(y))*, i.e., adding smoothed residuals to the smoothed values. This decreases bias (while increasing variance). | | `endrule` | a character string indicating the rule for smoothing at the boundary. Either `"Tukey"` (default) or `"copy"`. | | `do.ends` | logical, indicating if the 3-splitting of ties should also happen at the boundaries (ends). This is only used for `kind = "S"`. | ### Details *`3`* is Tukey's short notation for running `<median>`s of length **3**, *`3R`* stands for **R**epeated *`3`* until convergence, and *`S`* for **S**plitting of horizontal stretches of length 2 or 3. Hence, *`3RS3R`* is a concatenation of `3R`, `S` and `3R`, *`3RSS`* similarly, whereas *`3RSR`* means first `3R` and then `(S and 3)` **R**epeated until convergence – which can be bad. ### Value An object of class `"tukeysmooth"` (which has `print` and `summary` methods) and is a vector or time series containing the smoothed values with additional attributes. ### Note S and S-PLUS use a different (somewhat better) Tukey smoother in `smooth(*)`. Note that there are other smoothing methods which provide rather better results. Tukey's smoothers were designed for hand calculations and may be used mainly for didactical purposes. Since **R** version 1.2, `smooth` *does* really implement Tukey's end-point rule correctly (see argument `endrule`). `kind = "3RSR"` was the default up to **R**-1.1, but it can have very bad properties; see the examples. Note that repeated application of `smooth(*)` *does* smooth more, for the `"3RS*"` kinds. ### References Tukey, J. W. (1977). *Exploratory Data Analysis*, Reading, Massachusetts: Addison-Wesley.
### See Also `<runmed>` for running medians; `<lowess>` and `<loess>`; `<supsmu>` and `<smooth.spline>`. ### Examples ``` require(graphics) ## see also demo(smooth) ! x1 <- c(4, 1, 3, 6, 6, 4, 1, 6, 2, 4, 2) # very artificial (x3R <- smooth(x1, "3R")) # 2 iterations of "3" smooth(x3R, kind = "S") sm.3RS <- function(x, ...) smooth(smooth(x, "3R", ...), "S", ...) y <- c(1, 1, 19:1) plot(y, main = "misbehaviour of \"3RSR\"", col.main = 3) lines(sm.3RS(y)) lines(smooth(y)) lines(smooth(y, "3RSR"), col = 3, lwd = 2) # the horror x <- c(8:10, 10, 0, 0, 9, 9) plot(x, main = "breakdown of 3R and S and hence 3RSS") matlines(cbind(smooth(x, "3R"), smooth(x, "S"), smooth(x, "3RSS"), smooth(x))) presidents[is.na(presidents)] <- 0 # silly summary(sm3 <- smooth(presidents, "3R")) summary(sm2 <- smooth(presidents,"3RSS")) summary(sm <- smooth(presidents)) all.equal(c(sm2), c(smooth(smooth(sm3, "S"), "S"))) # 3RSS === 3R S S all.equal(c(sm), c(smooth(smooth(sm3, "S"), "3R"))) # 3RS3R === 3R S 3R plot(presidents, main = "smooth(presidents0, *) : 3R and default 3RS3R") lines(sm3, col = 3, lwd = 1.5) lines(sm, col = 2, lwd = 1.25) ``` r None `binom.test` Exact Binomial Test --------------------------------- ### Description Performs an exact test of a simple null hypothesis about the probability of success in a Bernoulli experiment. ### Usage ``` binom.test(x, n, p = 0.5, alternative = c("two.sided", "less", "greater"), conf.level = 0.95) ``` ### Arguments | | | | --- | --- | | `x` | number of successes, or a vector of length 2 giving the numbers of successes and failures, respectively. | | `n` | number of trials; ignored if `x` has length 2. | | `p` | hypothesized probability of success. | | `alternative` | indicates the alternative hypothesis and must be one of `"two.sided"`, `"greater"` or `"less"`. You can specify just the initial letter. | | `conf.level` | confidence level for the returned confidence interval. | ### Details Confidence intervals are obtained by a procedure first given in Clopper and Pearson (1934). This guarantees that the confidence level is at least `conf.level`, but in general does not give the shortest-length confidence intervals. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the number of successes. | | `parameter` | the number of trials. | | `p.value` | the p-value of the test. | | `conf.int` | a confidence interval for the probability of success. | | `estimate` | the estimated probability of success. | | `null.value` | the probability of success under the null, `p`. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | the character string `"Exact binomial test"`. | | `data.name` | a character string giving the names of the data. | ### References Clopper, C. J. & Pearson, E. S. (1934). The use of confidence or fiducial limits illustrated in the case of the binomial. *Biometrika*, **26**, 404–413. doi: [10.2307/2331986](https://doi.org/10.2307/2331986). William J. Conover (1971), *Practical nonparametric statistics*. New York: John Wiley & Sons. Pages 97–104. Myles Hollander & Douglas A. Wolfe (1973), *Nonparametric Statistical Methods.* New York: John Wiley & Sons. Pages 15–22. ### See Also `<prop.test>` for a general (approximate) test for equal or given proportions. ### Examples ``` ## Conover (1971), p. 97f. 
## Under (the assumption of) simple Mendelian inheritance, a cross ## between plants of two particular genotypes produces progeny 1/4 of ## which are "dwarf" and 3/4 of which are "giant", respectively. ## In an experiment to determine if this assumption is reasonable, a ## cross results in progeny having 243 dwarf and 682 giant plants. ## If "giant" is taken as success, the null hypothesis is that p = ## 3/4 and the alternative that p != 3/4. binom.test(c(682, 243), p = 3/4) binom.test(682, 682 + 243, p = 3/4) # The same. ## => Data are in agreement with the null hypothesis. ``` r None `wilcox.test` Wilcoxon Rank Sum and Signed Rank Tests ------------------------------------------------------ ### Description Performs one- and two-sample Wilcoxon tests on vectors of data; the latter is also known as ‘Mann-Whitney’ test. ### Usage ``` wilcox.test(x, ...) ## Default S3 method: wilcox.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, exact = NULL, correct = TRUE, conf.int = FALSE, conf.level = 0.95, tol.root = 1e-4, digits.rank = Inf, ...) ## S3 method for class 'formula' wilcox.test(formula, data, subset, na.action, ...) ``` ### Arguments | | | | --- | --- | | `x` | numeric vector of data values. Non-finite (e.g., infinite or missing) values will be omitted. | | `y` | an optional numeric vector of data values: as with `x` non-finite values will be omitted. | | `alternative` | a character string specifying the alternative hypothesis, must be one of `"two.sided"` (default), `"greater"` or `"less"`. You can specify just the initial letter. | | `mu` | a number specifying an optional parameter used to form the null hypothesis. See ‘Details’. | | `paired` | a logical indicating whether you want a paired test. | | `exact` | a logical indicating whether an exact p-value should be computed. | | `correct` | a logical indicating whether to apply continuity correction in the normal approximation for the p-value. | | `conf.int` | a logical indicating whether a confidence interval should be computed. | | `conf.level` | confidence level of the interval. | | `tol.root` | (when `conf.int` is true:) a positive numeric tolerance, used in `<uniroot>(*, tol=tol.root)` calls. | | `digits.rank` | a number; if finite, `[rank](../../base/html/rank)([signif](../../base/html/round)(r, digits.rank))` will be used to compute ranks for the test statistic instead of (the default) `rank(r)`. | | `formula` | a formula of the form `lhs ~ rhs` where `lhs` is a numeric variable giving the data values and `rhs` either `1` for a one-sample or paired test or a factor with two levels giving the corresponding groups. If `lhs` is of class `"Pair"` and `rhs` is `1`, a paired test is done | | `data` | an optional matrix or data frame (or similar: see `<model.frame>`) containing the variables in the formula `formula`. By default the variables are taken from `environment(formula)`. | | `subset` | an optional vector specifying a subset of observations to be used. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. Defaults to `getOption("na.action")`. | | `...` | further arguments to be passed to or from methods. | ### Details The formula interface is only applicable for the 2-sample tests. If only `x` is given, or if both `x` and `y` are given and `paired` is `TRUE`, a Wilcoxon signed rank test of the null that the distribution of `x` (in the one sample case) or of `x - y` (in the paired two sample case) is symmetric about `mu` is performed. 
Otherwise, if both `x` and `y` are given and `paired` is `FALSE`, a Wilcoxon rank sum test (equivalent to the Mann-Whitney test: see the Note) is carried out. In this case, the null hypothesis is that the distributions of `x` and `y` differ by a location shift of `mu` and the alternative is that they differ by some other location shift (and the one-sided alternative `"greater"` is that `x` is shifted to the right of `y`). By default (if `exact` is not specified), an exact p-value is computed if the samples contain fewer than 50 finite values and there are no ties. Otherwise, a normal approximation is used. For stability reasons, it may be advisable to use rounded data or to set `digits.rank = 7`, say, such that determination of ties does not depend on very small numeric differences (see the example). Optionally (if argument `conf.int` is true), a nonparametric confidence interval and an estimator for the pseudomedian (one-sample case) or for the difference of the location parameters `x-y` is computed. (The pseudomedian of a distribution *F* is the median of the distribution of *(u+v)/2*, where *u* and *v* are independent, each with distribution *F*. If *F* is symmetric, then the pseudomedian and median coincide. See Hollander & Wolfe (1973), page 34.) Note that in the two-sample case the estimator for the difference in location parameters does **not** estimate the difference in medians (a common misconception) but rather the median of the difference between a sample from `x` and a sample from `y`. If exact p-values are available, an exact confidence interval is obtained by the algorithm described in Bauer (1972), and the Hodges-Lehmann estimator is employed. Otherwise, the returned confidence interval and point estimate are based on normal approximations. These are continuity-corrected for the interval but *not* the estimate (as the correction depends on the `alternative`). With small samples it may not be possible to achieve very high confidence interval coverages. If this happens, a warning is given and an interval with lower coverage is substituted. When `x` (and `y`, if applicable) are valid, the function now always returns a result, even in the `conf.int = TRUE` case when a confidence interval cannot be computed; in that case the interval boundaries, and sometimes the `estimate`, contain `[NaN](../../base/html/is.finite)`. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic with a name describing it. | | `parameter` | the parameter(s) for the exact distribution of the test statistic. | | `p.value` | the p-value for the test. | | `null.value` | the location parameter `mu`. | | `alternative` | a character string describing the alternative hypothesis. | | `method` | the type of test applied. | | `data.name` | a character string giving the names of the data. | | `conf.int` | a confidence interval for the location parameter. (Only present if argument `conf.int = TRUE`.) | | `estimate` | an estimate of the location parameter. (Only present if argument `conf.int = TRUE`.) | ### Warning This function can use large amounts of memory and stack (and even crash **R** if the stack limit is exceeded) if `exact = TRUE` and one sample is large (several thousand values or more). ### Note The literature is not unanimous about the definitions of the Wilcoxon rank sum and Mann-Whitney tests.
The two most common definitions correspond to the sum of the ranks of the first sample with the minimum value subtracted or not: **R** subtracts and S-PLUS does not, giving a value which is larger by *m(m+1)/2* for a first sample of size *m*. (It seems Wilcoxon's original paper used the unadjusted sum of the ranks but subsequent tables subtracted the minimum.) **R**'s value can also be computed as the number of all pairs `(x[i], y[j])` for which `y[j]` is not greater than `x[i]`, the most common definition of the Mann-Whitney test. ### References David F. Bauer (1972). Constructing confidence sets using rank statistics. *Journal of the American Statistical Association* **67**, 687–690. doi: [10.1080/01621459.1972.10481279](https://doi.org/10.1080/01621459.1972.10481279). Myles Hollander and Douglas A. Wolfe (1973). *Nonparametric Statistical Methods*. New York: John Wiley & Sons. Pages 27–33 (one-sample), 68–75 (two-sample). Or second edition (1999). ### See Also `[psignrank](signrank)`, `[pwilcox](wilcoxon)`. `[wilcox\_test](../../coin/html/locationtests)` in package [coin](https://CRAN.R-project.org/package=coin) for exact, asymptotic and Monte Carlo *conditional* p-values, including in the presence of ties. `<kruskal.test>` for testing homogeneity in location parameters in the case of two or more samples; `<t.test>` for an alternative under normality assumptions [or large samples] ### Examples ``` require(graphics) ## One-sample test. ## Hollander & Wolfe (1973), 29f. ## Hamilton depression scale factor measurements in 9 patients with ## mixed anxiety and depression, taken at the first (x) and second ## (y) visit after initiation of a therapy (administration of a ## tranquilizer). x <- c(1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30) y <- c(0.878, 0.647, 0.598, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29) wilcox.test(x, y, paired = TRUE, alternative = "greater") wilcox.test(y - x, alternative = "less") # The same. wilcox.test(y - x, alternative = "less", exact = FALSE, correct = FALSE) # H&W large sample # approximation ## Formula interface to one-sample and paired tests depression <- data.frame(first = x, second = y, change = y - x) wilcox.test(change ~ 1, data = depression) wilcox.test(Pair(first, second) ~ 1, data = depression) ## Two-sample test. ## Hollander & Wolfe (1973), 69f. ## Permeability constants of the human chorioamnion (a placental ## membrane) at term (x) and between 12 to 26 weeks gestational ## age (y). The alternative of interest is greater permeability ## of the human chorioamnion for the term pregnancy. x <- c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64, 0.73, 1.46) y <- c(1.15, 0.88, 0.90, 0.74, 1.21) wilcox.test(x, y, alternative = "g") # greater wilcox.test(x, y, alternative = "greater", exact = FALSE, correct = FALSE) # H&W large sample # approximation wilcox.test(rnorm(10), rnorm(10, 2), conf.int = TRUE) ## Formula interface. boxplot(Ozone ~ Month, data = airquality) wilcox.test(Ozone ~ Month, data = airquality, subset = Month %in% c(5, 8)) ## accuracy in ties determination via 'digits.rank': wilcox.test( 4:2, 3:1, paired=TRUE) # Warning: cannot compute exact p-value with ties wilcox.test((4:2)/10, (3:1)/10, paired=TRUE) # no ties => *no* warning wilcox.test((4:2)/10, (3:1)/10, paired=TRUE, digits.rank = 9) # same ties as (4:2, 3:1) ```
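The rank-sum/Mann-Whitney relationship described in the Note can be checked directly; the following sketch (an editorial addition, reusing the permeability data from the examples above) counts the pairs:
```
x <- c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64, 0.73, 1.46)
y <- c(1.15, 0.88, 0.90, 0.74, 1.21)
W <- wilcox.test(x, y)$statistic
## number of pairs (x[i], y[j]) with y[j] not greater than x[i]
U <- sum(outer(x, y, ">="))
stopifnot(W == U)  # both are 35 here
```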
r None `reorder.dendrogram` Reorder a Dendrogram ------------------------------------------ ### Description A method for the generic function `[reorder](reorder.factor)`. There are many different orderings of a dendrogram that are consistent with the structure imposed. This function takes a dendrogram and a vector of values and reorders the dendrogram in the order of the supplied vector, maintaining the constraints on the dendrogram. ### Usage ``` ## S3 method for class 'dendrogram' reorder(x, wts, agglo.FUN = sum, ...) ``` ### Arguments | | | | --- | --- | | `x` | the (dendrogram) object to be reordered | | `wts` | numeric weights (arbitrary values) for reordering. | | `agglo.FUN` | a function for weights agglomeration, see below. | | `...` | additional arguments | ### Details Using the weights `wts`, the leaves of the dendrogram are reordered so as to be in an order as consistent as possible with the weights. At each node, the branches are ordered in order of increasing weight, where the weight of a branch is defined as *f(w\_j)* where *f* is `agglo.FUN` and *w\_j* is the weight of the *j*-th sub-branch. ### Value A dendrogram where each node has a further attribute `value` with its corresponding weight. ### Author(s) R. Gentleman and M. Maechler ### See Also `[reorder](reorder.factor)`. `[rev.dendrogram](dendrogram)` which simply reverses the nodes' order; `<heatmap>`, `<cophenetic>`. ### Examples ``` require(graphics) set.seed(123) x <- rnorm(10) hc <- hclust(dist(x)) dd <- as.dendrogram(hc) dd.reorder <- reorder(dd, 10:1) plot(dd, main = "random dendrogram 'dd'") op <- par(mfcol = 1:2) plot(dd.reorder, main = "reorder(dd, 10:1)") plot(reorder(dd, 10:1, agglo.FUN = mean), main = "reorder(dd, 10:1, mean)") par(op) ``` r None `factanal` Factor Analysis --------------------------- ### Description Perform maximum-likelihood factor analysis on a covariance matrix or data matrix. ### Usage ``` factanal(x, factors, data = NULL, covmat = NULL, n.obs = NA, subset, na.action, start = NULL, scores = c("none", "regression", "Bartlett"), rotation = "varimax", control = NULL, ...) ``` ### Arguments | | | | --- | --- | | `x` | A formula or a numeric matrix or an object that can be coerced to a numeric matrix. | | `factors` | The number of factors to be fitted. | | `data` | An optional data frame (or similar: see `<model.frame>`), used only if `x` is a formula. By default the variables are taken from `environment(formula)`. | | `covmat` | A covariance matrix, or a covariance list as returned by `<cov.wt>`. Of course, correlation matrices are covariance matrices. | | `n.obs` | The number of observations, used if `covmat` is a covariance matrix. | | `subset` | A specification of the cases to be used, if `x` is used as a matrix or formula. | | `na.action` | The `na.action` to be used if `x` is used as a formula. | | `start` | `NULL` or a matrix of starting values, each column giving an initial set of uniquenesses. | | `scores` | Type of scores to produce, if any. The default is none, `"regression"` gives Thompson's scores, `"Bartlett"` gives Bartlett's weighted least-squares scores. Partial matching allows these names to be abbreviated. | | `rotation` | character. `"none"` or the name of a function to be used to rotate the factors: it will be called with first argument the loadings matrix, and should return a list with component `loadings` giving the rotated loadings, or just the rotated loadings. | | `control` | A list of control values: `nstart`: the number of starting values to be tried if `start = NULL`.
Default 1. `trace`: logical; output tracing information? Default `FALSE`. `lower`: the lower bound for uniquenesses during optimization; should be > 0. Default 0.005. `opt`: a list of control values to be passed to `<optim>`'s `control` argument. `rotate`: a list of additional arguments for the rotation function. | | `...` | Components of `control` can also be supplied as named arguments to `factanal`. | ### Details The factor analysis model is *x = Λ f + e* for a *p*–element vector *x*, a *p x k* matrix *Λ* of *loadings*, a *k*–element vector *f* of *scores* and a *p*–element vector *e* of errors. None of the components other than *x* is observed, but the major restriction is that the scores be uncorrelated and of unit variance, and that the errors be independent with variances *Ψ*, the *uniquenesses*. It is also common to scale the observed variables to unit variance, and this is done in this function. Thus factor analysis is in essence a model for the correlation matrix of *x*, *Σ = Λ Λ' + Ψ* There is still some indeterminacy in the model, for it is unchanged if *Λ* is replaced by *G Λ* for any orthogonal matrix *G*. Such matrices *G* are known as *rotations* (although the term is applied also to non-orthogonal invertible matrices). If `covmat` is supplied it is used. Otherwise `x` is used if it is a matrix, or a formula `x` is used with `data` to construct a model matrix, and that is used to construct a covariance matrix. (It makes no sense for the formula to have a response, and all the variables must be numeric.) Once a covariance matrix is found or calculated from `x`, it is converted to a correlation matrix for analysis. The correlation matrix is returned as component `correlation` of the result. The fit is done by optimizing the log likelihood assuming multivariate normality over the uniquenesses. (The maximizing loadings for given uniquenesses can be found analytically: Lawley & Maxwell (1971, p. 27).) All the starting values supplied in `start` are tried in turn and the best fit obtained is used. If `start = NULL` then the first fit is started at the value suggested by Jöreskog (1963) and given by Lawley & Maxwell (1971, p. 31), and then `control$nstart - 1` other values are tried, randomly selected as equal values of the uniquenesses. The uniquenesses are technically constrained to lie in *[0, 1]*, but near-zero values are problematical, and the optimization is done with a lower bound of `control$lower`, default 0.005 (Lawley & Maxwell, 1971, p. 32). Scores can only be produced if a data matrix is supplied and used. The first method is the regression method of Thomson (1951), the second the weighted least squares method of Bartlett (1937, 1938). Both are estimates of the unobserved scores *f*. Thomson's method regresses (in the population) the unknown *f* on *x* to yield *hat f = Λ' Σ^-1 x* and then substitutes the sample estimates of the quantities on the right-hand side. Bartlett's method minimizes the sum of squares of standardized errors over the choice of *f*, given (the fitted) *Λ*. If `x` is a formula then the standard `NA`-handling is applied to the scores (if requested): see `[napredict](nafns)`. The `print` method (documented under `<loadings>`) follows the factor analysis convention of drawing attention to the patterns of the results, so the default precision is three decimal places, and small loadings are suppressed. ### Value An object of class `"factanal"` with components | | | | --- | --- | | `loadings` | A matrix of loadings, one column for each factor.
The factors are ordered in decreasing order of sums of squares of loadings, and given the sign that will make the sum of the loadings positive. This is of class `"loadings"`: see `<loadings>` for its `print` method. | | `uniquenesses` | The uniquenesses computed. | | `correlation` | The correlation matrix used. | | `criteria` | The results of the optimization: the value of the criterion (a linear function of the negative log-likelihood) and information on the iterations used. | | `factors` | The argument `factors`. | | `dof` | The number of degrees of freedom of the factor analysis model. | | `method` | The method: always `"mle"`. | | `rotmat` | The rotation matrix if relevant. | | `scores` | If requested, a matrix of scores. `napredict` is applied to handle the treatment of values omitted by the `na.action`. | | `n.obs` | The number of observations if available, or `NA`. | | `call` | The matched call. | | `na.action` | If relevant. | | `STATISTIC, PVAL` | The significance-test statistic and P value, if it can be computed. | ### Note There are so many variations on factor analysis that it is hard to compare output from different programs. Further, the optimization in maximum likelihood factor analysis is hard, and many other examples we compared had less good fits than produced by this function. In particular, solutions which are ‘Heywood cases’ (with one or more uniquenesses essentially zero) are much more common than most texts and some other programs would lead one to believe. ### References Bartlett, M. S. (1937). The statistical conception of mental factors. *British Journal of Psychology*, **28**, 97–104. doi: [10.1111/j.2044-8295.1937.tb00863.x](https://doi.org/10.1111/j.2044-8295.1937.tb00863.x). Bartlett, M. S. (1938). Methods of estimating mental factors. *Nature*, **141**, 609–610. doi: [10.1038/141246a0](https://doi.org/10.1038/141246a0). Jöreskog, K. G. (1963). *Statistical Estimation in Factor Analysis*. Almqvist and Wicksell. Lawley, D. N. and Maxwell, A. E. (1971). *Factor Analysis as a Statistical Method*. Second edition. Butterworths. Thomson, G. H. (1951). *The Factorial Analysis of Human Ability*. London University Press. ### See Also `<loadings>` (which explains some details of the `print` method), `<varimax>`, `<princomp>`, `[ability.cov](../../datasets/html/ability.cov)`, `[Harman23.cor](../../datasets/html/harman23.cor)`, `[Harman74.cor](../../datasets/html/harman74.cor)`. Other rotation methods are available in various contributed packages, including [GPArotation](https://CRAN.R-project.org/package=GPArotation) and [psych](https://CRAN.R-project.org/package=psych). ### Examples ``` # A little demonstration, v2 is just v1 with noise, # and same for v4 vs. v3 and v6 vs. v5 # Last four cases are there to add noise # and introduce a positive manifold (g factor) v1 <- c(1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,5,6) v2 <- c(1,2,1,1,1,1,2,1,2,1,3,4,3,3,3,4,6,5) v3 <- c(3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,5,4,6) v4 <- c(3,3,4,3,3,1,1,2,1,1,1,1,2,1,1,5,6,4) v5 <- c(1,1,1,1,1,3,3,3,3,3,1,1,1,1,1,6,4,5) v6 <- c(1,1,1,2,1,3,3,3,4,3,1,1,1,2,1,6,5,4) m1 <- cbind(v1,v2,v3,v4,v5,v6) cor(m1) factanal(m1, factors = 3) # varimax is the default factanal(m1, factors = 3, rotation = "promax") # The following shows the g factor as PC1 prcomp(m1) # signs may depend on platform ## formula interface factanal(~v1+v2+v3+v4+v5+v6, factors = 3, scores = "Bartlett")$scores ## a realistic example from Bartholomew (1987, pp. 
61-65) utils::example(ability.cov) ``` r None `ksmooth` Kernel Regression Smoother ------------------------------------- ### Description The Nadaraya–Watson kernel regression estimate. ### Usage ``` ksmooth(x, y, kernel = c("box", "normal"), bandwidth = 0.5, range.x = range(x), n.points = max(100L, length(x)), x.points) ``` ### Arguments | | | | --- | --- | | `x` | input x values. [Long vectors](../../base/html/longvectors) are supported. | | `y` | input y values. Long vectors are supported. | | `kernel` | the kernel to be used. Can be abbreviated. | | `bandwidth` | the bandwidth. The kernels are scaled so that their quartiles (viewed as probability densities) are at *+/-* `0.25*bandwidth`. | | `range.x` | the range of points to be covered in the output. | | `n.points` | the number of points at which to evaluate the fit. | | `x.points` | points at which to evaluate the smoothed fit. If missing, `n.points` are chosen uniformly to cover `range.x`. Long vectors are supported. | ### Value A list with components | | | | --- | --- | | `x` | values at which the smoothed fit is evaluated. Guaranteed to be in increasing order. | | `y` | fitted values corresponding to `x`. | ### Note This function was implemented for compatibility with S, although it is nowhere near as slow as the S function. Better kernel smoothers are available in other packages such as [KernSmooth](https://CRAN.R-project.org/package=KernSmooth). ### Examples ``` require(graphics) with(cars, { plot(speed, dist) lines(ksmooth(speed, dist, "normal", bandwidth = 2), col = 2) lines(ksmooth(speed, dist, "normal", bandwidth = 5), col = 3) }) ``` r None `pairwise.table` Tabulate p values for pairwise comparisons ------------------------------------------------------------ ### Description Creates a table of p values for pairwise comparisons with corrections for multiple testing. ### Usage ``` pairwise.table(compare.levels, level.names, p.adjust.method) ``` ### Arguments | | | | --- | --- | | `compare.levels` | a `[function](../../base/html/function)` to compute the (raw) p value given indices `i` and `j`. | | `level.names` | names of the group levels | | `p.adjust.method` | a character string specifying the method for multiple testing adjustment; almost always one of `[p.adjust.methods](p.adjust)`. Can be abbreviated. | ### Details Functions that do multiple group comparisons create separate `compare.levels` functions (assumed to be symmetrical in `i` and `j`) and pass them to this function. ### Value Table of p values in lower triangular form. ### See Also `<pairwise.t.test>` r None `ar.ols` Fit Autoregressive Models to Time Series by OLS --------------------------------------------------------- ### Description Fit an autoregressive time series model to the data by ordinary least squares, by default selecting the complexity by AIC. ### Usage ``` ar.ols(x, aic = TRUE, order.max = NULL, na.action = na.fail, demean = TRUE, intercept = demean, series, ...) ``` ### Arguments | | | | --- | --- | | `x` | A univariate or multivariate time series. | | `aic` | Logical flag. If `TRUE` then the Akaike Information Criterion is used to choose the order of the autoregressive model. If `FALSE`, the model of order `order.max` is fitted. | | `order.max` | Maximum order (or order) of model to fit. Defaults to *10\*log10(N)* where *N* is the number of observations. | | `na.action` | function to be called to handle missing values. | | `demean` | should the AR model be for `x` minus its mean? | | `intercept` | should a separate intercept term be fitted?
| | `series` | names for the series. Defaults to `deparse1(substitute(x))`. | | `...` | further arguments to be passed to or from methods. | ### Details `ar.ols` fits the general AR model to a possibly non-stationary and/or multivariate system of series `x`. The resulting unconstrained least squares estimates are consistent, even if some of the series are non-stationary and/or co-integrated. For definiteness, note that the AR coefficients have the sign in *(x[t] - m) = a[0] + a[1]\*(x[t-1] - m) + … + a[p]\*(x[t-p] - m) + e[t]* where *a[0]* is zero unless `intercept` is true, and *m* is the sample mean if `demean` is true, zero otherwise. Order selection is done by AIC if `aic` is true. This is problematic, as `ar.ols` does not perform true maximum likelihood estimation. The AIC is computed as if the variance estimate (computed from the variance matrix of the residuals) were the MLE, omitting the determinant term from the likelihood. Note that this is not the same as the Gaussian likelihood evaluated at the estimated parameter values. Some care is needed if `intercept` is true and `demean` is false. Only use this if the series are roughly centred on zero. Otherwise the computations may be inaccurate or fail entirely. ### Value A list of class `"ar"` with the following elements: | | | | --- | --- | | `order` | The order of the fitted model. This is chosen by minimizing the AIC if `aic = TRUE`, otherwise it is `order.max`. | | `ar` | Estimated autoregression coefficients for the fitted model. | | `var.pred` | The prediction variance: an estimate of the portion of the variance of the time series that is not explained by the autoregressive model. | | `x.mean` | The estimated mean (or zero if `demean` is false) of the series used in fitting and for use in prediction. | | `x.intercept` | The intercept in the model for `x - x.mean`, or zero if `intercept` is false. | | `aic` | The differences in AIC between each model and the best-fitting model. Note that the latter can have an AIC of `-Inf`. | | `n.used` | The number of observations in the time series. | | `order.max` | The value of the `order.max` argument. | | `partialacf` | `NULL`. For compatibility with `ar`. | | `resid` | residuals from the fitted model, conditioning on the first `order` observations. The first `order` residuals are set to `NA`. If `x` is a time series, so is `resid`. | | `method` | The character string `"Unconstrained LS"`. | | `series` | The name(s) of the time series. | | `frequency` | The frequency of the time series. | | `call` | The matched call. | | `asy.se.coef` | The asymptotic-theory standard errors of the coefficient estimates. | ### Author(s) Adrian Trapletti, Brian Ripley. ### References Luetkepohl, H. (1991): *Introduction to Multiple Time Series Analysis.* Springer Verlag, NY, pp. 368–370. ### See Also `<ar>` ### Examples ``` ar(lh, method = "burg") ar.ols(lh) ar.ols(lh, FALSE, 4) # fit ar(4) ar.ols(ts.union(BJsales, BJsales.lead)) x <- diff(log(EuStockMarkets)) ar.ols(x, order.max = 6, demean = FALSE, intercept = TRUE) ``` r None `na.contiguous` Find Longest Contiguous Stretch of non-NAs ----------------------------------------------------------- ### Description Find the longest consecutive stretch of non-missing values in a time series object. (In the event of a tie, the first such stretch.) ### Usage ``` na.contiguous(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | a univariate or multivariate time series. | | `...` | further arguments passed to or from other methods.
| ### Value A time series without missing values. The class of `object` will be preserved. ### See Also `[na.omit](na.fail)` and `[na.omit.ts](ts-methods)`; `<na.fail>` ### Examples ``` na.contiguous(presidents) ``` r None `box.test` Box-Pierce and Ljung-Box Tests ------------------------------------------ ### Description Compute the Box–Pierce or Ljung–Box test statistic for examining the null hypothesis of independence in a given time series. These are sometimes known as ‘portmanteau’ tests. ### Usage ``` Box.test(x, lag = 1, type = c("Box-Pierce", "Ljung-Box"), fitdf = 0) ``` ### Arguments | | | | --- | --- | | `x` | a numeric vector or univariate time series. | | `lag` | the statistic will be based on `lag` autocorrelation coefficients. | | `type` | test to be performed: partial matching is used. | | `fitdf` | number of degrees of freedom to be subtracted if `x` is a series of residuals. | ### Details These tests are sometimes applied to the residuals from an `ARMA(p, q)` fit, in which case the references suggest a better approximation to the null-hypothesis distribution is obtained by setting `fitdf = p+q`, provided of course that `lag > fitdf`. ### Value A list with class `"htest"` containing the following components: | | | | --- | --- | | `statistic` | the value of the test statistic. | | `parameter` | the degrees of freedom of the approximate chi-squared distribution of the test statistic (taking `fitdf` into account). | | `p.value` | the p-value of the test. | | `method` | a character string indicating which type of test was performed. | | `data.name` | a character string giving the name of the data. | ### Note Missing values are not handled. ### Author(s) A. Trapletti ### References Box, G. E. P. and Pierce, D. A. (1970), Distribution of residual correlations in autoregressive-integrated moving average time series models. *Journal of the American Statistical Association*, **65**, 1509–1526. doi: [10.2307/2284333](https://doi.org/10.2307/2284333). Ljung, G. M. and Box, G. E. P. (1978), On a measure of lack of fit in time series models. *Biometrika*, **65**, 297–303. doi: [10.2307/2335207](https://doi.org/10.2307/2335207). Harvey, A. C. (1993) *Time Series Models*. 2nd Edition, Harvester Wheatsheaf, NY, pp. 44, 45. ### Examples ``` x <- rnorm (100) Box.test (x, lag = 1) Box.test (x, lag = 1, type = "Ljung") ```
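A short sketch (an editorial addition, not from the original page) of the `fitdf` correction described in Details, applied to residuals from an ARMA(1,1) fit:
```
fit <- arima(lh, order = c(1, 0, 1))  # p = 1, q = 1
## test the first 10 residual autocorrelations, subtracting p + q df
Box.test(residuals(fit), lag = 10, type = "Ljung-Box", fitdf = 2)
```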
r None `lm` Fitting Linear Models --------------------------- ### Description `lm` is used to fit linear models. It can be used to carry out regression, single stratum analysis of variance and analysis of covariance (although `<aov>` may provide a more convenient interface for these). ### Usage ``` lm(formula, data, subset, weights, na.action, method = "qr", model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, offset, ...) ``` ### Arguments | | | | --- | --- | | `formula` | an object of class `"<formula>"` (or one that can be coerced to that class): a symbolic description of the model to be fitted. The details of model specification are given under ‘Details’. | | `data` | an optional data frame, list or environment (or object coercible by `[as.data.frame](../../base/html/as.data.frame)` to a data frame) containing the variables in the model. If not found in `data`, the variables are taken from `environment(formula)`, typically the environment from which `lm` is called. | | `subset` | an optional vector specifying a subset of observations to be used in the fitting process. | | `weights` | an optional vector of weights to be used in the fitting process. Should be `NULL` or a numeric vector. If non-NULL, weighted least squares is used with weights `weights` (that is, minimizing `sum(w*e^2)`); otherwise ordinary least squares is used. See also ‘Details’. | | `na.action` | a function which indicates what should happen when the data contain `NA`s. The default is set by the `na.action` setting of `[options](../../base/html/options)`, and is `<na.fail>` if that is unset. The ‘factory-fresh’ default is `[na.omit](na.fail)`. Another possible value is `NULL`, no action. Value `[na.exclude](na.fail)` can be useful. | | `method` | the method to be used; for fitting, currently only `method = "qr"` is supported; `method = "model.frame"` returns the model frame (the same as with `model = TRUE`, see below). | | `model, x, y, qr` | logicals. If `TRUE` the corresponding components of the fit (the model frame, the model matrix, the response, the QR decomposition) are returned. | | `singular.ok` | logical. If `FALSE` (the default in S but not in **R**) a singular fit is an error. | | `contrasts` | an optional list. See the `contrasts.arg` of `[model.matrix.default](model.matrix)`. | | `offset` | this can be used to specify an *a priori* known component to be included in the linear predictor during fitting. This should be `NULL` or a numeric vector or matrix of extents matching those of the response. One or more `<offset>` terms can be included in the formula instead or as well, and if more than one are specified their sum is used. See `[model.offset](model.extract)`. | | `...` | additional arguments to be passed to the low level regression fitting functions (see below). | ### Details Models for `lm` are specified symbolically. A typical model has the form `response ~ terms` where `response` is the (numeric) response vector and `terms` is a series of terms which specifies a linear predictor for `response`. A terms specification of the form `first + second` indicates all the terms in `first` together with all the terms in `second` with duplicates removed. A specification of the form `first:second` indicates the set of terms obtained by taking the interactions of all terms in `first` with all terms in `second`. The specification `first*second` indicates the *cross* of `first` and `second`. This is the same as `first + second + first:second`.
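As a quick illustration of the operator expansion just described (an editorial sketch with made-up data; `dd` is hypothetical):
```
dd <- data.frame(a = gl(2, 4), b = gl(4, 1, 8))
## 'a*b' is shorthand for 'a + b + a:b': both give the same design matrix
stopifnot(identical(model.matrix(~ a * b, dd),
                    model.matrix(~ a + b + a:b, dd)))
```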
If the formula includes an `<offset>`, this is evaluated and subtracted from the response. If `response` is a matrix a linear model is fitted separately by least-squares to each column of the matrix. See `<model.matrix>` for some further details. The terms in the formula will be re-ordered so that main effects come first, followed by the interactions, all second-order, all third-order and so on: to avoid this pass a `terms` object as the formula (see `<aov>` and `demo(glm.vr)` for an example). A formula has an implied intercept term. To remove this use either `y ~ x - 1` or `y ~ 0 + x`. See `<formula>` for more details of allowed formulae. Non-`NULL` `weights` can be used to indicate that different observations have different variances (with the values in `weights` being inversely proportional to the variances); or equivalently, when the elements of `weights` are positive integers *w\_i*, that each response *y\_i* is the mean of *w\_i* unit-weight observations (including the case that there are *w\_i* observations equal to *y\_i* and the data have been summarized). However, in the latter case, notice that within-group variation is not used. Therefore, the sigma estimate and residual degrees of freedom may be suboptimal; in the case of replication weights, even wrong. Hence, standard errors and analysis of variance tables should be treated with care. `lm` calls the lower level functions `[lm.fit](lmfit)`, etc, see below, for the actual numerical computations. For programming only, you may consider doing likewise. All of `weights`, `subset` and `offset` are evaluated in the same way as variables in `formula`, that is first in `data` and then in the environment of `formula`. ### Value `lm` returns an object of `[class](../../base/html/class)` `"lm"` or for multiple responses of class `c("mlm", "lm")`. The functions `summary` and `<anova>` are used to obtain and print a summary and analysis of variance table of the results. The generic accessor functions `coefficients`, `effects`, `fitted.values` and `residuals` extract various useful features of the value returned by `lm`. An object of class `"lm"` is a list containing at least the following components: | | | | --- | --- | | `coefficients` | a named vector of coefficients | | `residuals` | the residuals, that is response minus fitted values. | | `fitted.values` | the fitted mean values. | | `rank` | the numeric rank of the fitted linear model. | | `weights` | (only for weighted fits) the specified weights. | | `df.residual` | the residual degrees of freedom. | | `call` | the matched call. | | `terms` | the `<terms>` object used. | | `contrasts` | (only where relevant) the contrasts used. | | `xlevels` | (only where relevant) a record of the levels of the factors used in fitting. | | `offset` | the offset used (missing if none were used). | | `y` | if requested, the response used. | | `x` | if requested, the model matrix used. | | `model` | if requested (the default), the model frame used. | | `na.action` | (where relevant) information returned by `<model.frame>` on the special handling of `NA`s. | In addition, non-null fits will have components `assign`, `effects` and (unless not requested) `qr` relating to the linear fit, for use by extractor functions such as `summary` and `<effects>`. ### Using time series Considerable care is needed when using `lm` with time series. Unless `na.action = NULL`, the time series attributes are stripped from the variables before the regression is done. 
(This is necessary as omitting `NA`s would invalidate the time series attributes, and if `NA`s are omitted in the middle of the series the result would no longer be a regular time series.) Even if the time series attributes are retained, they are not used to line up series, so that the time shift of a lagged or differenced regressor would be ignored. It is good practice to prepare a `data` argument by `[ts.intersect](ts.union)(..., dframe = TRUE)`, then apply a suitable `na.action` to that data frame and call `lm` with `na.action = NULL` so that residuals and fitted values are time series. ### Note Offsets specified by `offset` will not be included in predictions by `<predict.lm>`, whereas those specified by an offset term in the formula will be. ### Author(s) The design was inspired by the S function of the same name described in Chambers (1992). The implementation of model formula by Ross Ihaka was based on Wilkinson & Rogers (1973). ### References Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. Wilkinson, G. N. and Rogers, C. E. (1973). Symbolic descriptions of factorial models for analysis of variance. *Applied Statistics*, **22**, 392–399. doi: [10.2307/2346786](https://doi.org/10.2307/2346786). ### See Also `<summary.lm>` for summaries and `<anova.lm>` for the ANOVA table; `<aov>` for a different interface. The generic functions `<coef>`, `<effects>`, `<residuals>`, `[fitted](fitted.values)`, `<vcov>`. `<predict.lm>` (via `<predict>`) for prediction, including confidence and prediction intervals; `<confint>` for confidence intervals of *parameters*. `<lm.influence>` for regression diagnostics, and `<glm>` for **generalized** linear models. The underlying low level functions, `[lm.fit](lmfit)` for plain, and `[lm.wfit](lmfit)` for weighted regression fitting. More `lm()` examples are available e.g., in `[anscombe](../../datasets/html/anscombe)`, `[attitude](../../datasets/html/attitude)`, `[freeny](../../datasets/html/freeny)`, `[LifeCycleSavings](../../datasets/html/lifecyclesavings)`, `[longley](../../datasets/html/longley)`, `[stackloss](../../datasets/html/stackloss)`, `[swiss](../../datasets/html/swiss)`. `biglm` in package [biglm](https://CRAN.R-project.org/package=biglm) for an alternative way to fit linear models to large datasets (especially those with many cases). ### Examples ``` require(graphics) ## Annette Dobson (1990) "An Introduction to Generalized Linear Models". ## Page 9: Plant Weight Data. ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69) group <- gl(2, 10, 20, labels = c("Ctl","Trt")) weight <- c(ctl, trt) lm.D9 <- lm(weight ~ group) lm.D90 <- lm(weight ~ group - 1) # omitting intercept anova(lm.D9) summary(lm.D90) opar <- par(mfrow = c(2,2), oma = c(0, 0, 1.1, 0)) plot(lm.D9, las = 1) # Residuals, Fitted, ... par(opar) ### less simple examples in "See Also" above ``` r None `filter` Linear Filtering on a Time Series ------------------------------------------- ### Description Applies linear filtering to a univariate time series or to each series separately of a multivariate time series. ### Usage ``` filter(x, filter, method = c("convolution", "recursive"), sides = 2, circular = FALSE, init) ``` ### Arguments | | | | --- | --- | | `x` | a univariate or multivariate time series. | | `filter` | a vector of filter coefficients in reverse time order (as for AR or MA coefficients). 
| | `method` | Either `"convolution"` or `"recursive"` (and can be abbreviated). If `"convolution"` a moving average is used: if `"recursive"` an autoregression is used. | | `sides` | for convolution filters only. If `sides = 1` the filter coefficients are for past values only; if `sides = 2` they are centred around lag 0. In this case the length of the filter should be odd, but if it is even, more of the filter is forward in time than backward. | | `circular` | for convolution filters only. If `TRUE`, wrap the filter around the ends of the series, otherwise assume external values are missing (`NA`). | | `init` | for recursive filters only. Specifies the initial values of the time series just prior to the start value, in reverse time order. The default is a set of zeros. | ### Details Missing values are allowed in `x` but not in `filter` (where they would lead to missing values everywhere in the output). Note that there is an implied coefficient 1 at lag 0 in the recursive filter, which gives *y[i] = x[i] + f[1]\*y[i-1] + … + f[p]\*y[i-p]* No check is made to see if recursive filter is invertible: the output may diverge if it is not. The convolution filter is *y[i] = f[1]\*x[i+o] + … + f[p]\*x[i+o-(p-1)]* where `o` is the offset: see `sides` for how it is determined. ### Value A time series object. ### Note `<convolve>(, type = "filter")` uses the FFT for computations and so *may* be faster for long filters on univariate series, but it does not return a time series (and so the time alignment is unclear), nor does it handle missing values. `filter` is faster for a filter of length 100 on a series of length 1000, for example. ### See Also `<convolve>`, `<arima.sim>` ### Examples ``` x <- 1:100 filter(x, rep(1, 3)) filter(x, rep(1, 3), sides = 1) filter(x, rep(1, 3), sides = 1, circular = TRUE) filter(presidents, rep(1, 3)) ``` r None `model.matrix` Construct Design Matrices ----------------------------------------- ### Description `model.matrix` creates a design (or model) matrix, e.g., by expanding factors to a set of dummy variables (depending on the contrasts) and expanding interactions similarly. ### Usage ``` model.matrix(object, ...) ## Default S3 method: model.matrix(object, data = environment(object), contrasts.arg = NULL, xlev = NULL, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object of an appropriate class. For the default method, a model <formula> or a `<terms>` object. | | `data` | a data frame created with `<model.frame>`. If another sort of object, `model.frame` is called first. | | `contrasts.arg` | a list, whose entries are values (numeric matrices, `[function](../../base/html/function)`s or character strings naming functions) to be used as replacement values for the `<contrasts>` replacement function and whose names are the names of columns of `data` containing `[factor](../../base/html/factor)`s. | | `xlev` | to be used as argument of `<model.frame>` if `data` is such that `model.frame` is called. | | `...` | further arguments passed to or from other methods. | ### Details `model.matrix` creates a design matrix from the description given in `terms(object)`, using the data in `data` which must supply variables with the same names as would be created by a call to `model.frame(object)` or, more precisely, by evaluating `attr(terms(object), "variables")`. If `data` is a data frame, there may be other columns and the order of columns is not important. Any character variables are coerced to factors. 
After coercion, all the variables used on the right-hand side of the formula must be logical, integer, numeric or factor. If `contrasts.arg` is specified for a factor it overrides the default factor coding for that variable and any `"contrasts"` attribute set by `[C](zc)` or `<contrasts>`. Whereas invalid `contrasts.arg`s have been ignored always, they are warned about since **R** version 3.6.0. In an interaction term, the variable whose levels vary fastest is the first one to appear in the formula (and not in the term), so in `~ a + b + b:a` the interaction will have `a` varying fastest. By convention, if the response variable also appears on the right-hand side of the formula it is dropped (with a warning), although interactions involving the term are retained. ### Value The design matrix for a regression-like model with the specified formula and data. There is an attribute `"assign"`, an integer vector with an entry for each column in the matrix giving the term in the formula which gave rise to the column. Value `0` corresponds to the intercept (if any), and positive values to terms in the order given by the `term.labels` attribute of the `terms` structure corresponding to `object`. If there are any factors in terms in the model, there is an attribute `"contrasts"`, a named list with an entry for each factor. This specifies the contrasts that would be used in terms in which the factor is coded by contrasts (in some terms dummy coding may be used), either as a character vector naming a function or as a numeric matrix. ### References Chambers, J. M. (1992) *Data for models.* Chapter 3 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole. ### See Also `<model.frame>`, `<model.extract>`, `<terms>` `[sparse.model.matrix](../../matrix/html/sparse.model.matrix)` from package [Matrix](https://CRAN.R-project.org/package=Matrix) for creating *sparse* model matrices, which may be more efficient in large dimensions. ### Examples ``` ff <- log(Volume) ~ log(Height) + log(Girth) utils::str(m <- model.frame(ff, trees)) mat <- model.matrix(ff, m) dd <- data.frame(a = gl(3,4), b = gl(4,1,12)) # balanced 2-way options("contrasts") # typically 'treatment' (for unordered factors) model.matrix(~ a + b, dd) model.matrix(~ a + b, dd, contrasts.arg = list(a = "contr.sum")) model.matrix(~ a + b, dd, contrasts.arg = list(a = "contr.sum", b = contr.poly)) m.orth <- model.matrix(~a+b, dd, contrasts.arg = list(a = "contr.helmert")) crossprod(m.orth) # m.orth is ALMOST orthogonal # invalid contrasts.. ignored with a warning: stopifnot(identical( model.matrix(~ a + b, dd), model.matrix(~ a + b, dd, contrasts.arg = "contr.FOO"))) ``` r None `acf` Auto- and Cross- Covariance and -Correlation Function Estimation ----------------------------------------------------------------------- ### Description The function `acf` computes (and by default plots) estimates of the autocovariance or autocorrelation function. Function `pacf` is the function used for the partial autocorrelations. Function `ccf` computes the cross-correlation or cross-covariance of two univariate series. ### Usage ``` acf(x, lag.max = NULL, type = c("correlation", "covariance", "partial"), plot = TRUE, na.action = na.fail, demean = TRUE, ...) pacf(x, lag.max, plot, na.action, ...) ## Default S3 method: pacf(x, lag.max = NULL, plot = TRUE, na.action = na.fail, ...) ccf(x, y, lag.max = NULL, type = c("correlation", "covariance"), plot = TRUE, na.action = na.fail, ...) 
## S3 method for class 'acf' x[i, j] ``` ### Arguments | | | | --- | --- | | `x, y` | a univariate or multivariate (not `ccf`) numeric time series object or a numeric vector or matrix, or an `"acf"` object. | | `lag.max` | maximum lag at which to calculate the acf. Default is *10\*log10(N/m)* where *N* is the number of observations and *m* the number of series. Will be automatically limited to one less than the number of observations in the series. | | `type` | character string giving the type of acf to be computed. Allowed values are `"correlation"` (the default), `"covariance"` or `"partial"`. Will be partially matched. | | `plot` | logical. If `TRUE` (the default) the acf is plotted. | | `na.action` | function to be called to handle missing values. `na.pass` can be used. | | `demean` | logical. Should the covariances be about the sample means? | | `...` | further arguments to be passed to `plot.acf`. | | `i` | a set of lags (time differences) to retain. | | `j` | a set of series (names or numbers) to retain. | ### Details For `type` = `"correlation"` and `"covariance"`, the estimates are based on the sample covariance. (The lag 0 autocorrelation is fixed at 1 by convention.) By default, no missing values are allowed. If the `na.action` function passes through missing values (as `na.pass` does), the covariances are computed from the complete cases. This means that the estimate computed may well not be a valid autocorrelation sequence, and may contain missing values. Missing values are not allowed when computing the PACF of a multivariate time series. The partial correlation coefficient is estimated by fitting autoregressive models of successively higher orders up to `lag.max`. The generic function `plot` has a method for objects of class `"acf"`. The lag is returned and plotted in units of time, and not numbers of observations. There are `print` and subsetting methods for objects of class `"acf"`. ### Value An object of class `"acf"`, which is a list with the following elements: | | | | --- | --- | | `lag` | A three dimensional array containing the lags at which the acf is estimated. | | `acf` | An array with the same dimensions as `lag` containing the estimated acf. | | `type` | The type of correlation (same as the `type` argument). | | `n.used` | The number of observations in the time series. | | `series` | The name of the series `x`. | | `snames` | The series names for a multivariate time series. | The lag `k` value returned by `ccf(x, y)` estimates the correlation between `x[t+k]` and `y[t]`. The result is returned invisibly if `plot` is `TRUE`. ### Author(s) Original: Paul Gilbert, Martyn Plummer. Extensive modifications and univariate case of `pacf` by B. D. Ripley. ### References Venables, W. N. and Ripley, B. D. (2002) *Modern Applied Statistics with S*. Fourth Edition. Springer-Verlag. (This contains the exact definitions used.) ### See Also `<plot.acf>`, `[ARMAacf](armaacf)` for the exact autocorrelations of a given ARMA process. ### Examples ``` require(graphics) ## Examples from Venables & Ripley acf(lh) acf(lh, type = "covariance") pacf(lh) acf(ldeaths) acf(ldeaths, ci.type = "ma") acf(ts.union(mdeaths, fdeaths)) ccf(mdeaths, fdeaths, ylab = "cross-correlation") # (just the cross-correlations) presidents # contains missing values acf(presidents, na.action = na.pass) pacf(presidents, na.action = na.pass) ```
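The lag convention stated under 'Value' can be made concrete with a small sketch (an editorial addition; the series are simulated):
```
set.seed(1)
y <- rnorm(200)
x <- c(0, 0, y[1:198])  # x[t+2] == y[t] by construction
r <- ccf(x, y, plot = FALSE)
r$acf[r$lag == 2]       # close to 1: estimates cor(x[t+2], y[t])
```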
r None `proj` Projections of Models
-----------------------------

### Description

`proj` returns a matrix or list of matrices giving the projections of the data onto the terms of a linear model. It is most frequently used for `<aov>` models.

### Usage

```
proj(object, ...)

## S3 method for class 'aov'
proj(object, onedf = FALSE, unweighted.scale = FALSE, ...)

## S3 method for class 'aovlist'
proj(object, onedf = FALSE, unweighted.scale = FALSE, ...)

## Default S3 method:
proj(object, onedf = TRUE, ...)

## S3 method for class 'lm'
proj(object, onedf = FALSE, unweighted.scale = FALSE, ...)
```

### Arguments

| | | | --- | --- | | `object` | An object of class `"lm"` or a class inheriting from it, or an object with a similar structure including in particular components `qr` and `effects`. | | `onedf` | A logical flag. If `TRUE`, a projection is returned for all the columns of the model matrix. If `FALSE`, the single-column projections are collapsed by terms of the model (as represented in the analysis of variance table). | | `unweighted.scale` | If the fit producing `object` used weights, this determines if the projections correspond to weighted or unweighted observations. | | `...` | Swallow and ignore any other arguments. |

### Details

A projection is given for each stratum of the object, so for `aov` models with an `Error` term the result is a list of projections.

### Value

A projection matrix or (for multi-stratum objects) a list of projection matrices. Each projection is a matrix with a row for each observation and either a column for each term (`onedf = FALSE`) or for each coefficient (`onedf = TRUE`). Projection matrices from the default method have orthogonal columns representing the projection of the response onto the column space of the Q matrix from the QR decomposition. The fitted values are the sum of the projections, and the sum of squares for each column is the reduction in sum of squares from fitting that column (after those to the left of it). The methods for `lm` and `aov` models add a column to the projection matrix giving the residuals (the projection of the data onto the orthogonal complement of the model space). Strictly, when `onedf = FALSE` the result is not a projection, but the columns represent sums of projections onto the columns of the model matrix corresponding to that term. In this case the matrix does not depend on the coding used.

### Author(s)

The design was inspired by the S function of the same name described in Chambers *et al* (1992).

### References

Chambers, J. M., Freeny, A. and Heiberger, R. M. (1992) *Analysis of variance; designed experiments.* Chapter 5 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
### See Also

`<aov>`, `<lm>`, `<model.tables>`

### Examples

```
N <- c(0,1,0,1,1,1,0,0,0,1,1,0,1,1,0,0,1,0,1,0,1,1,0,0)
P <- c(1,1,0,0,0,1,0,1,1,1,0,0,0,1,0,1,1,0,0,1,0,1,1,0)
K <- c(1,0,0,1,0,1,1,0,0,1,0,1,0,1,1,0,0,0,1,1,1,0,1,0)
yield <- c(49.5,62.8,46.8,57.0,59.8,58.5,55.5,56.0,62.8,55.8,69.5,
55.0, 62.0,48.8,45.5,44.2,52.0,51.5,49.8,48.8,57.2,59.0,53.2,56.0)

npk <- data.frame(block = gl(6,4), N = factor(N), P = factor(P),
                  K = factor(K), yield = yield)
npk.aov <- aov(yield ~ block + N*P*K, npk)
proj(npk.aov)

## as a test, not particularly sensible
options(contrasts = c("contr.helmert", "contr.treatment"))
npk.aovE <- aov(yield ~ N*P*K + Error(block), npk)
proj(npk.aovE)
```

r None `make.link` Create a Link for GLM Families
-------------------------------------------

### Description

This function is used with the `<family>` functions in `<glm>()`. Given the name of a link, it returns a link function, an inverse link function, the derivative *dmu/deta* and a function for domain checking.

### Usage

```
make.link(link)
```

### Arguments

| | | | --- | --- | | `link` | character; one of `"logit"`, `"probit"`, `"cauchit"`, `"cloglog"`, `"identity"`, `"log"`, `"sqrt"`, `"1/mu^2"`, `"inverse"`. |

### Value

An object of class `"link-glm"`, a list with components

| | | | --- | --- | | `linkfun` | Link function `function(mu)` | | `linkinv` | Inverse link function `function(eta)` | | `mu.eta` | Derivative `function(eta)` *dmu/deta* | | `valideta` | `function(eta)` returning `TRUE` if `eta` is in the domain of `linkinv`. | | `name` | a name to be used for the link |

### See Also

`<power>`, `<glm>`, `<family>`.

### Examples

```
utils::str(make.link("logit"))
```

r None `power.t.test` Power calculations for one and two sample t tests
-----------------------------------------------------------------

### Description

Compute the power of the one- or two-sample t test, or determine parameters to obtain a target power.

### Usage

```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
             power = NULL,
             type = c("two.sample", "one.sample", "paired"),
             alternative = c("two.sided", "one.sided"),
             strict = FALSE, tol = .Machine$double.eps^0.25)
```

### Arguments

| | | | --- | --- | | `n` | number of observations (per group) | | `delta` | true difference in means | | `sd` | standard deviation | | `sig.level` | significance level (Type I error probability) | | `power` | power of test (1 minus Type II error probability) | | `type` | string specifying the type of t test. Can be abbreviated. | | `alternative` | one- or two-sided test. Can be abbreviated. | | `strict` | use strict interpretation in two-sided case | | `tol` | numerical tolerance used in root finding, the default providing (at least) four significant digits. |

### Details

Exactly one of the parameters `n`, `delta`, `power`, `sd`, and `sig.level` must be passed as `NULL`, and that parameter is determined from the others. Notice that the last two have non-NULL defaults, so NULL must be explicitly passed if you want to compute them. If `strict = TRUE` is used, the power will include the probability of rejection in the opposite direction of the true effect, in the two-sided case. Without this the power will be half the significance level if the true difference is zero.

### Value

Object of class `"power.htest"`, a list of the arguments (including the computed one) augmented with `method` and `note` elements.
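Because `sd` and `sig.level` have non-`NULL` defaults, solving for them requires passing `NULL` explicitly; a minimal sketch, not part of the original examples:

```
## hedged sketch: sd and sig.level must be set to NULL explicitly
## for power.t.test() to solve for them
power.t.test(n = 20, delta = 1, sd = NULL, power = 0.9)        # solve for sd
power.t.test(n = 20, delta = 1, sig.level = NULL, power = 0.9) # solve for sig.level
```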
### Note

`uniroot` is used to solve the power equation for unknowns, so you may see errors from it, notably about inability to bracket the root when invalid arguments are given.

### Author(s)

Peter Dalgaard. Based on previous work by Claus Ekstrøm.

### See Also

`<t.test>`, `<uniroot>`

### Examples

```
power.t.test(n = 20, delta = 1)
power.t.test(power = .90, delta = 1)
power.t.test(power = .90, delta = 1, alternative = "one.sided")
```

r None `line` Robust Line Fitting
---------------------------

### Description

Fit a line robustly as recommended in *Exploratory Data Analysis*. Currently by default (`iter = 1`) the initial median-median line is *not* iterated (as opposed to Tukey's “resistant line” in the references).

### Usage

```
line(x, y, iter = 1)
```

### Arguments

| | | | --- | --- | | `x, y` | the arguments can be any way of specifying x-y pairs. See `[xy.coords](../../grdevices/html/xy.coords)`. | | `iter` | positive integer specifying the number of “polishing” iterations. Note that this was hard coded to `1` in **R** versions before 3.5.0, and more importantly that such simple iterations may not converge, see Siegel's 9-point example. |

### Details

Cases with missing values are omitted. Contrary to the references, where the data are split into three (almost) equally sized groups, with sizes depending symmetrically on *n* and `n %% 3`, and medians are computed inside each group, the `line()` code splits into three groups using all observations with `x[.] <= q1` and `x[.] >= q2`, where `q1, q2` are (a kind of) quantiles for probabilities *p = 1/3* and *p = 2/3* of the form `(x[j1]+x[j2])/2` where `j1 = floor(p*(n-1))` and `j2 = ceiling(p*(n-1))`, `n = length(x)`. Long vectors are not supported yet.

### Value

An object of class `"tukeyline"`. Methods are available for the generic functions `coef`, `residuals`, `fitted`, and `print`.

### References

Tukey, J. W. (1977). *Exploratory Data Analysis*, Reading Massachusetts: Addison-Wesley.

Velleman, P. F. and Hoaglin, D. C. (1981). *Applications, Basics and Computing of Exploratory Data Analysis*, Duxbury Press. Chapter 5.

Emerson, J. D. and Hoaglin, D. C. (1983). Resistant Lines for *y* versus *x*. Chapter 5 of *Understanding Robust and Exploratory Data Analysis*, eds. David C. Hoaglin, Frederick Mosteller and John W. Tukey. Wiley.

Iain M. Johnstone and Paul F. Velleman (1985). The Resistant Line and Related Regression Methods. *Journal of the American Statistical Association*, **80**, 1041–1054. doi: [10.2307/2288572](https://doi.org/10.2307/2288572).

### See Also

`<lm>`. There are alternatives for robust linear regression more robust and more (statistically) efficient, see `[rlm](../../mass/html/rlm)()` from [MASS](https://CRAN.R-project.org/package=MASS), or `[lmrob](../../robustbase/html/lmrob)()` from [robustbase](https://CRAN.R-project.org/package=robustbase).
### Examples

```
require(graphics)

plot(cars)
(z <- line(cars))
abline(coef(z))
## Tukey-Anscombe Plot :
plot(residuals(z) ~ fitted(z), main = deparse(z$call))

## Andrew Siegel's pathological 9-point data, y-values multiplied by 3:
d.AS <- data.frame(x = c(-4:3, 12), y = 3*c(rep(0,6), -5, 5, 1))
cAS <- with(d.AS, t(sapply(1:10, function(it)
                   line(x,y, iter=it)$coefficients)))
dimnames(cAS) <- list(paste("it =", format(1:10)), c("intercept", "slope"))
cAS
## iterations started to oscillate, repeating iteration 7,8 indefinitely
```

r None `lm.influence` Regression Diagnostics
--------------------------------------

### Description

This function provides the basic quantities which are used in forming a wide variety of diagnostics for checking the quality of regression fits.

### Usage

```
influence(model, ...)
## S3 method for class 'lm'
influence(model, do.coef = TRUE, ...)
## S3 method for class 'glm'
influence(model, do.coef = TRUE, ...)

lm.influence(model, do.coef = TRUE)
```

### Arguments

| | | | --- | --- | | `model` | an object as returned by `<lm>` or `<glm>`. | | `do.coef` | logical indicating if the changed `coefficients` (see below) are desired. These need *O(n p^2)* computing time. | | `...` | further arguments passed to or from other methods. |

### Details

The `<influence.measures>()` and other functions listed in **See Also** provide a more user oriented way of computing a variety of regression diagnostics. These all build on `lm.influence`. Note that for GLMs (other than the Gaussian family with identity link) these are based on one-step approximations which may be inadequate if a case has high influence. An attempt is made to ensure that computed hat values that are probably one are treated as one, and the corresponding rows in `sigma` and `coefficients` are `NaN`. (Dropping such a case would normally result in a variable being dropped, so it is not possible to give simple drop-one diagnostics.) `[naresid](nafns)` is applied to the results and so will fill in with `NA`s if the fit had `na.action = na.exclude`.

### Value

A list containing the following components of the same length or number of rows *n*, which is the number of non-zero weights. Cases omitted in the fit are omitted unless a `<na.action>` method was used (such as `[na.exclude](na.fail)`) which restores them.

| | | | --- | --- | | `hat` | a vector containing the diagonal of the ‘hat’ matrix. | | `coefficients` | (unless `do.coef` is false) a matrix whose i-th row contains the change in the estimated coefficients which results when the i-th case is dropped from the regression. Note that aliased coefficients are not included in the matrix. | | `sigma` | a vector whose i-th element contains the estimate of the residual standard deviation obtained when the i-th case is dropped from the regression. (The approximations needed for GLMs can result in this being `NaN`.) | | `wt.res` | a vector of *weighted* (or for class `glm` rather *deviance*) residuals. |

### Note

The `coefficients` returned by the **R** version of `lm.influence` differ from those computed by S. Rather than returning the coefficients which result from dropping each case, we return the changes in the coefficients. This is more directly useful in many diagnostic measures. Since these need *O(n p^2)* computing time, they can be omitted by `do.coef = FALSE`. (Prior to R 4.0.0, this was much worse, using an *O(n^2 p)* algorithm.) Note that cases with `weights == 0` are *dropped* (contrary to the situation in S).
If a model has been fitted with `na.action = na.exclude` (see `[na.exclude](na.fail)`), cases excluded in the fit *are* considered here.

### References

See the list in the documentation for `<influence.measures>`. Chambers, J. M. (1992) *Linear models.* Chapter 4 of *Statistical Models in S* eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.

### See Also

`<summary.lm>` for `[summary](../../base/html/summary)` and related methods; `<influence.measures>`, `[hat](influence.measures)` for the hat matrix diagonals, `[dfbetas](influence.measures)`, `[dffits](influence.measures)`, `[covratio](influence.measures)`, `[cooks.distance](influence.measures)`, `<lm>`.

### Examples

```
## Analysis of the life-cycle savings data
## given in Belsley, Kuh and Welsch.
summary(lm.SR <- lm(sr ~ pop15 + pop75 + dpi + ddpi,
                    data = LifeCycleSavings),
        correlation = TRUE)
utils::str(lmI <- lm.influence(lm.SR))

## For more "user level" examples, use example(influence.measures)
```

r None `ppr` Projection Pursuit Regression
------------------------------------

### Description

Fit a projection pursuit regression model.

### Usage

```
ppr(x, ...)

## S3 method for class 'formula'
ppr(formula, data, weights, subset, na.action,
    contrasts = NULL, ..., model = FALSE)

## Default S3 method:
ppr(x, y, weights = rep(1, n), ww = rep(1, q), nterms, max.terms = nterms,
    optlevel = 2, sm.method = c("supsmu", "spline", "gcvspline"),
    bass = 0, span = 0, df = 5, gcvpen = 1, trace = FALSE, ...)
```

### Arguments

| | | | --- | --- | | `formula` | a formula specifying one or more numeric response variables and the explanatory variables. | | `x` | numeric matrix of explanatory variables. Rows represent observations, and columns represent variables. Missing values are not accepted. | | `y` | numeric matrix of response variables. Rows represent observations, and columns represent variables. Missing values are not accepted. | | `nterms` | number of terms to include in the final model. | | `data` | a data frame (or similar: see `<model.frame>`) from which variables specified in `formula` are preferentially to be taken. | | `weights` | a vector of weights `w_i` for each *case*. | | `ww` | a vector of weights for each *response*, so the fit criterion is the sum over case `i` and responses `j` of `w_i ww_j (y_ij - fit_ij)^2` divided by the sum of `w_i`. | | `subset` | an index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.) | | `na.action` | a function to specify the action to be taken if `[NA](../../base/html/na)`s are found. The default action is given by `getOption("na.action")`. (NOTE: If given, this argument must be named.) | | `contrasts` | the contrasts to be used when any factor explanatory variables are coded. | | `max.terms` | maximum number of terms to choose from when building the model. | | `optlevel` | integer from 0 to 3 which determines the thoroughness of an optimization routine in the SMART program. See the ‘Details’ section. | | `sm.method` | the method used for smoothing the ridge functions. The default is to use Friedman's super smoother `<supsmu>`. The alternatives are to use the smoothing spline code underlying `<smooth.spline>`, either with a specified (equivalent) degrees of freedom for each ridge function, or to allow the smoothness to be chosen by GCV. Can be abbreviated.
| | `bass` | super smoother bass tone control used with automatic span selection (see `supsmu`); the range of values is 0 to 10, with larger values resulting in increased smoothing. | | `span` | super smoother span control (see `<supsmu>`). The default, `0`, results in automatic span selection by local cross validation. `span` can also take a value in `(0, 1]`. | | `df` | if `sm.method` is `"spline"` specifies the smoothness of each ridge term via the requested equivalent degrees of freedom. | | `gcvpen` | if `sm.method` is `"gcvspline"` this is the penalty used in the GCV selection for each degree of freedom used. | | `trace` | logical indicating if each spline fit should produce diagnostic output (about `lambda` and `df`), and the supsmu fit about its steps. | | `...` | arguments to be passed to or from other methods. | | `model` | logical. If true, the model frame is returned. |

### Details

The basic method is given by Friedman (1984), and is essentially the same code used by S-PLUS's `ppreg`. This code is extremely sensitive to the compiler used. The algorithm first adds up to `max.terms` ridge terms one at a time; it will use fewer if it is unable to find a term to add that makes sufficient difference. It then removes the least important term at each step until `nterms` terms are left. The levels of optimization (argument `optlevel`) differ in how thoroughly the models are refitted during this process. At level 0 the existing ridge terms are not refitted. At level 1 the projection directions are not refitted, but the ridge functions and the regression coefficients are. Levels 2 and 3 refit all the terms and are equivalent for one response; level 3 is more careful to re-balance the contributions from each regressor at each step and so is a little less likely to converge to a saddle point of the sum of squares criterion.

### Value

A list with the following components, many of which are for use by the method functions.

| | | | --- | --- | | `call` | the matched call | | `p` | the number of explanatory variables (after any coding) | | `q` | the number of response variables | | `mu` | the argument `nterms` | | `ml` | the argument `max.terms` | | `gof` | the overall residual (weighted) sum of squares for the selected model | | `gofn` | the overall residual (weighted) sum of squares against the number of terms, up to `max.terms`. Will be invalid (and zero) for fewer than `nterms` terms. | | `df` | the argument `df` | | `edf` | if `sm.method` is `"spline"` or `"gcvspline"` the equivalent number of degrees of freedom for each ridge term used. | | `xnames` | the names of the explanatory variables | | `ynames` | the names of the response variables | | `alpha` | a matrix of the projection directions, with a column for each ridge term | | `beta` | a matrix of the coefficients applied for each response to the ridge terms: the rows are the responses and the columns the ridge terms | | `yb` | the weighted means of each response | | `ys` | the overall scale factor used: internally the responses are divided by `ys` to have unit total weighted sum of squares. | | `fitted.values` | the fitted values, as a matrix if `q > 1`. | | `residuals` | the residuals, as a matrix if `q > 1`. | | `smod` | internal work array, which includes the ridge functions evaluated at the training set points. | | `model` | (only if `model = TRUE`) the model frame. |

### Source

Friedman (1984): converted to double precision and added interface to smoothing splines by B. D.
Ripley, originally for the [MASS](https://CRAN.R-project.org/package=MASS) package. ### References Friedman, J. H. and Stuetzle, W. (1981). Projection pursuit regression. *Journal of the American Statistical Association*, **76**, 817–823. doi: [10.2307/2287576](https://doi.org/10.2307/2287576). Friedman, J. H. (1984). SMART User's Guide. Laboratory for Computational Statistics, Stanford University Technical Report No. 1. Venables, W. N. and Ripley, B. D. (2002). *Modern Applied Statistics with S*. Springer. ### See Also `<plot.ppr>`, `<supsmu>`, `<smooth.spline>` ### Examples ``` require(graphics) # Note: your numerical values may differ attach(rock) area1 <- area/10000; peri1 <- peri/10000 rock.ppr <- ppr(log(perm) ~ area1 + peri1 + shape, data = rock, nterms = 2, max.terms = 5) rock.ppr # Call: # ppr.formula(formula = log(perm) ~ area1 + peri1 + shape, data = rock, # nterms = 2, max.terms = 5) # # Goodness of fit: # 2 terms 3 terms 4 terms 5 terms # 8.737806 5.289517 4.745799 4.490378 summary(rock.ppr) # ..... (same as above) # ..... # # Projection direction vectors ('alpha'): # term 1 term 2 # area1 0.34357179 0.37071027 # peri1 -0.93781471 -0.61923542 # shape 0.04961846 0.69218595 # # Coefficients of ridge terms: # term 1 term 2 # 1.6079271 0.5460971 par(mfrow = c(3,2)) # maybe: , pty = "s") plot(rock.ppr, main = "ppr(log(perm)~ ., nterms=2, max.terms=5)") plot(update(rock.ppr, bass = 5), main = "update(..., bass = 5)") plot(update(rock.ppr, sm.method = "gcv", gcvpen = 2), main = "update(..., sm.method=\"gcv\", gcvpen=2)") cbind(perm = rock$perm, prediction = round(exp(predict(rock.ppr)), 1)) detach() ```
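The `gofn` component described under ‘Value’ can help judge how many ridge terms are worthwhile; a minimal sketch, not part of the original examples, using a derived data frame `rock2` with the same scaling as above:

```
## hedged sketch: inspect the goodness-of-fit sequence by model size
rock2 <- transform(rock, area1 = area/10000, peri1 = peri/10000)
fit5 <- ppr(log(perm) ~ area1 + peri1 + shape, data = rock2,
            nterms = 2, max.terms = 5)
fit5$gofn  # weighted residual sum of squares for each size up to max.terms
           # (entries below nterms are zero, i.e. invalid)
```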
r None `mgus` Monoclonal gammopathy data
----------------------------------

### Description

Natural history of 241 subjects with monoclonal gammopathy of undetermined significance (MGUS).

### Usage

```
mgus
mgus1
data(cancer, package="survival")
```

### Format

mgus: A data frame with 241 observations on the following 12 variables.

| | | | --- | --- | | id: | subject id | | age: | age in years at the detection of MGUS | | sex: | `male` or `female` | | dxyr: | year of diagnosis | | pcdx: | for subjects who progress to a plasma cell malignancy | | | the subtype of malignancy: multiple myeloma (MM) is the | | | most common, followed by amyloidosis (AM), macroglobulinemia (MA), | | | and other lymphoproliferative disorders (LP) | | pctime: | days from MGUS until diagnosis of a plasma cell malignancy | | futime: | days from diagnosis to last follow-up | | death: | 1= follow-up is until death | | alb: | albumin level at MGUS diagnosis | | creat: | creatinine at MGUS diagnosis | | hgb: | hemoglobin at MGUS diagnosis | | mspike: | size of the monoclonal protein spike at diagnosis | | |

mgus1: The same data set in start,stop format. Contains the id, age, sex, and laboratory variable described above along with

| | | | --- | --- | | start, stop: | sequential intervals of time for each subject | | status: | =1 if the interval ends in an event | | event: | a factor containing the event type: censor, death, or plasma cell malignancy | | enum: | event number for each subject: 1 or 2 |

### Details

Plasma cells are responsible for manufacturing immunoglobulins, an important part of the immune defense. There are estimated to be about *10^6* different immunoglobulins in the circulation at any one time. When a patient has a plasma cell malignancy the distribution will become dominated by a single isotype, the product of the malignant clone, visible as a spike on a serum protein electrophoresis. Monoclonal gammopathy of undetermined significance (MGUS) is the presence of such a spike, but in a patient with no evidence of overt malignancy. This data set of 241 sequential subjects at Mayo Clinic was the groundbreaking study defining the natural history of such subjects. Due to the diligence of the principal investigator, no subjects have been lost to follow-up. Three subjects had MGUS detected on the day of death. In data set `mgus1` these subjects have the time to MGUS coded as .5 day before the death in order to avoid tied times. These data sets were updated in Jan 2015 to correct some small errors.

### Source

Mayo Clinic data courtesy of Dr. Robert Kyle.

### References

R Kyle, Benign monoclonal gammopathy – after 20 to 35 years of follow-up, Mayo Clinic Proc 1993; 68:26-36.

### Examples

```
# Create the competing risk curves for time to first of death or PCM
sfit <- survfit(Surv(start, stop, event) ~ sex, mgus1, id=id,
                subset=(enum==1))
print(sfit)  # the order of printout is the order in which they plot

plot(sfit, xscale=365.25, lty=c(2,2,1,1), col=c(1,2,1,2),
     xlab="Years after MGUS detection", ylab="Proportion")
legend(0, .8, c("Death/male", "Death/female", "PCM/male", "PCM/female"),
       lty=c(1,1,2,2), col=c(2,1,2,1), bty='n')

title("Curves for the first of plasma cell malignancy or death")
# The plot shows that males have a higher death rate than females (no
# surprise) but their rates of conversion to PCM are essentially the same.
```

r None `survreg` Regression for a Parametric Survival Model
-----------------------------------------------------

### Description

Fit a parametric survival regression model. These are location-scale models for an arbitrary transform of the time variable; the most common cases use a log transformation, leading to accelerated failure time models.

### Usage

```
survreg(formula, data, weights, subset, na.action, dist="weibull",
        init=NULL, scale=0, control, parms=NULL, model=FALSE, x=FALSE,
        y=TRUE, robust=FALSE, cluster, score=FALSE, ...)
```

### Arguments

| | | | --- | --- | | `formula` | a formula expression as for other regression models. The response is usually a survival object as returned by the `Surv` function. See the documentation for `Surv`, `lm` and `formula` for details. | | `data` | a data frame in which to interpret the variables named in the `formula`, `weights` or the `subset` arguments. | | `weights` | optional vector of case weights | | `subset` | subset of the observations to be used in the fit | | `na.action` | a missing-data filter function, applied to the model.frame, after any `subset` argument has been used. Default is `options()$na.action`. | | `dist` | assumed distribution for y variable. If the argument is a character string, then it is assumed to name an element from `<survreg.distributions>`. These include `"weibull"`, `"exponential"`, `"gaussian"`, `"logistic"`, `"lognormal"` and `"loglogistic"`. Otherwise, it is assumed to be a user defined list conforming to the format described in `<survreg.distributions>`. | | `parms` | a list of fixed parameters. For the t-distribution for instance this is the degrees of freedom; most of the distributions have no parameters. | | `init` | optional vector of initial values for the parameters. | | `scale` | optional fixed value for the scale. If set to <=0 then the scale is estimated. | | `control` | a list of control values, in the format produced by `<survreg.control>`. The default value is `survreg.control()` | | `model,x,y` | flags to control what is returned. If any of these is true, then the model frame, the model matrix, and/or the vector of response times will be returned as components of the final result, with the same names as the flag arguments. | | `score` | return the score vector. (This is expected to be zero upon successful convergence.) | | `robust` | Use robust sandwich error instead of the asymptotic formula. Defaults to TRUE if there is a `cluster` argument. | | `cluster` | Optional variable that identifies groups of subjects, used in computing the robust variance. Like `model` variables, this is searched for in the dataset pointed to by the `data` argument. | | `...` | other arguments which will be passed to `survreg.control`. |

### Details

All the distributions are cast into a location-scale framework, based on chapter 2.2 of Kalbfleisch and Prentice. The resulting parameterization of the distributions is sometimes (e.g. gaussian) identical to the usual form found in statistics textbooks, but other times (e.g. Weibull) it is not. See the book for detailed formulas.

### Value

an object of class `survreg` is returned.

### References

Kalbfleisch, J. D. and Prentice, R. L., The statistical analysis of failure time data, Wiley, 2002.
### See Also

`<survreg.object>`, `<survreg.distributions>`, `<pspline>`, `<frailty>`, `<ridge>`

### Examples

```
# Fit an exponential model: the two fits are the same
survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian, dist='weibull',
        scale=1)
survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian,
        dist="exponential")

#
# A model with different baseline survival shapes for two groups, i.e.,
# two different scale parameters
survreg(Surv(time, status) ~ ph.ecog + age + strata(sex), lung)

# There are multiple ways to parameterize a Weibull distribution. The survreg
# function embeds it in a general location-scale family, which is a
# different parameterization than the rweibull function, and often leads
# to confusion.
#   survreg's scale  =    1/(rweibull shape)
#   survreg's intercept = log(rweibull scale)
# For the log-likelihood all parameterizations lead to the same value.
y <- rweibull(1000, shape=2, scale=5)
survreg(Surv(y)~1, dist="weibull")

# Economists fit a model called `tobit regression', which is a standard
# linear regression with Gaussian errors, and left censored data.
tobinfit <- survreg(Surv(durable, durable>0, type='left') ~ age + quant,
                    data=tobin, dist='gaussian')
```

r None `vcov.coxph` Variance-covariance matrix
----------------------------------------

### Description

Extract and return the variance-covariance matrix.

### Usage

```
## S3 method for class 'coxph'
vcov(object, complete=TRUE, ...)
## S3 method for class 'survreg'
vcov(object, complete=TRUE, ...)
```

### Arguments

| | | | --- | --- | | `object` | a fitted model object | | `complete` | logical indicating if the full variance-covariance matrix should be returned. This has an effect only for an over-determined fit where some of the coefficients are undefined, and `coef(object)` contains corresponding NA values. If `complete=TRUE` the returned matrix will have a row/column for each coefficient, if FALSE it will contain rows/columns corresponding to the non-missing coefficients. The coef() function has a similar `complete` argument. | | `...` | additional arguments for method functions |

### Details

For the `coxph` and `survreg` functions the returned matrix is a particular generalized inverse: the row and column corresponding to any NA coefficients will be zero. This is a side effect of the generalized Cholesky decomposition used in the underlying computation.

### Value

a matrix

r None `survfit.coxph` Compute a Survival Curve from a Cox model
----------------------------------------------------------

### Description

Computes the predicted survivor function for a Cox proportional hazards model.

### Usage

```
## S3 method for class 'coxph'
survfit(formula, newdata, se.fit=TRUE, conf.int=.95, individual=FALSE,
        stype=2, ctype, conf.type=c("log","log-log","plain","none",
        "logit", "arcsin"), censor=TRUE, start.time, id, influence=FALSE,
        na.action=na.pass, type, ...)
```

### Arguments

| | | | --- | --- | | `formula` | A `coxph` object. | | `newdata` | a data frame with the same variable names as those that appear in the `coxph` formula. It is also valid to use a vector, if the data frame would consist of a single row. The curve(s) produced will be representative of a cohort whose covariates correspond to the values in `newdata`. Default is the mean of the covariates used in the `coxph` fit. | | `se.fit` | a logical value indicating whether standard errors should be computed. Default is `TRUE`. | | `conf.int` | the level for a two-sided confidence interval on the survival curve(s). Default is 0.95.
| | `individual` | deprecated argument, replaced by the general `id` | | `stype` | computation of the survival curve, 1 = direct, 2 = exponential of the cumulative hazard. | | `ctype` | whether the cumulative hazard computation should have a correction for ties, 1 = no, 2 = yes. | | `conf.type` | One of `"none"`, `"plain"`, `"log"` (the default), `"log-log"`, `"logit"` or `"arcsin"`. Only enough of the string to uniquely identify it is necessary. The first option causes confidence intervals not to be generated. The second causes the standard intervals `curve +- k *se(curve)`, where k is determined from `conf.int`. The log option calculates intervals based on the cumulative hazard or log(survival). The log-log option uses the log hazard or log(-log(survival)), and the logit log(survival/(1-survival)). | | `censor` | if FALSE time points at which there are no events (only censoring) are not included in the result. | | `id` | optional variable name of subject identifiers. If this is present, it will be searched for in the `newdata` data frame. Each group of rows in `newdata` with the same subject id represents the covariate path through time of a single subject, and the result will contain one curve per subject. If the `coxph` fit had strata then that must also be specified in `newdata`. If `id` is not present, then each individual row of `newdata` is presumed to represent a distinct subject. | | `start.time` | optional starting time, a single numeric value. If present the returned curve contains survival after `start.time` conditional on surviving to `start.time`. | | `influence` | option to return the influence values | | `na.action` | the na.action to be used on the newdata argument | | `type` | older argument that encompassed `stype` and `ctype`, now deprecated | | `...` | for future methods |

### Details

This routine produces Pr(state) curves based on a `coxph` model fit. For single state models it produces the single curve for S(t) = Pr(remain in initial state at time t), known as the survival curve; for multi-state models a matrix giving probabilities for all states. The `stype` argument states the type of estimate, and defaults to the exponential of the cumulative hazard, better known as the Breslow estimate. For a multi-state Cox model this involves the exponential of a matrix. The argument `stype=1` uses a non-exponential or ‘direct’ estimate. For a single endpoint coxph model the code evaluates the Kalbfleisch-Prentice estimate, and for a multi-state model it uses an analog of the Aalen-Johansen estimator. The latter approach is the default in the `mstate` package. The `ctype` option affects the estimated cumulative hazard, and if `stype=2` the estimated P(state) curves as well. If not present it is chosen so as to be concordant with the `ties` option in the `coxph` call. (For multistate `coxphms` objects only `ctype=1` is currently implemented.) Likewise the choice between a model based and robust variance estimate for the curve will mirror the choice made in the `coxph` call; any clustering is also inherited from the parent model. If the `newdata` argument is missing, then a curve is produced for a single "pseudo" subject with covariate values equal to the means of the data set. The resulting curve(s) almost never make sense, but the default remains due to an unwarranted attachment to the option shown by some users and by other packages. Two particularly egregious examples are factor variables and interactions.
Suppose one were studying interspecies transmission of a virus, and the data set has a factor variable with levels ("pig", "chicken") and about equal numbers of observations for each. The “mean” covariate level will be 0.5 – is this a flying pig? As to interactions, assume data with sex coded as 0/1, ages ranging from 50 to 80, and a model with age\*sex. The “mean” value for the age:sex interaction term will be about 30, a value that does not occur in the data. Users are strongly advised to use the newdata argument. For these reasons predictions from a multistate coxph model require the newdata argument.

When the original model contains time-dependent covariates, then the path of that covariate through time needs to be specified in order to obtain a predicted curve. This requires `newdata` to contain multiple lines for each hypothetical subject which give the covariate values, time interval, and strata for each line (a subject can change strata), along with an `id` variable which indicates which rows belong to each subject. The time interval must have the same (start, stop, status) variables as the original model: although the status variable is not used and thus can be set to a dummy value of 0 or 1, it is necessary for the response to be recognized as a `Surv` object. Last, although predictions with a time-dependent covariate path can be useful, it is very easy to create a prediction that is senseless. Users are encouraged to seek out a text that discusses the issue in detail.

When a model contains strata but no time-dependent covariates the user of this routine has a choice. If the `newdata` argument does not contain strata variables then the returned object will be a matrix of survival curves with one row for each stratum in the model and one column for each row in `newdata`. (This is the historical behavior of the routine.) If `newdata` does contain strata variables, then the result will contain one curve per row of `newdata`, based on the indicated stratum of the original model. In the rare case of a model with strata by covariate interactions the strata variable must be included in `newdata`; the routine does not allow it to be omitted (predictions become too confusing). (Note that the model Surv(time, status) ~ age\*strata(sex) expands internally to strata(sex) + age:sex; the sex variable is needed for the second term of the model.)

See `<survfit>` for more details about the counts (number of events, number at risk, etc.)

### Value

an object of class `"survfit"`. See `survfit.object` for details. Methods defined for survfit objects are `print`, `plot`, `lines`, and `points`.

### Notes

If the following pair of lines is used inside of another function then the `model=TRUE` argument must be added to the coxph call: `fit <- coxph(...); survfit(fit)`. This is a consequence of the non-standard evaluation process used by the `model.frame` function when a formula is involved.

### References

Fleming, T. H. and Harrington, D. P. (1984). Nonparametric estimation of the survival distribution in censored data. *Comm. in Statistics* **13**, 2469-86.

Kalbfleisch, J. D. and Prentice, R. L. (1980). *The Statistical Analysis of Failure Time Data.* New York:Wiley.

Link, C. L. (1984). Confidence intervals for the survival function using Cox's proportional hazards model with covariates. *Biometrics* **40**, 601-610.

Therneau T and Grambsch P (2000), Modeling Survival Data: Extending the Cox Model, Springer-Verlag.

Tsiatis, A. (1981).
A large sample study of the estimate for the integrated hazard function in Cox's regression model for survival data. *Annals of Statistics* **9**, 93-108.

### See Also

`<print.survfit>`, `<plot.survfit>`, `<lines.survfit>`, `<coxph>`, `[Surv](surv)`, `<strata>`.

r None `survobrien` O'Brien's Test for Association of a Single Variable with Survival
-------------------------------------------------------------------------------

### Description

Peter O'Brien's test for association of a single variable with survival. This test was proposed in Biometrics, June 1978.

### Usage

```
survobrien(formula, data, subset, na.action, transform)
```

### Arguments

| | | | --- | --- | | `formula` | a valid formula for a Cox model. | | `data` | a data.frame in which to interpret the variables named in the `formula`, or in the `subset` and the `weights` argument. | | `subset` | expression indicating which subset of the rows of data should be used in the fit. All observations are included by default. | | `na.action` | a missing-data filter function. This is applied to the model.frame after any subset argument has been used. Default is `options()$na.action`. | | `transform` | the transformation function to be applied at each time point. The default is O'Brien's suggestion logit(tr) where tr = (rank(x) - 1/2)/length(x) is the rank shifted to the range 0-1 and logit(x) = log(x/(1-x)) is the logit transform. |

### Value

a new data frame. The response variables will be column names returned by the `Surv` function, i.e., "time" and "status" for simple survival data, or "start", "stop", "status" for counting process data. Each individual event time is identified by the value of the variable `.strata.`. Other variables retain their original names. If a predictor variable is a factor or is protected with `I()`, it is retained as is. Other predictor variables have been replaced with time-dependent logit scores. The new data frame will have many more rows than the original data, approximately the original number of rows * number of deaths/2.

### Method

A time-dependent Cox model can now be fit to the new data. The univariate statistic, as originally proposed, is equivalent to single variable score tests from the time-dependent model. This equivalence is the rationale for using the time dependent model as a multivariate extension of the original paper. In O'Brien's method, the x variables are re-ranked at each death time. A simpler method, proposed by Prentice, ranks the data only once at the start. The results are usually similar.

### Note

A prior version of the routine returned new time variables rather than a strata variable. Unfortunately, that strategy does not work if the original formula has a strata statement. This new data set will be the same size, but the `coxph` routine will process it slightly faster.

### References

O'Brien, Peter, "A Nonparametric Test for Association with Censored Data", *Biometrics* 34: 243-250, 1978.

### See Also

`<survdiff>`

### Examples

```
xx <- survobrien(Surv(futime, fustat) ~ age + factor(rx) + I(ecog.ps),
                 data=ovarian)
coxph(Surv(time, status) ~ age + strata(.strata.), data=xx)
```
r None `strata` Identify Stratification Variables
-------------------------------------------

### Description

This is a special function used in the context of the Cox survival model. It identifies stratification variables when they appear on the right hand side of a formula.

### Usage

```
strata(..., na.group=FALSE, shortlabel, sep=', ')
```

### Arguments

| | | | --- | --- | | `...` | any number of variables. All must be the same length. | | `na.group` | a logical variable, if `TRUE`, then missing values are treated as a distinct level of each variable. | | `shortlabel` | if `TRUE` omit variable names from resulting factor labels. The default action is to omit the names if all of the arguments are factors, and none of them was named. | | `sep` | the character used to separate groups in the created label |

### Details

When used outside of a `coxph` formula the result of the function is essentially identical to the `interaction` function, though the labels from `strata` are often more verbose.

### Value

a new factor, whose levels are all possible combinations of the factors supplied as arguments.

### See Also

`<coxph>`, `[interaction](../../base/html/interaction)`

### Examples

```
a <- factor(rep(1:3,4), labels=c("low", "medium", "high"))
b <- factor(rep(1:4,3))
levels(strata(b))
levels(strata(a,b,shortlabel=TRUE))
coxph(Surv(futime, fustat) ~ age + strata(rx), data=ovarian)
```

r None `agreg.fit` Cox model fitting functions
----------------------------------------

### Description

These are the functions called by coxph that do the actual computation. In certain situations, e.g. a simulation, it may be advantageous to call these directly rather than the usual `coxph` call using a model formula.

### Usage

```
agreg.fit(x, y, strata, offset, init, control, weights, method,
rownames, resid=TRUE, nocenter=NULL)
coxph.fit(x, y, strata, offset, init, control, weights, method,
rownames, resid=TRUE, nocenter=NULL)
```

### Arguments

| | | | --- | --- | | `x` | Matrix of predictors. This should *not* include an intercept. | | `y` | a `Surv` object containing either 2 columns (coxph.fit) or 3 columns (agreg.fit). | | `strata` | a vector containing the stratification, or NULL | | `offset` | optional offset vector | | `init` | initial values for the coefficients | | `control` | the result of a call to `coxph.control` | | `weights` | optional vector of weights | | `method` | method for handling ties, one of "breslow" or "efron" | | `rownames` | this is only needed for a NULL model, in which case it contains the rownames (if any) of the original data. | | `resid` | compute and return residuals. | | `nocenter` | an optional list of values. Any column of the X matrix whose values lie strictly within that set will not be recentered. Note that the coxph function has (-1, 0, 1) as the default. |

### Details

This routine does no checking that arguments are the proper length or type. Only use it if you know what you are doing! The `resid` and `concordance` arguments will save some compute time for calling routines that only need the likelihood, the generation of a permutation distribution for instance.

### Value

a list containing results of the fit

### Author(s)

Terry Therneau

### See Also

`<coxph>`

r None `print.aareg` Print an aareg object
------------------------------------

### Description

Print out a fit of Aalen's additive regression model

### Usage

```
## S3 method for class 'aareg'
print(x, maxtime, test=c("aalen", "nrisk"), scale=1, ...)
```

### Arguments

| | | | --- | --- | | `x` | the result of a call to the `aareg` function | | `maxtime` | the upper time point to be used in the test for non-zero slope | | `test` | the weighting to be used in the test for non-zero slope. The default weights are based on the variance of each coefficient, as a function of time. The alternative weight is proportional to the number of subjects still at risk at each time point. | | `scale` | scales the coefficients. For some data sets, the coefficients of the Aalen model will be very small (10^-4); this simply multiplies the printed values by a constant, say 1e6, to make the printout easier to read. | | `...` | for future methods |

### Details

The estimated increments in the coefficient estimates can become quite unstable near the end of follow-up, due to the small number of observations still at risk in a data set. Thus, the test for slope will sometimes be more powerful if this last ‘tail’ is excluded.

### Value

the calling argument is returned.

### Side Effects

the results of the fit are displayed.

### References

Aalen, O.O. (1989). A linear regression model for the analysis of life times. Statistics in Medicine, 8:907-925.

### See Also

aareg

r None `print.summary.coxph` Print method for summary.coxph objects
-------------------------------------------------------------

### Description

Produces a printed summary of a fitted coxph model

### Usage

```
## S3 method for class 'summary.coxph'
print(x, digits=max(getOption("digits") - 3, 3),
      signif.stars = getOption("show.signif.stars"), expand=FALSE, ...)
```

### Arguments

| | | | --- | --- | | `x` | the result of a call to `summary.coxph` | | `digits` | significant digits to print | | `signif.stars` | Show stars to highlight small p-values | | `expand` | if the summary is for a multi-state coxph fit, print the results in an expanded format. | | `...` | For future methods |

r None `coxph.object` Proportional Hazards Regression Object
------------------------------------------------------

### Description

This class of objects is returned by the `coxph` class of functions to represent a fitted proportional hazards model. Objects of this class have methods for the functions `print`, `summary`, `residuals`, `predict` and `survfit`.

### Arguments

| | | | --- | --- | | `coefficients` | the vector of coefficients. If the model is over-determined there will be missing values in the vector corresponding to the redundant columns in the model matrix. | | `var` | the variance matrix of the coefficients. Rows and columns corresponding to any missing coefficients are set to zero. | | `naive.var` | this component will be present only if the `robust` option was true. If so, the `var` component will contain the robust estimate of variance, and this component will contain the ordinary estimate. | | `loglik` | a vector of length 2 containing the log-likelihood with the initial values and with the final values of the coefficients. | | `score` | value of the efficient score test, at the initial value of the coefficients. | | `rscore` | the robust log-rank statistic, if a robust variance was requested. | | `wald.test` | the Wald test of whether the final coefficients differ from the initial values. | | `iter` | number of iterations used. | | `linear.predictors` | the vector of linear predictors, one per subject. Note that this vector has been centered, see `predict.coxph` for more details. | | `residuals` | the martingale residuals. | | `means` | vector of column means of the X matrix.
Subsequent survival curves are adjusted to this value. | | `n` | the number of observations used in the fit. | | `nevent` | the number of events (usually deaths) used in the fit. | | `concordance` | a vector of length 6, containing the number of pairs that are concordant, discordant, tied on x, tied on y, and tied on both, followed by the standard error of the concordance statistic. | | `first` | the first derivative vector at the solution. | | `weights` | the vector of case weights, if one was used. | | `method` | the method used for handling tied survival times. | | `na.action` | the na.action attribute, if any, that was returned by the `na.action` routine. | | `timefix` | the value of the timefix option used in the fit | | `cmap` | the coefficient map, present for multi-state coxph fits. There is a column for each transition and a row for each coefficient, the value maps that transition/coefficient pair to a position in the coefficient vector. If a particular covariate is not used by the transition the matrix will contain a zero, if two transitions share a coefficient the matrix will contain repeats. | | `stratum_map` | stratum mapping, present for multi-state coxph fits. The row labeled ‘(Baseline)’ identifies transitions that do or do not share a baseline stratum. Further rows correspond to strata() terms in the model, each of which may apply to some transitions and not others. | | `...` | The object will also contain the following, for documentation see the `lm` object: `terms`, `assign`, `formula`, `call`, and, optionally, `x`, `y`, and/or `frame`. |

### Components

The following components must be included in a legitimate `coxph` object.

### See Also

`<coxph>`, `<coxph.detail>`, `<cox.zph>`, `<residuals.coxph>`, `<survfit>`, `<survreg>`.

r None `Survmethods` Methods for Surv objects
---------------------------------------

### Description

The list of methods that apply to `Surv` objects

### Usage

```
## S3 method for class 'Surv'
anyDuplicated(x, ...)
## S3 method for class 'Surv'
as.character(x, ...)
## S3 method for class 'Surv'
as.data.frame(x, ...)
## S3 method for class 'Surv'
as.integer(x, ...)
## S3 method for class 'Surv'
as.matrix(x, ...)
## S3 method for class 'Surv'
as.numeric(x, ...)
## S3 method for class 'Surv'
c(...)
## S3 method for class 'Surv'
duplicated(x, ...)
## S3 method for class 'Surv'
format(x, ...)
## S3 method for class 'Surv'
head(x, ...)
## S3 method for class 'Surv'
is.na(x)
## S3 method for class 'Surv'
length(x)
## S3 method for class 'Surv'
mean(x, ...)
## S3 method for class 'Surv'
median(x, ...)
## S3 method for class 'Surv'
names(x)
## S3 replacement method for class 'Surv'
names(x) <- value
## S3 method for class 'Surv'
quantile(x, probs, na.rm=FALSE, ...)
## S3 method for class 'Surv'
plot(x, ...)
## S3 method for class 'Surv'
rep(x, ...)
## S3 method for class 'Surv'
rep.int(x, ...)
## S3 method for class 'Surv'
rep_len(x, ...)
## S3 method for class 'Surv'
rev(x)
## S3 method for class 'Surv'
t(x)
## S3 method for class 'Surv'
tail(x, ...)
## S3 method for class 'Surv'
unique(x, ...)
```

### Arguments

| | | | --- | --- | | `x` | a `Surv` object | | `probs` | a vector of probabilities | | `na.rm` | remove missing values from the calculation | | `value` | a character vector of up to the same length as `x`, or `NULL` | | `...` | other arguments to the method |

### Details

These functions extend the standard methods to `Surv` objects.
The arguments and results from these are mostly as expected, with the following further details:

* The `as.character` function uses "5+" for right censored at time 5, "5-" for left censored at time 5, "[2,7]" for an observation that was interval censored between 2 and 7, "(1,6]" for counting process data denoting an observation which was at risk from time 1 to 6, with an event at time 6, and "(1,6+]" for an observation over the same interval but not ending with an event. For a multi-state survival object the type of event is appended to the event time using ":type".
* The `print` and `format` methods make use of `as.character`.
* The `as.numeric` and `as.integer` methods perform these actions on the survival times, but do not affect the censoring indicator.
* The `as.matrix` and `t` methods return a matrix.
* The `length` of a `Surv` object is the number of survival times it contains, not the number of items required to encode it, e.g., `x <- Surv(1:4, 5:8, c(1,0,1,0)); length(x)` has a value of 4. Likewise `names(x)` will be NULL or a vector of length 4. (For technical reasons, any names are actually stored in the `rownames` attribute of the object.)
* For a multi-state survival object `levels` returns the names of the endpoints, otherwise it is NULL.
* The `median`, `quantile` and `plot` methods first construct a survival curve using `survfit`, then apply the appropriate method to that curve.
* The concatenation method `c()` is asymmetric; its first argument determines the execution path. For instance `c(Surv(1:4), Surv(5:6))` will concatenate the two objects, `c(Surv(1:4), 5:6)` will give an error, and `c(5:6, Surv(1:4))` is equivalent to `c(5:6, as.vector(Surv(1:4)))`.

### See Also

`[Surv](surv)`

r None `cch` Fits proportional hazards regression model to case-cohort data
---------------------------------------------------------------------

### Description

Returns estimates and standard errors from relative risk regression fit to data from case-cohort studies. A choice is available among the Prentice, Self-Prentice and Lin-Ying methods for unstratified data. For stratified data the choice is between Borgan I, a generalization of the Self-Prentice estimator for unstratified case-cohort data, and Borgan II, a generalization of the Lin-Ying estimator.

### Usage

```
cch(formula, data, subcoh, id, stratum=NULL, cohort.size,
    method =c("Prentice","SelfPrentice","LinYing","I.Borgan","II.Borgan"),
    robust=FALSE)
```

### Arguments

| | | | --- | --- | | `formula` | A formula object that must have a `[Surv](surv)` object as the response. The Surv object must be of type `"right"`, or of type `"counting"`. | | `subcoh` | Vector of indicators for subjects sampled as part of the sub-cohort. Code `1` or `TRUE` for members of the sub-cohort, `0` or `FALSE` for others. If `data` is a data frame then `subcoh` may be a one-sided formula. | | `id` | Vector of unique identifiers, or formula specifying such a vector. | | `stratum` | A vector of stratum indicators or a formula specifying such a vector | | `cohort.size` | Vector with size of each stratum original cohort from which subcohort was sampled | | `data` | An optional data frame in which to interpret the variables occurring in the formula. | | `method` | Three procedures are available. The default method is "Prentice", with options for "SelfPrentice" or "LinYing".
| | `robust` | For `"LinYing"` only, if `robust=TRUE`, use design-based standard errors even for phase I |

### Details

Implements methods for case-cohort data analysis described by Therneau and Li (1999). The three methods differ in the choice of "risk sets" used to compare the covariate values of the failure with those of others at risk at the time of failure. "Prentice" uses the sub-cohort members "at risk" plus the failure if that occurs outside the sub-cohort and is score unbiased. "SelfPren" (Self-Prentice) uses just the sub-cohort members "at risk". These two have the same asymptotic variance-covariance matrix. "LinYing" (Lin-Ying) uses all members of the sub-cohort and all failures outside the sub-cohort who are "at risk". The methods also differ in the weights given to different score contributions. The `data` argument must not have missing values for any variables in the model. There must not be any censored observations outside the subcohort.

### Value

An object of class "cch" incorporating a list of estimated regression coefficients and two estimates of their asymptotic variance-covariance matrix.

| | | | --- | --- | | `coef` | regression coefficients. | | `naive.var` | Self-Prentice model based variance-covariance matrix. | | `var` | Lin-Ying empirical variance-covariance matrix. |

### Author(s)

Norman Breslow, modified by Thomas Lumley

### References

Prentice, RL (1986). A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika 73: 1–11.

Self, S and Prentice, RL (1988). Asymptotic distribution theory and efficiency results for case-cohort studies. Annals of Statistics 16: 64–81.

Lin, DY and Ying, Z (1993). Cox regression with incomplete covariate measurements. Journal of the American Statistical Association 88: 1341–1349.

Barlow, WE (1994). Robust variance estimation for the case-cohort design. Biometrics 50: 1064–1072.

Therneau, TM and Li, H (1999). Computing the Cox model for case-cohort designs. Lifetime Data Analysis 5: 99–112.

Borgan, Ø, Langholz, B, Samuelsen, SO, Goldstein, L and Pogoda, J (2000). Exposure stratified case-cohort designs. Lifetime Data Analysis 6: 39–58.

### See Also

`twophase` and `svycoxph` in the "survey" package for more general two-phase designs. <http://faculty.washington.edu/tlumley/survey/>

### Examples

```
## The complete Wilms Tumor Data
## (Breslow and Chatterjee, Applied Statistics, 1999)
## subcohort selected by simple random sampling.
##
subcoh <- nwtco$in.subcohort
selccoh <- with(nwtco, rel==1|subcoh==1)
ccoh.data <- nwtco[selccoh,]
ccoh.data$subcohort <- subcoh[selccoh]
## central-lab histology
ccoh.data$histol <- factor(ccoh.data$histol,labels=c("FH","UH"))
## tumour stage
ccoh.data$stage <- factor(ccoh.data$stage,labels=c("I","II","III","IV"))
ccoh.data$age <- ccoh.data$age/12 # Age in years
##
## Standard case-cohort analysis: simple random subcohort
##
fit.ccP <- cch(Surv(edrel, rel) ~ stage + histol + age, data =ccoh.data,
   subcoh = ~subcohort, id=~seqno, cohort.size=4028)
fit.ccP
fit.ccSP <- cch(Surv(edrel, rel) ~ stage + histol + age, data =ccoh.data,
   subcoh = ~subcohort, id=~seqno, cohort.size=4028, method="SelfPren")
summary(fit.ccSP)
##
## (post-)stratified on instit
##
stratsizes<-table(nwtco$instit)
fit.BI<- cch(Surv(edrel, rel) ~ stage + histol + age, data =ccoh.data,
   subcoh = ~subcohort, id=~seqno, stratum=~instit, cohort.size=stratsizes,
   method="I.Borgan")
summary(fit.BI)
```

r None `summary.aareg` Summarize an aareg fit --------------------------------------- ### Description Creates the overall test statistics for an Aalen additive regression model ### Usage

```
## S3 method for class 'aareg'
summary(object, maxtime, test=c("aalen", "nrisk"), scale=1,...)
```

### Arguments | | | | --- | --- | | `object` | the result of a call to the `aareg` function | | `maxtime` | truncate the input to the model at time "maxtime" | | `test` | the relative time weights that will be used to compute the test | | `scale` | scales the coefficients. For some data sets, the coefficients of the Aalen model will be very small (1e-4); this simply multiplies the printed values by a constant, say 1e6, to make the printout easier to read. | | `...` | for future methods | ### Details It is not uncommon for the very right-hand tail of the plot to have large outlying values, particularly for the standard error. The `maxtime` parameter can then be used to truncate the range so as to avoid these. This gives an updated value for the test statistics, without refitting the model. The slope is based on a weighted linear regression to the cumulative coefficient plot, and may be a useful measure of the overall size of the effect. For instance, when two models include a common variable such as "age", this may help to assess how much the fit changed due to the other variables, in lieu of overlaying the two plots. (Of course the plots are often highly non-linear, so it is only a rough substitute). The slope is not directly related to the test statistic, as the latter is invariant to any monotone transformation of time.
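The cumulative coefficient plot that this slope summarizes can also be reconstructed by hand. A minimal sketch, assuming (as described in the `aareg` help page) that the fit stores the per-event coefficient increments in a `coefficient` matrix and the event times in `times`:

```
## sketch only: rebuild the cumulative coefficient plot underlying the slope
afit <- aareg(Surv(time, status) ~ age + sex + ph.ecog, data = lung)
cumcoef <- apply(afit$coefficient, 2, cumsum)  # cumulative regression effects
matplot(afit$times, cumcoef, type = "s",
        xlab = "Time", ylab = "Cumulative coefficient")
```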
### Value a list is returned with the following components | | | | --- | --- | | `table` | a matrix with rows for the intercept and each covariate, and columns giving a slope estimate, the test statistic, its standard error, the z-score and a p-value | | `test` | the time weighting used for computing the test statistics | | `test.statistic` | the vector of test statistics | | `test.var` | the model based variance matrix for the test statistic | | `test.var2` | optionally, a robust variance matrix for the test statistic | | `chisq` | the overall test (ignoring the intercept term) for significance of any variable | | `n` | a vector containing the number of observations, the number of unique death times used in the computation, and the total number of unique death times | ### See Also aareg, plot.aareg ### Examples

```
afit <- aareg(Surv(time, status) ~ age + sex + ph.ecog, data=lung,
              dfbeta=TRUE)
summary(afit)
## Not run:
              slope      test  se(test)  robust se      z        p
Intercept  5.05e-03       1.9      1.54       1.55   1.23 0.219000
age        4.01e-05     108.0    109.00     106.00   1.02 0.307000
sex       -3.16e-03     -19.5      5.90       5.95  -3.28 0.001030
ph.ecog    3.01e-03      33.2      9.18       9.17   3.62 0.000299

Chisq=22.84 on 3 df, p=4.4e-05; test weights=aalen

## End(Not run)
summary(afit, maxtime=600)
## Not run:
              slope      test  se(test)  robust se      z        p
Intercept  4.16e-03      2.13      1.48       1.47  1.450 0.146000
age        2.82e-05     85.80    106.00     100.00  0.857 0.392000
sex       -2.54e-03    -20.60      5.61       5.63 -3.660 0.000256
ph.ecog    2.47e-03     31.60      8.91       8.67  3.640 0.000271

Chisq=27.08 on 3 df, p=5.7e-06; test weights=aalen

## End(Not run)
```
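The returned list can also be used programmatically rather than just printed. A minimal sketch, continuing from the `afit` object created in the example above and using the components named in the Value section:

```
ss <- summary(afit, maxtime = 600)
ss$chisq   # the overall test, ignoring the intercept
ss$table   # slope, test statistic, standard errors, z and p per row
```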
r None `mgus2` Monoclonal gammopathy data ----------------------------------- ### Description Natural history of 1384 sequential patients with monoclonal gammopathy of undetermined significance (MGUS). This is a superset of the `mgus` data, at a later point in the accrual process. ### Usage

```
mgus2
data(cancer, package="survival")
```

### Format A data frame with 1384 observations on the following 10 variables. `id` subject identifier `age` age at diagnosis, in years `sex` a factor with levels `F` `M` `dxyr` year of diagnosis `hgb` hemoglobin `creat` creatinine `mspike` size of the monoclonal serum spike `ptime` time until progression to a plasma cell malignancy (PCM) or last contact, in months `pstat` occurrence of PCM: 0=no, 1=yes `futime` time until death or last contact, in months `death` occurrence of death: 0=no, 1=yes ### Details This is an extension of the study found in the `mgus` data set, containing enrollment through 1994 and follow-up through 1999. ### Source Mayo Clinic data courtesy of Dr. Robert Kyle. All patient identifiers have been removed, age rounded to the nearest year, and follow-up times rounded to the nearest month. ### References R. Kyle, T. Therneau, V. Rajkumar, J. Offord, D. Larson, M. Plevak, and L. J. Melton III, A long-term study of prognosis in monoclonal gammopathy of undetermined significance. New Engl J Med, 346:564-569 (2002). r None `survfit0` Convert the format of a survfit object. --------------------------------------------------- ### Description Add the point for a starting time (time 0) to a survfit object's elements. This is useful for plotting. ### Usage

```
survfit0(x, start.time=0)
```

### Arguments | | | | --- | --- | | `x` | a survfit object | | `start.time` | the desired starting time; see details below. | ### Details Survival curves are traditionally plotted forward from time 0, but since the true starting time is not known as a part of the data, the `survfit` routine does not include a time 0 value in the resulting object. Someone might look at cumulative mortgage defaults versus calendar year, for instance, with the ‘time’ value a Date object. The plotted curve probably should not start at 0 = 1970/01/01. Due to this uncertainty, it was decided not to include a "time 0" as part of a survfit object. Whether that (1989) decision was wise or foolish, it is now far too late to change it. (We tried it once as a trial, resulting in over 20 errors in the survival test suite. We extrapolate that it might break 1/2 - 2/3 of the other CRAN packages that depend on survival, if made a default.) If the original `survfit` call included a `start.time` argument, that value is of course retained. One problem with this choice is that some functions must choose a starting point, plots for example. This utility function is used by `plot.survfit` and `summary.survfit` to do so, adding a new time point at the front of each curve in a consistent way: the optional argument to the `survfit0` function as the first choice (if supplied), then the user's `start.time` if present, otherwise `min(0, x$time)`. The resulting object is *not* guaranteed to work with functions that further manipulate a `survfit` object such as subscripting, aggregation, pseudovalues, etc. (remember the 20 errors). Rather it is intended as a penultimate step, most often when creating a plot. ### Value a reformulated version of the object with an initial data point at `start.time` added.
The `time`, `surv`, `pstate`, `cumhaz`, `std.err`, `std.cumhaz` and other components will all be aligned, so as to make plots and summaries easier to produce. r None `survreg.control` Package options for survreg and coxph -------------------------------------------------------- ### Description This function checks and packages the fitting options for `<survreg>` ### Usage

```
survreg.control(maxiter=30, rel.tolerance=1e-09, toler.chol=1e-10,
                iter.max, debug=0, outer.max=10)
```

### Arguments | | | | --- | --- | | `maxiter` | maximum number of iterations | | `rel.tolerance` | relative tolerance to declare convergence | | `toler.chol` | Tolerance to declare Cholesky decomposition singular | | `iter.max` | same as `maxiter` | | `debug` | print debugging information | | `outer.max` | maximum number of outer iterations for choosing penalty parameters | ### Value A list with the same elements as the input ### See Also `<survreg>` r None `ovarian` Ovarian Cancer Survival Data --------------------------------------- ### Description Survival in a randomised trial comparing two treatments for ovarian cancer ### Usage

```
ovarian
data(cancer, package="survival")
```

### Format | | | | --- | --- | | futime: | survival or censoring time | | fustat: | censoring status | | age: | in years | | resid.ds: | residual disease present (1=no,2=yes) | | rx: | treatment group | | ecog.ps: | ECOG performance status (1 is better, see reference) | | | ### Source Terry Therneau ### References Edmunson, J.H., Fleming, T.R., Decker, D.G., Malkasian, G.D., Jefferies, J.A., Webb, M.J., and Kvols, L.K., Different Chemotherapeutic Sensitivities and Host Factors Affecting Prognosis in Advanced Ovarian Carcinoma vs. Minimal Residual Disease. Cancer Treatment Reports, 63:241-47, 1979. r None `pspline` Smoothing splines using a pspline basis -------------------------------------------------- ### Description Specifies a penalised spline basis for the predictor. This is done by fitting a comparatively small set of splines and penalising the integrated second derivative. Traditional smoothing splines use one basis per observation, but several authors have pointed out that the final results of the fit are indistinguishable for any number of basis functions greater than about 2-3 times the degrees of freedom. Eilers and Marx point out that if the basis functions are evenly spaced, this leads to significant computational simplification; they refer to the result as a p-spline. ### Usage

```
pspline(x, df=4, theta, nterm=2.5 * df, degree=3, eps=0.1, method,
        Boundary.knots=range(x), intercept=FALSE, penalty=TRUE, combine, ...)

psplineinverse(x)
```

### Arguments | | | | --- | --- | | `x` | for pspline: a covariate vector. The function does not apply to factor variables. For psplineinverse x will be the result of a pspline call. | | `df` | the desired degrees of freedom. One of the arguments `df` or `theta` must be given, but not both. If `df=0`, then the AIC = (loglik -df) is used to choose an "optimal" degrees of freedom. If AIC is chosen, then an optional argument ‘caic=T’ can be used to specify the corrected AIC of Hurvich et al. | | `theta` | roughness penalty for the fit. It is a monotone function of the degrees of freedom, with theta=1 corresponding to a linear fit and theta=0 to an unconstrained fit of nterm degrees of freedom. | | `nterm` | number of splines in the basis | | `degree` | degree of splines | | `eps` | accuracy for `df` | | `method` | the method for choosing the tuning parameter `theta`.
If theta is given, then 'fixed' is assumed. If the degrees of freedom is given, then 'df' is assumed. If method='aic' then the degrees of freedom is chosen automatically using Akaike's information criterion. | | `...` | optional arguments to the control function | | `Boundary.knots` | the spline is linear beyond the boundary knots. These default to the range of the data. | | `intercept` | if TRUE, the basis functions include the intercept. | | `penalty` | if FALSE a large number of attributes having to do with penalized fits are excluded. This is useful to create a pspline basis matrix for other uses. | | `combine` | an optional vector of increasing integers. If two adjacent values of `combine` are equal, then the corresponding coefficients of the fit are forced to be equal. This is useful for monotone fits, see the vignette for more details. | ### Value Object of class `pspline, coxph.penalty` containing the spline basis, with the appropriate attributes to be recognized as a penalized term by the coxph or survreg functions. For psplineinverse the original x vector is reconstructed. ### References Eilers, Paul H. and Marx, Brian D. (1996). Flexible smoothing with B-splines and penalties. Statistical Science, 11, 89-121. Hurvich, C.M. and Simonoff, J.S. and Tsai, Chih-Ling (1998). Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion, JRSSB, volume 60, 271–293. ### See Also `<coxph>`, `<survreg>`, `<ridge>`, `<frailty>` ### Examples

```
lfit6 <- survreg(Surv(time, status)~pspline(age, df=2), lung)
plot(lung$age, predict(lfit6), xlab='Age', ylab="Spline prediction")
title("Cancer Data")
fit0 <- coxph(Surv(time, status) ~ ph.ecog + age, lung)
fit1 <- coxph(Surv(time, status) ~ ph.ecog + pspline(age,3), lung)
fit3 <- coxph(Surv(time, status) ~ ph.ecog + pspline(age,8), lung)
fit0
fit1
fit3
```

r None `rats2` Rat data from Gail et al. ---------------------------------- ### Description 48 rats were injected with a carcinogen, and then randomized to either drug or placebo. The number of tumors ranges from 0 to 13; all rats were censored at 6 months after randomization. ### Usage

```
rats2
data(cancer, package="survival")
```

### Format | | | | --- | --- | | rat: | id | | trt: | treatment (1=drug, 0=control) | | observation: | within rat | | start: | entry time | | stop: | exit time | | status: | event status, 1=tumor, 0=censored | | | ### Source MH Gail, TJ Santner, and CC Brown (1980), An analysis of comparative carcinogenesis experiments based on multiple times to tumor. *Biometrics* **36**, 255–266. r None `Surv2` Create a survival object --------------------------------- ### Description Create a survival object from a timeline style data set. This will almost always be the response variable in a formula. ### Usage

```
Surv2(time, event, repeated=FALSE)
```

### Arguments | | | | --- | --- | | `time` | a timeline variable, such as age, time from enrollment, date, etc. | | `event` | the outcome at that time. This can be a 0/1 variable, TRUE/FALSE, or a factor. If the latter, the first level of the factor corresponds to ‘no event was observed at this time’. | | `repeated` | if the same level of the outcome repeats, without an intervening event of another type, should this be treated as a new event? | ### Details This function is still experimental. When used in a `coxph` or `survfit` model, Surv2 acts as a trigger to internally convert a timeline style data set into counting process style data, which is then acted on by the routine.
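As an illustration of the timeline form, the sketch below builds a small hypothetical data set with one row per assessment time; the variable names and the exact call are invented for illustration, and since the function is experimental the interface may change:

```
## hypothetical timeline-style data: one row per assessment time, with
## the outcome observed at that time (0 = no event)
tldata <- data.frame(id    = c(1,1,1, 2,2, 3,3),
                     day   = c(0,100,200, 0,150, 0,90),
                     event = c(0,0,1, 0,1, 0,0),
                     sex   = c(0,0,0, 1,1, 0,0))
## sketch of the intended usage: Surv2 triggers the internal conversion
## to counting process form, with id linking the rows for each subject
fit <- coxph(Surv2(day, event) ~ sex, data = tldata, id = id)
```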
The `repeated` argument controls how repeated instances of the same event code are treated. If TRUE, they are treated as new events; an example where this might be desired is repeated infections in a subject. If FALSE, then repeats are not a new event. An example would be a data set where we wanted to use diabetes, say, as an endpoint, but this is repeated at each medical visit. ### Value An object of class `Surv2`. There are methods for `print`, `is.na` and subscripting. ### See Also `[Surv2data](surv2data)`, `<coxph>`, `<survfit>` r None `nwtco` Data from the National Wilm's Tumor Study -------------------------------------------------- ### Description Measurement error example. Tumor histology predicts survival, but prediction is stronger with central lab histology than with the local institution determination. ### Usage

```
nwtco
data(nwtco, package="survival")
```

### Format A data frame with 4028 observations on the following 9 variables. `seqno` id number `instit` Histology from local institution `histol` Histology from central lab `stage` Disease stage `study` study `rel` indicator for relapse `edrel` time to relapse `age` age in months `in.subcohort` Included in the subcohort for the example in the paper ### References NE Breslow and N Chatterjee (1999), Design and analysis of two-phase studies with binary outcome applied to Wilms tumour prognosis. *Applied Statistics* **48**, 457–68. ### Examples

```
with(nwtco, table(instit,histol))
anova(coxph(Surv(edrel,rel)~histol+instit,data=nwtco))
anova(coxph(Surv(edrel,rel)~instit+histol,data=nwtco))
```

r None `concordancefit` Compute the concordance ----------------------------------------- ### Description This is the working routine behind the `concordance` function. It is not meant to be called by users, but is available for other packages to use. Input arguments, for instance, are assumed to all be the correct length and type, and missing values are not allowed: the calling routine is responsible for these things. ### Usage

```
concordancefit(y, x, strata, weights, ymin = NULL, ymax = NULL,
    timewt = c("n", "S", "S/G", "n/G", "n/G2", "I"), cluster, influence =0,
    ranks = FALSE, reverse = FALSE, timefix = TRUE, keepstrata=10)
```

### Arguments | | | | --- | --- | | `y` | the response. It can be numeric, factor, or a Surv object | | `x` | the predictor, a numeric vector | | `strata` | optional numeric vector that stratifies the data | | `weights` | optional vector of case weights | | `ymin, ymax` | restrict the comparison to response values in this range | | `timewt` | the time weighting to be used | | `cluster, influence, ranks, reverse, timefix` | see the help for the `concordance` function | | `keepstrata` | either TRUE, FALSE, or an integer value. Computations are always done within stratum, then added. If the total number of strata is greater than `keepstrata`, or `keepstrata=FALSE`, those subtotals are not kept in the output. | ### Value a list containing the results ### Author(s) Terry Therneau ### See Also `<concordance>` r None `coxph` Fit Proportional Hazards Regression Model -------------------------------------------------- ### Description Fits a Cox proportional hazards regression model. Time dependent variables, time dependent strata, multiple events per subject, and other extensions are incorporated using the counting process formulation of Andersen and Gill.
### Usage

```
coxph(formula, data=, weights, subset, na.action, init, control,
      ties=c("efron","breslow","exact"), singular.ok=TRUE, robust,
      model=FALSE, x=FALSE, y=TRUE, tt, method=ties, id, cluster,
      istate, statedata, nocenter=c(-1, 0, 1), ...)
```

### Arguments | | | | --- | --- | | `formula` | a formula object, with the response on the left of a `~` operator, and the terms on the right. The response must be a survival object as returned by the `Surv` function. | | `data` | a data.frame in which to interpret the variables named in the `formula`, or in the `subset` and the `weights` argument. | | `weights` | vector of case weights, see the note below. For a thorough discussion of these see the book by Therneau and Grambsch. | | `subset` | expression indicating which subset of the rows of data should be used in the fit. All observations are included by default. | | `na.action` | a missing-data filter function. This is applied to the model.frame after any subset argument has been used. Default is `options()$na.action`. | | `init` | vector of initial values of the iteration. Default initial value is zero for all variables. | | `control` | Object of class `<coxph.control>` specifying iteration limit and other control options. Default is `coxph.control(...)`. | | `ties` | a character string specifying the method for tie handling. If there are no tied death times all the methods are equivalent. Nearly all Cox regression programs use the Breslow method by default, but not this one. The Efron approximation is used as the default here; it is more accurate when dealing with tied death times, and is as efficient computationally. The “exact partial likelihood” is equivalent to a conditional logistic model, and is appropriate when the times are a small set of discrete values. See further below. | | `singular.ok` | logical value indicating how to handle collinearity in the model matrix. If `TRUE`, the program will automatically skip over columns of the X matrix that are linear combinations of earlier columns. In this case the coefficients for such columns will be NA, and the variance matrix will contain zeros. For ancillary calculations, such as the linear predictor, the missing coefficients are treated as zeros. | | `robust` | should a robust variance be computed. The default is TRUE if: there is a `cluster` argument, there are case weights that are not 0 or 1, or there are `id` values with more than one event. | | `id` | optional variable name that identifies subjects. Only necessary when a subject can have multiple rows in the data, and there is more than one event type. This variable will normally be found in `data`. | | `cluster` | optional variable which clusters the observations, for the purposes of a robust variance. If present, it implies `robust`. This variable will normally be found in `data`. | | `istate` | optional variable giving the current state at the start of each interval. This variable will normally be found in `data`. | | `statedata` | optional data set used to describe multistate models. | | `model` | logical value: if `TRUE`, the model frame is returned in component `model`. | | `x` | logical value: if `TRUE`, the x matrix is returned in component `x`. | | `y` | logical value: if `TRUE`, the response vector is returned in component `y`. | | `tt` | optional list of time-transform functions. | | `method` | alternate name for the `ties` argument.
| | `nocenter` | columns of the X matrix whose values lie strictly within this set are not recentered | | `...` | Other arguments will be passed to `<coxph.control>` | ### Details The proportional hazards model is usually expressed in terms of a single survival time value for each person, with possible censoring. Andersen and Gill reformulated the same problem as a counting process; as time marches onward we observe the events for a subject, rather like watching a Geiger counter. The data for a subject is presented as multiple rows or "observations", each of which applies to an interval of observation (start, stop]. The routine internally scales and centers data to avoid overflow in the argument to the exponential function. These actions do not change the result, but lead to more numerical stability. Any column of the X matrix whose values lie within the `nocenter` list is not recentered. The practical consequence of the default is to not recenter dummy variables corresponding to factors. However, arguments to offset are not scaled since there are situations where a large offset value is used purposefully. In general, however, users should avoid very large numeric values for an offset due to possible loss of precision in the estimates. ### Value an object of class `coxph` representing the fit. See `coxph.object` for details. ### Side Effects Depending on the call, the `predict`, `residuals`, and `survfit` routines may need to reconstruct the x matrix created by `coxph`. It is possible for this to fail, as in the example below in which the predict function is unable to find `tform`.

```
tfun <- function(tform) coxph(tform, data=lung)
fit <- tfun(Surv(time, status) ~ age)
predict(fit)
```

In such a case add the `model=TRUE` option to the `coxph` call to obviate the need for reconstruction, at the expense of a larger `fit` object. ### Case weights Case weights are treated as replication weights, i.e., a case weight of 2 is equivalent to having 2 copies of that subject's observation. When computers were much smaller, grouping like subjects together was a common trick used to conserve memory. Setting all weights to 2 for instance will give the same coefficient estimate but halve the variance. When the Efron approximation for ties (default) is employed replication of the data will not give exactly the same coefficients as the weights option, and in this case the weighted fit is arguably the correct one. When the model includes a `cluster` term or the `robust=TRUE` option the computed variance treats any weights as sampling weights; setting all weights to 2 will in this case give the same variance as weights of 1. ### Special terms There are three special terms that may be used in the model equation. A `strata` term identifies a stratified Cox model; separate baseline hazard functions are fit for each stratum. The `cluster` term is used to compute a robust variance for the model. The term `+ cluster(id)` where each value of `id` is unique is equivalent to specifying the `robust=TRUE` argument. If the `id` variable is not unique, it is assumed that it identifies clusters of correlated observations. The robust estimate arises from many different arguments and thus has had many labels. It is variously known as the Huber sandwich estimator, White's estimate (linear models/econometrics), the Horvitz-Thompson estimate (survey sampling), the working independence variance (generalized estimating equations), the infinitesimal jackknife, and the Wei, Lin, Weissfeld (WLW) estimate.
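For example, the two calls below request the same robust variance for the `lung` data, once via the special term and once via the `cluster` argument; a minimal sketch:

```
# robust variance, clustered on the enrolling institution: the formula special...
fit1 <- coxph(Surv(time, status) ~ age + cluster(inst), data = lung)
# ...and the equivalent form using the cluster argument
fit2 <- coxph(Surv(time, status) ~ age, data = lung, cluster = inst)
```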
A time-transform term allows variables to vary dynamically in time. In this case the `tt` argument will be a function or a list of functions (if there is more than one tt() term in the model) giving the appropriate transform. See the examples below. One user mistake that has recently arisen is to slavishly follow the advice of some coding guides and prepend `survival::` onto everything, including the special terms, e.g., `survival::coxph(survival::Surv(time, status) ~ age + survival::cluster(inst), data=lung)`. First, this is unnecessary: arguments within the `coxph` call will be evaluated within the survival namespace, so another package's Surv or cluster function would not be noticed. (Full qualification of the coxph call itself may be protective, however.) Second, and more importantly, the call just above will not give the correct answer. The specials are recognized by their name, and `survival::cluster` is not the same as `cluster`; the above model would treat `inst` as an ordinary variable. A similar issue arises from using `stats::offset` as a term, in either survival or glm models. ### Convergence In certain data cases the actual MLE estimate of a coefficient is infinity, e.g., a dichotomous variable where one of the groups has no events. When this happens the associated coefficient grows at a steady pace and a race condition will exist in the fitting routine: either the log likelihood converges, the information matrix becomes effectively singular, an argument to exp becomes too large for the computer hardware, or the maximum number of iterations is exceeded. (Most often number 1 is the first to occur.) The routine attempts to detect when this has happened, not always successfully. The primary consequence for the user is that the Wald statistic = coefficient/se(coefficient) is not valid in this case and should be ignored; the likelihood ratio and score tests remain valid however. ### Ties There are three possible choices for handling tied event times. The Breslow approximation is the easiest to program and hence became the first option coded for almost all computer routines. It then ended up as the default option when other options were added in order to "maintain backwards compatibility". The Efron option is more accurate if there are a large number of ties, and it is the default option here. In practice the number of ties is usually small, in which case all the methods are statistically indistinguishable. Using the "exact partial likelihood" approach the Cox partial likelihood is equivalent to that for matched logistic regression. (The `clogit` function uses the `coxph` code to do the fit.) It is technically appropriate when the time scale is discrete and has only a few unique values, and some packages refer to this as the "discrete" option. There is also an "exact marginal likelihood" due to Prentice which is not implemented here. The calculation of the exact partial likelihood is numerically intense. Say for instance 180 subjects are at risk on day 7 of which 15 had an event; then the code needs to compute sums over all 180-choose-15 > 10^21 different possible subsets of size 15. There is an efficient recursive algorithm for this task, but even with this the computation can be insufferably long. With (start, stop) data it is much worse since the recursion needs to start anew for each unique start time. A separate issue is that of artificial ties due to floating-point imprecision. See the vignette on this topic for a full explanation or the `timefix` option in `coxph.control`.
Users may need to add `timefix=FALSE` for simulated data sets. ### Penalized regression `coxph` can maximise a penalised partial likelihood with arbitrary user-defined penalty. Supplied penalty functions include ridge regression (<ridge>), smoothing splines (<pspline>), and frailty models (<frailty>). ### References Andersen, P. and Gill, R. (1982). Cox's regression model for counting processes, a large sample study. *Annals of Statistics* **10**, 1100-1120. Therneau, T., Grambsch, P., Modeling Survival Data: Extending the Cox Model. Springer-Verlag, 2000. ### See Also `<coxph.object>`, `<coxph.control>`, `<cluster>`, `<strata>`, `[Surv](surv)`, `<survfit>`, `<pspline>`, `<ridge>`. ### Examples

```
# Create the simplest test data set
test1 <- list(time=c(4,3,1,1,2,2,3),
              status=c(1,1,1,0,1,1,0),
              x=c(0,2,1,1,1,0,0),
              sex=c(0,0,0,0,1,1,1))
# Fit a stratified model
coxph(Surv(time, status) ~ x + strata(sex), test1)
#
# Create a simple data set for a time-dependent model
#
test2 <- list(start=c(1, 2, 5, 2, 1, 7, 3, 4, 8, 8),
              stop =c(2, 3, 6, 7, 8, 9, 9, 9,14,17),
              event=c(1, 1, 1, 1, 1, 1, 1, 0, 0, 0),
              x    =c(1, 0, 0, 1, 0, 1, 1, 1, 0, 0) )
summary( coxph( Surv(start, stop, event) ~ x, test2))
# Fit a stratified model, clustered on patients
bladder1 <- bladder[bladder$enum < 5, ]
coxph(Surv(stop, event) ~ (rx + size + number) * strata(enum),
      cluster = id, bladder1)
# Fit a time transform model using current age
coxph(Surv(time, status) ~ ph.ecog + tt(age), data=lung,
      tt=function(x,t,...) pspline(x + t/365.25))
```
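As a minimal sketch of the `timefix` point made above: simulated event times are continuous, so apparent near-ties are spurious, and the option (passed through `...` to `coxph.control`) can be turned off. The data set below is invented for illustration.

```
set.seed(10)
simdata <- data.frame(time   = rexp(200),  # continuous times: no true ties
                      status = rbinom(200, 1, 0.7),
                      x      = rnorm(200))
coxph(Surv(time, status) ~ x, data = simdata, timefix = FALSE)
```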
r None `cox.zph` Test the Proportional Hazards Assumption of a Cox Regression ----------------------------------------------------------------------- ### Description Test the proportional hazards assumption for a Cox regression model fit (`coxph`). ### Usage

```
cox.zph(fit, transform="km", terms=TRUE, singledf=FALSE, global=TRUE)
```

### Arguments | | | | --- | --- | | `fit` | the result of fitting a Cox regression model, using the `coxph` or `coxme` functions. | | `transform` | a character string specifying how the survival times should be transformed before the test is performed. Possible values are `"km"`, `"rank"`, `"identity"` or a function of one argument. | | `terms` | if TRUE, do a test for each term in the model rather than for each separate covariate. For a factor variable with k levels, for instance, this would lead to a k-1 degree of freedom test. The plot for such variables will be a single curve evaluating the linear predictor over time. | | `singledf` | use a single degree of freedom test for terms that have multiple coefficients, i.e., the test that corresponds most closely to the plot. If `terms=FALSE` this argument has no effect. | | `global` | should a global chi-square test be done, in addition to the per-variable or per-term tests. | ### Details The computations require the original `x` matrix of the Cox model fit. Thus it saves time if the `x=TRUE` option is used in `coxph`. This function would usually be followed by both a plot and a print of the result. The plot gives an estimate of the time-dependent coefficient *beta(t)*. If the proportional hazards assumption holds then the true *beta(t)* function would be a horizontal line. The `table` component provides the results of a formal score test for slope=0; a linear fit to the plot would approximate the test. Random effects terms such as `frailty`, or random effects in a `coxme` model, are not checked for proportional hazards; rather they are treated as a fixed offset in the model. If the model contains strata by covariate interactions, then the `y` matrix may contain structural zeros, i.e., deaths (rows) that had no role in estimation of a given coefficient (column). These are marked as NA. If an entire row is NA, for instance after subscripting a `cox.zph` object, that row is removed. ### Value an object of class `"cox.zph"`, with components: | | | | --- | --- | | `table` | a matrix with one row for each variable, and optionally a last row for the global test. Columns of the matrix contain a score test for the addition of the time-dependent term, the degrees of freedom, and the two-sided p-value. | | `x` | the transformed time axis. | | `time` | the untransformed time values; there is one entry for each event time in the data | | `strata` | for a stratified `coxph` model, the stratum of each of the events | | `y` | the matrix of scaled Schoenfeld residuals. There will be one column per term or per variable (depending on the `terms` option above), and one row per event. The row labels are a rounded form of the original times. | | `var` | a variance matrix for the covariates, used to create an approximate standard error band for plots | | `transform` | the transform of time that was used | | `call` | the calling sequence for the routine. | ### Note In versions of the package before survival 3.0 the function computed a fast approximation to the score test. Later versions compute the actual score test. ### References P. Grambsch and T.
Therneau (1994), Proportional hazards tests and diagnostics based on weighted residuals. *Biometrika,* **81**, 515-26. ### See Also `<coxph>`, `[Surv](surv)`. ### Examples

```
fit <- coxph(Surv(futime, fustat) ~ age + ecog.ps, data=ovarian)
temp <- cox.zph(fit)
print(temp)  # display the results
plot(temp)   # plot curves
```

r None `print.summary.survfit` Print Survfit Summary ---------------------------------------------- ### Description Prints the result of `summary.survfit`. ### Usage

```
## S3 method for class 'summary.survfit'
print(x, digits = max(options()$digits - 4, 3), ...)
```

### Arguments | | | | --- | --- | | `x` | an object of class `"summary.survfit"`, which is the result of the `summary.survfit` function. | | `digits` | the number of digits to use in printing the numbers. | | `...` | for future methods | ### Value `x`, with the invisible flag set to prevent printing. ### Side Effects prints the summary created by `summary.survfit`. ### See Also `[options](../../base/html/options)`, `[print](../../base/html/print)`, `<summary.survfit>`. r None `summary.survexp` Summary function for a survexp object -------------------------------------------------------- ### Description Returns a list containing the values of the survival at specified times. ### Usage

```
## S3 method for class 'survexp'
summary(object, times, scale = 1, ...)
```

### Arguments | | | | --- | --- | | `object` | the result of a call to the `survexp` function | | `times` | vector of times; the returned matrix will contain 1 row for each time. Missing values are not allowed. | | `scale` | numeric value to rescale the survival time, e.g., if the input data to `survfit` were in days, `scale = 365.25` would scale the output to years. | | `...` | For future methods | ### Details A primary use of this function is to retrieve survival at fixed time points, which will be properly interpolated by the function. ### Value a list with the following components: | | | | --- | --- | | `surv` | the estimate of survival at time t. | | `time` | the timepoints on the curve. | | `n.risk` | In expected survival each subject from the data set is matched to a hypothetical person from the parent population, matched on the characteristics of the parent population. The number at risk is the number of those hypothetical subjects who are still part of the calculation. | ### Author(s) Terry Therneau ### See Also `<survexp>` r None `rotterdam` Breast cancer data set used in Royston and Altman (2013) --------------------------------------------------------------------- ### Description The `rotterdam` data set includes 2982 primary breast cancer patients whose records were included in the Rotterdam tumor bank. ### Usage

```
rotterdam
data(cancer, package="survival")
```

### Format A data frame with 2982 observations on the following 15 variables. `pid` patient identifier `year` year of surgery `age` age at surgery `meno` menopausal status (0= premenopausal, 1= postmenopausal) `size` tumor size, a factor with levels `<=20` `20-50` `>50` `grade` differentiation grade `nodes` number of positive lymph nodes `pgr` progesterone receptors (fmol/l) `er` estrogen receptors (fmol/l) `hormon` hormonal treatment (0=no, 1=yes) `chemo` chemotherapy `rtime` days to relapse or last follow-up `recur` 0= no relapse, 1= relapse `dtime` days to death or last follow-up `death` 0= alive, 1= dead ### Details These data sets are used in the paper by Royston and Altman.
The Rotterdam data is used to create a fitted model, and the GBSG data for validation of the model. The paper gives references for the data source. ### References Patrick Royston and Douglas Altman, External validation of a Cox prognostic model: principles and methods. BMC Medical Research Methodology 2013, 13:33. ### See Also `<gbsg>` ### Examples

```
rfstime <- pmin(rotterdam$rtime, rotterdam$dtime)
status <- pmax(rotterdam$recur, rotterdam$death)
fit1 <- coxph(Surv(rfstime, status) ~ pspline(age) + meno + size +
                pspline(nodes) + er,
              data=rotterdam, subset = (nodes > 0))
# Royston and Altman used fractional polynomials for the nonlinear terms
```

r None `basehaz` Alias for the survfit function ----------------------------------------- ### Description Compute the predicted survival curve for a Cox model. ### Usage

```
basehaz(fit, centered=TRUE)
```

### Arguments | | | | --- | --- | | `fit` | a coxph fit | | `centered` | if TRUE return data from a predicted survival curve at the mean values of the covariates `fit$mean`; if FALSE return a prediction for all covariates equal to zero. | ### Details This function is simply an alias for `survfit`, which does the actual work and has a richer set of options. The alias exists only because some users look for predicted survival estimates under this name. The function returns a data frame containing the `time`, `cumhaz` and optionally the strata (if the fitted Cox model used a strata statement), which are copied from the `survfit` result. If there are factor variables in the model, then the default predictions at the "mean" are meaningless since they do not correspond to any possible subject; correct results require use of the `newdata` argument of survfit. Results for all covariates =0 are normally only of use as a building block for further calculations. ### Value a data frame with variable names of `hazard`, `time` and optionally `strata`. The first is actually the cumulative hazard. ### See Also `<survfit.coxph>` r None `survfitcoxph.fit` A direct interface to the ‘computational engine’ of survfit.coxph ------------------------------------------------------------------------------------- ### Description This program is mainly supplied to allow other packages to invoke the survfit.coxph function at a ‘data’ level rather than a ‘user’ level. It does no checks on the input data that is provided, which can lead to unexpected errors if that data is wrong. ### Usage

```
survfitcoxph.fit(y, x, wt, x2, risk, newrisk, strata, se.fit, survtype,
                 vartype, varmat, id, y2, strata2, unlist=TRUE)
```

### Arguments | | | | --- | --- | | `y` | the response variable used in the Cox model. (Missing values removed of course.) | | `x` | covariate matrix used in the Cox model | | `wt` | weight vector for the Cox model. If the model was unweighted use a vector of 1s. | | `x2` | matrix describing the hypothetical subjects for which a curve is desired. Must have the same number of columns as `x`. | | `risk` | the risk score exp(X beta) from the fitted Cox model. If the model had an offset, include it in the argument to exp. | | `newrisk` | risk scores for the hypothetical subjects | | `strata` | strata variable used in the Cox model. This will be a factor. | | `se.fit` | if `TRUE` the standard errors of the curve(s) are returned | | `survtype` | 1=Kalbfleisch-Prentice, 2=Nelson-Aalen, 3=Efron. It is usual to match this to the approximation for ties used in the `coxph` model: KP for ‘exact’, N-A for ‘breslow’ and Efron for ‘efron’.
| | `vartype` | 1=Greenwood, 2=Aalen, 3=Efron | | `varmat` | the variance matrix of the coefficients | | `id` | optional; if present and not NULL this should be a vector of identifiers of length `nrow(x2)`. A non-null value signifies that `x2` contains time dependent covariates, in which case this identifies which rows of `x2` go with each subject. | | `y2` | survival times, for time dependent prediction. It gives the time range (time1,time2] for each row of `x2`. Note: this must be a Surv object and thus contains a status indicator, which, however, is never used in the routine. | | `strata2` | vector of strata indicators for `x2`. This must be a factor. | | `unlist` | if `FALSE` the result will be a list with one element for each strata. Otherwise the strata are “unpacked” into the form found in a `survfit` object. | ### Value a list containing nearly all the components of a `survfit` object. All that is missing is to add the confidence intervals, the type of the original model's response (as in a coxph object), and the class. ### Note The source code for both this function and `survfit.coxph` is written using noweb. For complete documentation see the `inst/sourcecode.pdf` file. ### Author(s) Terry Therneau ### See Also `<survfit.coxph>` r None `cgd` Chronic Granulomatous Disease data ----------------------------------------- ### Description Data are from a placebo controlled trial of gamma interferon in chronic granulomatous disease (CGD). Contains the data on time to serious infections observed through end of study for each patient. ### Usage

```
cgd
data(cgd)
```

### Format id subject identification number center enrolling center random date of randomization treatment placebo or gamma interferon sex sex age age in years, at study entry height height in cm at study entry weight weight in kg at study entry inherit pattern of inheritance steroids use of steroids at study entry, 1=yes propylac use of prophylactic antibiotics at study entry hos.cat a categorization of the centers into 4 groups tstart, tstop start and end of each time interval status 1=the interval ends with an infection enum observation number within subject ### Details The `cgd0` data set is in the form found in the references, with one line per patient and no recoding of the variables. The `cgd` data set (this one) has been cast into (start, stop] format with one line per event, and covariates such as center recoded as factors to include meaningful labels. ### Source Fleming and Harrington, Counting Processes and Survival Analysis, appendix D.2. ### See Also `<cgd0>` r None `survSplit` Split a survival data set at specified times --------------------------------------------------------- ### Description Given a survival data set and a set of specified cut times, split each record into multiple subrecords at each cut time. The new data set will be in ‘counting process’ format, with a start time, stop time, and event status for each record. ### Usage

```
survSplit(formula, data, subset, na.action=na.pass,
          cut, start="tstart", id, zero=0, episode,
          end="tstop", event="event")
```

### Arguments | | | | --- | --- | | `formula` | a model formula | | `data` | a data frame | | `subset, na.action` | rows of the data to be retained | | `cut` | the vector of timepoints to cut at | | `start` | character string with the name of a start time variable (will be created if needed) | | `id` | character string with the name of new id variable to create (optional).
This can be useful if the data set does not already contain an identifier. | | `zero` | If `start` doesn't already exist, this is the time that the original records start. | | `episode` | character string with the name of new episode variable (optional) | | `end` | character string with the name of event time variable | | `event` | character string with the name of censoring indicator | ### Details Each interval in the original data is cut at the given points; if an original row were (15, 60] with a cut vector of (10, 30, 40) the resulting data set would have intervals of (15,30], (30,40] and (40, 60]. Each row in the final data set will lie completely within one of the cut intervals. The interval for each row of the output is shown by the `episode` variable, where 1= less than the first cutpoint, 2= between the first and the second, etc. For the example above the values would be 2, 3, and 4. The routine is called with a formula as the first argument. The right hand side of the formula can be used to delimit variables that should be retained; normally one will use `~ .` as a shorthand to retain them all. The routine will try to retain variable names, e.g. `Surv(adam, joe, fred)~.` will result in a data set with those same variable names for `tstart`, `end`, and `event` options rather than the defaults. Any user specified values for these options will be used if they are present, of course. However, the routine is not sophisticated; it only does this substitution for simple names. A call of `Surv(time, stat==2)` for instance will not retain "stat" as the name of the event variable. Rows of data with a missing time or status are copied across unchanged, unless the na.action argument is changed from its default value of `na.pass`. But in the latter case any row that is missing for any variable will be removed, which is rarely what is desired. ### Value New, longer, data frame. ### See Also `[Surv](surv)`, `[cut](../../base/html/cut)`, `[reshape](../../stats/html/reshape)` ### Examples

```
fit1 <- coxph(Surv(time, status) ~ karno + age + trt, veteran)
plot(cox.zph(fit1)[1])
# a cox.zph plot of the data suggests that the effect of Karnofsky score
# begins to diminish by 60 days and has faded away by 120 days.
# Fit a model with separate coefficients for the three intervals.
#
vet2 <- survSplit(Surv(time, status) ~., veteran,
                  cut=c(60, 120), episode ="timegroup")
fit2 <- coxph(Surv(tstart, time, status) ~ karno* strata(timegroup) +
                age + trt, data= vet2)
c(overall= coef(fit1)[1],
  t0_60  = coef(fit2)[1],
  t60_120= sum(coef(fit2)[c(1,4)]),
  t120   = sum(coef(fit2)[c(1,5)]))
```

r None `neardate` Find the index of the closest value in data set 2, for each entry in data set one. ---------------------------------------------------------------------------------------------- ### Description A common task in medical work is to find the closest lab value to some index date, for each subject. ### Usage

```
neardate(id1, id2, y1, y2, best = c("after", "prior"),
         nomatch = NA_integer_)
```

### Arguments | | | | --- | --- | | `id1` | vector of subject identifiers for the index group | | `id2` | vector of identifiers for the reference group | | `y1` | normally a vector of dates for the index group, but any orderable data type is allowed | | `y2` | reference set of dates | | `best` | if `best='prior'` find the index of the first y2 value less than or equal to the target y1 value, for each subject. If `best='after'` find the first y2 value which is greater than or equal to the target y1 value, for each subject.
| | `nomatch` | the value to return for items without a match | ### Details This routine is closely related to `match` and to `findInterval`, the first of which finds exact matches and the second closest matches. This finds the closest matching date within sets of exactly matching identifiers. Closest date matching is often needed in clinical studies. For example, data set 1 might contain the subject identifier and the date of some procedure and data set 2 has the dates and values for laboratory tests, and the query is to find the first test value after the intervention but no closer than 7 days. The `id1` and `id2` arguments are similar to `match` in that we are searching for instances of `id1` that will be found in `id2`, and the result is the same length as `id1`. However, instead of returning the first match with `id2` this routine returns the one that best matches with respect to `y1`. The `y1` and `y2` arguments need not be dates; the function works for any data type such that the expression `c(y1, y2)` gives a sensible, sortable result. Be careful about matching Date and DateTime values and the impact of time zones, however, see `[as.POSIXct](../../base/html/as.posixlt)`. If `y1` and `y2` are not of the same class the user is on their own. Since there exist pairs of unmatched data types where the result could be sensible, the routine will in this case proceed under the assumption that "the user knows what they are doing". Caveat emptor. ### Value the index of the matching observations in the second data set, or the `nomatch` value for no successful match ### Author(s) Terry Therneau ### See Also `[match](../../base/html/match)`, `[findInterval](../../base/html/findinterval)` ### Examples

```
data1 <- data.frame(id = 1:10,
                    entry.dt = as.Date(paste("2011", 1:10, "5", sep='-')))
temp1 <- c(1,4,5,1,3,6,9, 2,7,8,12,4,6,7,10,12,3)
data2 <- data.frame(id = c(1,1,1,2,2,4,4,5,5,5,6,8,8,9,10,10,12),
                    lab.dt = as.Date(paste("2011", temp1, "1", sep='-')),
                    chol = round(runif(17, 130, 280)))
#first cholesterol on or after enrollment
indx1 <- neardate(data1$id, data2$id, data1$entry.dt, data2$lab.dt)
data2[indx1, "chol"]

# Closest one, either before or after.
#
indx2 <- neardate(data1$id, data2$id, data1$entry.dt, data2$lab.dt,
                  best="prior")
ifelse(is.na(indx1), indx2, # none after, take before
       ifelse(is.na(indx2), indx1, #none before
              ifelse(abs(data2$lab.dt[indx2]- data1$entry.dt) <
                     abs(data2$lab.dt[indx1]- data1$entry.dt), indx2, indx1)))

# closest date before or after, but no more than 21 days prior to index
indx2 <- ifelse((data1$entry.dt - data2$lab.dt[indx2]) >21, NA, indx2)
ifelse(is.na(indx1), indx2, # none after, take before
       ifelse(is.na(indx2), indx1, #none before
              ifelse(abs(data2$lab.dt[indx2]- data1$entry.dt) <
                     abs(data2$lab.dt[indx1]- data1$entry.dt), indx2, indx1)))
```
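The "no closer than 7 days" query mentioned in the Details section fits the same pattern: shift the index date before matching. A minimal sketch reusing `data1` and `data2` from the example above:

```
# first lab value on or after the entry date plus 7 days
indx3 <- neardate(data1$id, data2$id, data1$entry.dt + 7, data2$lab.dt)
data2[indx3, "chol"]
```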
r None `survexp` Compute Expected Survival ------------------------------------ ### Description Returns either the expected survival of a cohort of subjects, or the individual expected survival for each subject. ### Usage

```
survexp(formula, data, weights, subset, na.action, rmap, times,
        method=c("ederer", "hakulinen", "conditional",
                 "individual.h", "individual.s"),
        cohort=TRUE, conditional=FALSE,
        ratetable=survival::survexp.us, scale=1, se.fit,
        model=FALSE, x=FALSE, y=FALSE)
```

### Arguments | | | | --- | --- | | `formula` | formula object. The response variable is a vector of follow-up times and is optional. The predictors consist of optional grouping variables separated by the `+` operator (as in `survfit`), and is often `~1`, i.e., expected survival for the entire group. | | `data` | data frame in which to interpret the variables named in the `formula`, `subset` and `weights` arguments. | | `weights` | case weights. This is most useful when conditional survival for a known population is desired, e.g., the data set would contain all unique age/sex combinations and the weights would be the proportion of each. | | `subset` | expression indicating a subset of the rows of `data` to be used in the fit. | | `na.action` | function to filter missing data. This is applied to the model frame after `subset` has been applied. Default is `options()$na.action`. | | `rmap` | an optional list that maps data set names to the ratetable names. See the details section below. | | `times` | vector of follow-up times at which the resulting survival curve is evaluated. If absent, the result will be reported for each unique value of the vector of times supplied in the response value of the `formula`. | | `method` | computational method for creating the survival curves. The `individual` option does not create a curve; rather it retrieves the predicted survival `individual.s` or cumulative hazard `individual.h` for each subject. The default is to use `method='ederer'` if the formula has no response, and `method='hakulinen'` otherwise. | | `cohort` | logical value. This argument has been superseded by the `method` argument. To maintain backwards compatibility, if it is present and FALSE, it implies `method='individual.s'`. | | `conditional` | logical value. This argument has been superseded by the `method` argument. To maintain backwards compatibility, if it is present and TRUE it implies `method='conditional'`. | | `ratetable` | a table of event rates, such as `survexp.mn`, or a fitted Cox model. Note the `survival::` prefix in the default argument is present to avoid the (rare) case of a user who expects the default table but just happens to have an object named "survexp.us" in their own directory. | | `scale` | numeric value to scale the results. If `ratetable` is in units/day, `scale = 365.25` causes the output to be reported in years. | | `se.fit` | compute the standard error of the predicted survival. This argument is currently ignored. Standard errors are not a defined concept for population rate tables (they are treated as coming from a complete census), and for Cox models the calculation is hard. Despite good intentions standard errors for this latter case have not been coded and validated. | | `model,x,y` | flags to control what is returned. If any of these is true, then the model frame, the model matrix, and/or the vector of response times will be returned as components of the final result, with the same names as the flag arguments.
| ### Details Individual expected survival is usually used in models or testing, to ‘correct’ for the age and sex composition of a group of subjects. For instance, assume that birth date, entry date into the study, sex and actual survival time are all known for a group of subjects. The `survexp.us` population tables contain expected death rates based on calendar year, sex and age. Then

```
haz <- survexp(fu.time ~ 1, data=mydata,
               rmap = list(year=entry.dt, age=(entry.dt-birth.dt)),
               method='individual.h')
```

gives for each subject the total hazard experienced up to their observed death time or last follow-up time (variable fu.time). This cumulative hazard can be used as a rescaled time value in models:

```
glm(status ~ 1 + offset(log(haz)), family=poisson)
glm(status ~ x + offset(log(haz)), family=poisson)
```

In the first model, a test for intercept=0 is the one sample log-rank test of whether the observed group of subjects has equivalent survival to the baseline population. The second model tests for an effect of variable `x` after adjustment for age and sex. The ratetable being used may have different variable names than the user's data set; this is dealt with by the `rmap` argument. The rate table for the above calculation was `survexp.us`, and a call to `summary(survexp.us)` reveals that it expects to have variables `age` = age in days, `sex`, and `year` = the date of study entry; we create them in the `rmap` line. The sex variable was not mapped, therefore the function assumes that it exists in `mydata` in the correct format. (Note: for factors such as sex, the program will match on any unique abbreviation, ignoring case.) Cohort survival is used to produce an overall survival curve. This is then added to the Kaplan-Meier plot of the study group for visual comparison between these subjects and the population at large. There are three common methods of computing cohort survival. In the "exact method" of Ederer the cohort is not censored; for this case no response variable is required in the formula. Hakulinen recommends censoring the cohort at the anticipated censoring time of each patient, and Verheul recommends censoring the cohort at the actual observation time of each patient. The last of these is the conditional method. These are obtained by using the respective time values as the follow-up time or response in the formula. ### Value if `cohort=TRUE` an object of class `survexp`, otherwise a vector of per-subject expected survival values. The former contains the number of subjects at risk and the expected survival for the cohort at each requested time. The cohort survival is the hypothetical survival for a cohort of subjects enrolled from the population at large, but matching the data set on the factors found in the rate table. ### References Berry, G. (1983). The analysis of mortality by the subject-years method. *Biometrics*, 39:173-84. Ederer, F., Axtell, L. and Cutler, S. (1961). The relative survival rate: a statistical methodology. *Natl Cancer Inst Monogr*, 6:101-21. Hakulinen, T. (1982). Cancer survival corrected for heterogeneity in patient withdrawal. *Biometrics*, 38:933-942. Therneau, T. and Grambsch, P. (2000). Modeling survival data: Extending the Cox model. Springer. Chapter 10. Verheul, H., Dekker, E., Bossuyt, P., Moulijn, A. and Dunning, A. (1993). Background mortality in clinical survival studies. *Lancet*, 341: 872-875. ### See Also `<survfit>`, `<pyears>`, `<survexp.us>`, `<ratetable>`, `<survexp.fit>`.
### Examples ``` # # Stanford heart transplant data # We don't have sex in the data set, but know it to be nearly all males. # Estimate of conditional survival fit1 <- survexp(futime ~ 1, rmap=list(sex="male", year=accept.dt, age=(accept.dt-birth.dt)), method='conditional', data=jasa) summary(fit1, times=1:10*182.5, scale=365) #expected survival by 1/2 years # Estimate of expected survival stratified by prior surgery survexp(~ surgery, rmap= list(sex="male", year=accept.dt, age=(accept.dt-birth.dt)), method='ederer', data=jasa, times=1:10 * 182.5) ## Compare the survival curves for the Mayo PBC data to Cox model fit ## pfit <- coxph(Surv(time,status>0) ~ trt + log(bili) + log(protime) + age + platelet, data=pbc) plot(survfit(Surv(time, status>0) ~ trt, data=pbc), mark.time=FALSE) lines(survexp( ~ trt, ratetable=pfit, data=pbc), col='purple') ``` r None `summary.coxph` Summary method for Cox models ---------------------------------------------- ### Description Produces a summary of a fitted coxph model. ### Usage ``` ## S3 method for class 'coxph' summary(object, conf.int=0.95, scale=1,...) ``` ### Arguments | | | | --- | --- | | `object` | the result of a coxph fit | | `conf.int` | level for computation of the confidence intervals. If set to FALSE no confidence intervals are printed | | `scale` | vector of scale factors for the coefficients, defaults to 1. The printed coefficients, se, and confidence intervals will be associated with one scale unit. | | `...` | for future methods | ### Value An object of class `summary.coxph`, with components: | | | | --- | --- | | `n, nevent` | number of observations and number of events, respectively, in the fit | | `loglik` | the log partial likelihood at the initial and final values | | `coefficients` | a matrix with one row for each coefficient, and columns containing the coefficient, the hazard ratio exp(coef), standard error, Wald statistic, and P value. | | `conf.int` | a matrix with one row for each coefficient, containing the confidence limits for exp(coef) | | `logtest, sctest, waldtest` | the overall likelihood ratio, score, and Wald test statistics for the model | | `concordance` | the concordance statistic and its standard error | | `used.robust` | whether an asymptotic or robust variance was used | | `rsq` | an approximate R^2 based on Nagelkerke (Biometrika 1991). | | `fail` | a message, if the underlying coxph call failed | | `call` | a copy of the call | | `na.action` | information on missing values | ### Note The pseudo r-squared of Nagelkerke is attractive because it is simple, but further work has shown that it has poor properties and it is now deprecated. The value is no longer printed by default, and will eventually be removed from the object. ### See Also `<coxph>`, `[print.coxph](coxph.object)` ### Examples ``` fit <- coxph(Surv(time, status) ~ age + sex, lung) summary(fit) ``` r None `coxph.control` Ancillary arguments for controlling coxph fits --------------------------------------------------------------- ### Description This is used to set various numeric parameters controlling a Cox model fit. Typically it would only be used in a call to `coxph`. ### Usage ``` coxph.control(eps = 1e-09, toler.chol = .Machine$double.eps^0.75, iter.max = 20, toler.inf = sqrt(eps), outer.max = 10, timefix=TRUE) ``` ### Arguments | | | | --- | --- | | `eps` | Iteration continues until the relative change in the log partial likelihood is less than eps, or the absolute change is less than sqrt(eps). Must be positive.
| | `toler.chol` | Tolerance for detection of singularity during a Cholesky decomposition of the variance matrix, i.e., for detecting a redundant predictor variable. | | `iter.max` | Maximum number of iterations to attempt for convergence. | | `toler.inf` | Tolerance criteria for the warning message about a possible infinite coefficient value. | | `outer.max` | For a penalized coxph model, e.g. with pspline terms, there is an outer loop of iteration to determine the penalty parameters; maximum number of iterations for this outer loop. | | `timefix` | Resolve any near ties in the time variables. | ### Details The convergence tolerances are a balance. Users think they want THE maximum point of the likelihood surface, and for well behaved data sets where this is quadratic near the max a high accuracy is fairly inexpensive: the number of correct digits approximately doubles with each iteration. Conversely, a drop of .0001 from the maximum in any given direction will correspond to only about 1/20 of a standard error change in the coefficient. Statistically, more precision than this is straining at a gnat. Based on this the author originally had set the tolerance to 1e-5, but relented in the face of multiple "why is the answer different than package X" queries. Asking for results that are too close to machine precision (double.eps) is a fool's errand; a reasonable criterion is often the square root of that precision. The Cholesky decomposition needs to be held to a higher standard than the overall convergence criterion, however. The `toler.inf` value controls a warning message; if it is too small incorrect warnings can appear, and if too large some actual cases of an infinite coefficient will not be detected. The most difficult cases are data sets where the MLE coefficient is infinite; an example is a data set where at each death time, it was the subject with the largest covariate value who perished. In that situation the coefficient increases at each iteration while the log-likelihood asymptotes to a maximum. As iteration proceeds there is a race among three endpoints: exp(coef) overflows, the Hessian matrix becomes singular, or the change in loglik is small enough to satisfy the convergence criterion. The first two are difficult to anticipate and lead to numeric difficulties, which is another argument for moderation in the choice of `eps`. See the vignette "Roundoff error and tied times" for a more detailed explanation of the `timefix` option. In short, when time intervals are created via subtraction then two time intervals that are actually identical can appear to be different due to floating point round off error, which in turn can make `coxph` and `survfit` results dependent on things such as the order in which operations were done or the particular computer that they were run on. Such cases are unfortunately not rare in practice. The `timefix=TRUE` option adds logic similar to `all.equal` to ensure reliable results. In analysis of simulated data sets, however, where often by definition there can be no duplicates, the option will often need to be set to `FALSE` to avoid spurious merging of close numeric values. ### Value a list containing the values of each of the above constants ### See Also `<coxph>` r None `rats` Rat treatment data from Mantel et al -------------------------------------------- ### Description Rat treatment data from Mantel et al.
Three rats were chosen from each of 100 litters, one of which was treated with a drug, and then all followed for tumor incidence. ### Usage ``` rats data(cancer, package="survival") ``` ### Format | | | | --- | --- | | litter: | litter number from 1 to 100 | | rx: | treatment (1=drug, 0=control) | | time: | time to tumor or last follow-up | | status: | event status, 1=tumor and 0=censored | | sex: | male or female | ### Note Since only 2/150 of the male rats have a tumor, most analyses use only females (odd numbered litters), e.g. Lee et al. ### Source N. Mantel, N. R. Bohidar and J. L. Ciminera. Mantel-Haenszel analyses of litter-matched time to response data, with modifications for recovery of interlitter information. Cancer Research, 37:3863-3868, 1977. ### References E. W. Lee, L. J. Wei, and D. Amato, Cox-type regression analysis for large number of small groups of correlated failure time observations, in "Survival Analysis, State of the Art", Kluwer, 1992. r None `aareg` Aalen's additive regression model for censored data ------------------------------------------------------------ ### Description Returns an object of class `"aareg"` that represents an Aalen model. ### Usage ``` aareg(formula, data, weights, subset, na.action, qrtol=1e-07, nmin, dfbeta=FALSE, taper=1, test = c('aalen', 'variance', 'nrisk'), model=FALSE, x=FALSE, y=FALSE) ``` ### Arguments | | | | --- | --- | | `formula` | a formula object, with the response on the left of a ‘~’ operator and the terms, separated by `+` operators, on the right. The response must be a `Surv` object. Due to a particular computational approach that is used, the model MUST include an intercept term. If "-1" is used in the model formula the program will ignore it. | | `data` | data frame in which to interpret the variables named in the `formula`, `subset`, and `weights` arguments. This may also be a single number to handle some special cases – see below for details. If `data` is missing, the variables in the model formula should be in the search path. | | `weights` | vector of observation weights. If supplied, the fitting algorithm minimizes the sum of the weights multiplied by the squared residuals (see below for additional technical details). The length of `weights` must be the same as the number of observations. The weights must be nonnegative and it is recommended that they be strictly positive, since zero weights are ambiguous. To exclude particular observations from the model, use the `subset` argument instead of zero weights. | | `subset` | expression specifying which subset of observations should be used in the fit. This can be a logical vector (which is replicated to have length equal to the number of observations), a numeric vector indicating the observation numbers to be included, or a character vector of the observation names that should be included. All observations are included by default. | | `na.action` | a function to filter missing data. This is applied to the `model.frame` after any `subset` argument has been applied. The default is `na.fail`, which returns an error if any missing values are found. An alternative is `na.exclude`, which deletes observations that contain one or more missing values. | | `qrtol` | tolerance for detection of singularity in the QR decomposition | | `nmin` | minimum number of observations for an estimate; defaults to 3 times the number of covariates.
This essentially truncates the computations near the tail of the data set, when n is small and the calculations can become numerically unstable. | | `dfbeta` | should the array of dfbeta residuals be computed. This implies computation of the sandwich variance estimate. The residuals will always be computed if there is a `cluster` term in the model formula. | | `taper` | allows for a smoothed variance estimate. Var(x), where x is the set of covariates, is an important component of the calculations for the Aalen regression model. At any given time point t, it is computed over all subjects who are still at risk at time t. The `taper` argument allows smoothing these estimates, for example `taper=(1:4)/4` would cause the variance estimate used at any event time to be a weighted average of the estimated variance matrices at the last 4 death times, with a weight of 1 for the current death time and decreasing to 1/4 for prior event times. The default value gives the standard Aalen model. | | `test` | selects the weighting to be used, for computing an overall “average” coefficient vector over time and the subsequent test for equality to zero. | | `model, x, y` | should copies of the model frame, the x matrix of predictors, or the response vector y be included in the saved result. | ### Details The Aalen model assumes that the cumulative hazard H(t) for a subject can be expressed as a(t) + X B(t), where a(t) is a time-dependent intercept term, X is the vector of covariates for the subject (possibly time-dependent), and B(t) is a time-dependent matrix of coefficients. The estimates are inherently non-parametric; a fit of the model will normally be followed by one or more plots of the estimates. The estimates may become unstable near the tail of a data set, since the increment to B at time t is based on the subjects still at risk at time t. The tolerance and/or nmin parameters may act to truncate the estimate before the last death. The `taper` argument can also be used to smooth out the tail of the curve. In practice, the addition of a taper such as 1:10 appears to have little effect on death times when n is still reasonably large, but can considerably dampen wild oscillations in the tail of the plot. ### Value an object of class `"aareg"` representing the fit, with the following components: | | | | --- | --- | | `n` | vector containing the number of observations in the data set, the number of event times, and the number of event times used in the computation | | `times` | vector of sorted event times, which may contain duplicates | | `nrisk` | vector containing the number of subjects at risk, of the same length as `times` | | `coefficient` | matrix of coefficients, with one row per event and one column per covariate | | `test.statistic` | the value of the test statistic, a vector with one element per covariate | | `test.var` | variance-covariance matrix for the test | | `test` | the type of test; a copy of the `test` argument above | | `tweight` | matrix of weights used in the computation, one row per event | | `call` | a copy of the call that produced this result | ### References Aalen, O.O. (1989). A linear regression model for the analysis of life times. Statistics in Medicine, 8:907-925. Aalen, O.O (1993). Further results on the non-parametric linear model in survival analysis. Statistics in Medicine. 12:1569-1588.
### See Also print.aareg, summary.aareg, plot.aareg ### Examples ``` # Fit a model to the lung cancer data set lfit <- aareg(Surv(time, status) ~ age + sex + ph.ecog, data=lung, nmin=1) ## Not run: lfit Call: aareg(formula = Surv(time, status) ~ age + sex + ph.ecog, data = lung, nmin = 1 ) n=227 (1 observations deleted due to missing values) 138 out of 138 unique event times used slope coef se(coef) z p Intercept 5.26e-03 5.99e-03 4.74e-03 1.26 0.207000 age 4.26e-05 7.02e-05 7.23e-05 0.97 0.332000 sex -3.29e-03 -4.02e-03 1.22e-03 -3.30 0.000976 ph.ecog 3.14e-03 3.80e-03 1.03e-03 3.70 0.000214 Chisq=26.73 on 3 df, p=6.7e-06; test weights=aalen plot(lfit[4], ylim=c(-4,4)) # Draw a plot of the function for ph.ecog ## End(Not run) lfit2 <- aareg(Surv(time, status) ~ age + sex + ph.ecog, data=lung, nmin=1, taper=1:10) ## Not run: lines(lfit2[4], col=2) # Nearly the same, until the last point # A fit to the multiple-infection data set of children with # Chronic Granulomatous Disease. See section 8.5 of Therneau and Grambsch. fita2 <- aareg(Surv(tstart, tstop, status) ~ treat + age + inherit + steroids + cluster(id), data=cgd) ## Not run: n= 203 69 out of 70 unique event times used slope coef se(coef) robust se z p Intercept 0.004670 0.017800 0.002780 0.003910 4.55 5.30e-06 treatrIFN-g -0.002520 -0.010100 0.002290 0.003020 -3.36 7.87e-04 age -0.000101 -0.000317 0.000115 0.000117 -2.70 6.84e-03 inheritautosomal 0.001330 0.003830 0.002800 0.002420 1.58 1.14e-01 steroids 0.004620 0.013200 0.010600 0.009700 1.36 1.73e-01 Chisq=16.74 on 4 df, p=0.0022; test weights=aalen ## End(Not run) ```
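As a further sketch, not part of the original examples: the overall chi-square test can be recomputed over a restricted time range with `summary`, whose `maxtime` argument (assumed here; see the `summary.aareg` help page) truncates the fit in the same spirit as `nmin`:

```
# A sketch, assuming the 'maxtime' argument of summary.aareg: restrict the
# overall test to the first year of follow-up, where the coefficient
# estimates are most stable
summary(lfit, maxtime = 365)
```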
r None `survreg.distributions` Parametric Survival Distributions ---------------------------------------------------------- ### Description List of distributions for accelerated failure models. These are location-scale families for some transformation of time. The entry describes the cdf *F* and density *f* of a canonical member of the family. ### Usage ``` survreg.distributions ``` ### Format There are two basic formats: the first defines a distribution de novo, the second defines a new distribution in terms of an old one. | | | | --- | --- | | name: | name of distribution | | variance: | function(parms) returning the variance (currently unused) | | init(x,weights,...): | Function returning an initial | | | estimate of the mean and variance | | | (used for initial values in the iteration) | | density(x,parms): | Function returning a matrix with columns *F*, *1-F*, *f*, *f'/f*, and *f''/f* | | quantile(p,parms): | Quantile function | | scale: | Optional fixed value for the scale parameter | | parms: | Vector of default values and names for any additional parameters | | deviance(y,scale,parms): | Function returning the deviance for a | | | saturated model; used only for deviance residuals. | and to define one distribution in terms of another | | | | --- | --- | | name: | name of distribution | | dist: | name of parent distribution | | trans: | transformation (eg log) | | dtrans: | derivative of transformation | | itrans: | inverse of transformation | | scale: | Optional fixed value for scale parameter | | | ### Details There are four basic distributions: `extreme`, `gaussian`, `logistic` and `t`. The last three are parametrised in the same way as the distributions already present in **R**. The extreme value cdf is *F=1-e^{-e^t}.* When the logarithm of survival time has one of the first three distributions we obtain respectively `weibull`, `lognormal`, and `loglogistic`. The location-scale parameterization of a Weibull distribution found in `survreg` is not the same as the parameterization of `[rweibull](../../stats/html/weibull)`. The other predefined distributions are defined in terms of these. The `exponential` and `rayleigh` distributions are Weibull distributions with fixed `scale` of 1 and 0.5 respectively, and `loggaussian` is a synonym for `lognormal`. For speed, parts of the three most commonly used distributions are hardcoded in C; for this reason the elements of `survreg.distributions` with names of "Extreme value", "Logistic" and "Gaussian" should not be modified. (The order of these in the list is not important; recognition is by name.) As an alternative to modifying `survreg.distributions` a new distribution can be specified as a separate list. This is the preferred method of addition and is illustrated below.
### See Also `<survreg>`, `[pweibull](../../stats/html/weibull)`, `[pnorm](../../stats/html/normal)`, `[plogis](../../stats/html/logistic)`, `[pt](../../stats/html/tdist)`, `[survregDtest](survregdtest)` ### Examples ``` # time transformation survreg(Surv(time, status) ~ ph.ecog + sex, dist='weibull', data=lung) # change the transformation to work in years # intercept changes by log(365), everything else stays the same my.weibull <- survreg.distributions$weibull my.weibull$trans <- function(y) log(y/365) my.weibull$itrans <- function(y) 365*exp(y) survreg(Surv(time, status) ~ ph.ecog + sex, lung, dist=my.weibull) # Weibull parametrisation y <- rweibull(1000, shape=2, scale=5) survreg(Surv(y)~1, dist="weibull") # survreg scale parameter maps to 1/shape, linear predictor to log(scale) # Cauchy fit mycauchy <- list(name='Cauchy', init= function(x, weights, ...) c(median(x), mad(x)), density= function(x, parms) { temp <- 1/(1 + x^2) cbind(.5 + atan(x)/pi, .5+ atan(-x)/pi, temp/pi, -2 *x*temp, 2*temp*(4*x^2*temp -1)) }, quantile= function(p, parms) tan((p-.5)*pi), deviance= function(...) stop('deviance residuals not defined') ) survreg(Surv(log(time), status) ~ ph.ecog + sex, lung, dist=mycauchy) ``` r None `heart` Stanford Heart Transplant data --------------------------------------- ### Description Survival of patients on the waiting list for the Stanford heart transplant program. ### Usage ``` heart data(heart, package="survival") ``` ### Format jasa: original data | | | | --- | --- | | birth.dt: | birth date | | accept.dt: | acceptance into program | | tx.date: | transplant date | | fu.date: | end of followup | | fustat: | dead or alive | | surgery: | prior bypass surgery | | age: | age (in years) | | futime: | followup time | | wait.time: | time before transplant | | transplant: | transplant indicator | | mismatch: | mismatch score | | hla.a2: | particular type of mismatch | | mscore: | another mismatch score | | reject: | rejection occurred | | | jasa1, heart: processed data | | | | --- | --- | | start, stop, event: | Entry and exit time and status for this interval of time | | age: | age-48 years | | year: | year of acceptance (in years after 1 Nov 1967) | | surgery: | prior bypass surgery 1=yes | | transplant: | received transplant 1=yes | | id: | patient id | | | ### Source J Crowley and M Hu (1977), Covariance analysis of heart transplant survival data. *Journal of the American Statistical Association*, **72**, 27–36. ### See Also `<stanford2>` r None `colon` Chemotherapy for Stage B/C colon cancer ------------------------------------------------ ### Description These are data from one of the first successful trials of adjuvant chemotherapy for colon cancer. Levamisole is a low-toxicity compound previously used to treat worm infestations in animals; 5-FU is a moderately toxic (as these things go) chemotherapy agent.
There are two records per person, one for recurrence and one for death. ### Usage ``` colon data(cancer, package="survival") ``` ### Format | | | | --- | --- | | id: | id | | study: | 1 for all patients | | rx: | Treatment - Obs(ervation), Lev(amisole), Lev(amisole)+5-FU | | sex: | 1=male | | age: | in years | | obstruct: | obstruction of colon by tumour | | perfor: | perforation of colon | | adhere: | adherence to nearby organs | | nodes: | number of lymph nodes with detectable cancer | | time: | days until event or censoring | | status: | censoring status | | differ: | differentiation of tumour (1=well, 2=moderate, 3=poor) | | extent: | Extent of local spread (1=submucosa, 2=muscle, 3=serosa, 4=contiguous structures) | | surg: | time from surgery to registration (0=short, 1=long) | | node4: | more than 4 positive lymph nodes | | etype: | event type: 1=recurrence, 2=death | | | ### Note The study is originally described in Laurie (1989). The main report is found in Moertel (1990). This data set is closest to that of the final report in Moertel (1991). A version of the data with less follow-up time was used in the paper by Lin (1994). Peter Higgins has pointed out a data inconsistency, revealed by `table(colon$nodes, colon$node4)`. We don't know which of the two variables is actually correct so have elected not to 'fix' it. (Real data has warts, why not have some in the example data too?) ### References JA Laurie, CG Moertel, TR Fleming, HS Wieand, JE Leigh, J Rubin, GW McCormack, JB Gerstner, JE Krook and J Malliard. Surgical adjuvant therapy of large-bowel carcinoma: An evaluation of levamisole and the combination of levamisole and fluorouracil: The North Central Cancer Treatment Group and the Mayo Clinic. J Clinical Oncology, 7:1447-1456, 1989. DY Lin. Cox regression analysis of multivariate failure time data: the marginal approach. Statistics in Medicine, 13:2233-2247, 1994. CG Moertel, TR Fleming, JS MacDonald, DG Haller, JA Laurie, PJ Goodman, JS Ungerleider, WA Emerson, DC Tormey, JH Glick, MH Veeder and JA Maillard. Levamisole and fluorouracil for adjuvant therapy of resected colon carcinoma. New England J of Medicine, 332:352-358, 1990. CG Moertel, TR Fleming, JS MacDonald, DG Haller, JA Laurie, CM Tangen, JS Ungerleider, WA Emerson, DC Tormey, JH Glick, MH Veeder and JA Maillard, Fluorouracil plus Levamisole as an effective adjuvant therapy after resection of stage II colon carcinoma: a final report. Annals of Internal Med, 122:321-326, 1991. r None `ratetable` Rate table structure --------------------------------- ### Description Description of the rate tables used by expected survival routines. ### Details A rate table contains event rates per unit time for some particular endpoint. Death rates are the most common use; the `survexp.us` table, for instance, contains death rates for the United States by year of age, sex, and calendar year. A rate table is structured as a multi-way array with the following attributes: dim the dimensions of the array dimnames a named list of dimnames. The names are used to match user data to the dimensions, e.g., see the `rmap` argument in the `pyears` example. If a dimension is categorical, such as `sex` in `survexp.us`, then the dimname itself is matched against user's data values. The matching ignores case and allows abbreviations, e.g., "M", "Male", and "m" all successfully match the `survexp.us` dimname of `sex=c("male", "female")`.
type a vector giving the type of each dimension, which will be 1= categorical, 2= continuous, 3= date, 4= calendar year of a US rate table. If `type` is 3 or 4, then the corresponding cutpoints must be one of the calendar date types: Date, POSIXt, date, or chron. This allows the code to properly match user data to the ratetable. (The published US decennial rate tables' definition is that a subject does not begin to experience a new year's death rate on Jan 1, but rather on their next birthday. The actual impact of this delay on any given subject's calculation is negligible, but the code has always tried to be correct.) cutpoints a list with one element per dimension. If `type=1` then the corresponding list element should be NULL, otherwise it should be a vector of length `dim[i]` containing the starting point of the interval to which the corresponding row/col of the array applies. Cutpoints must be in the same units as the underlying table, e.g., the `survexp.us` table contains death rates per day, so the `age` cutpoint vector contains age in days while `year` contains a vector of Dates. Cutpoints do not need to be evenly spaced: the `survexp.us` table, for instance, originally had age divided up as 0-1 days, 1-7 days, 7-28 days, 28 days - 1 year, 2, 3, ... 119 years. (Changes in the source of the tables made it difficult to continue splitting out the first year.) summary an optional summarization function. If present, it will be called with a numeric matrix that has one column per dimension and one row per observation. The function returns a character string giving a summary of the data. This is used by some routines to print an informative message, and provides one way to inform users of a data mistake, e.g., if the printout states that all subjects are between 0.14 and 0.23 years old it is likely that the user's age variable was in years when it should have been in days. dimid optional attribute containing the names of the dimnames. If the dimnames list itself has names, this attribute will be ignored. ### See Also `<survexp>`, `<pyears>`, `<survexp.us>` r None `lung` NCCTG Lung Cancer Data ------------------------------ ### Description Survival in patients with advanced lung cancer from the North Central Cancer Treatment Group. Performance scores rate how well the patient can perform usual daily activities. ### Usage ``` lung data(cancer, package="survival") ``` ### Format | | | | --- | --- | | inst: | Institution code | | time: | Survival time in days | | status: | censoring status 1=censored, 2=dead | | age: | Age in years | | sex: | Male=1 Female=2 | | ph.ecog: | ECOG performance score as rated by the physician. 0=asymptomatic, 1= symptomatic but completely ambulatory, 2= in bed <50% of the day, 3= in bed > 50% of the day but not bedbound, 4 = bedbound | | ph.karno: | Karnofsky performance score (bad=0-good=100) rated by physician | | pat.karno: | Karnofsky performance score as rated by patient | | meal.cal: | Calories consumed at meals | | wt.loss: | Weight loss in last six months | | | ### Note The use of 1/2 for alive/dead instead of the usual 0/1 is a historical footnote. For data contained on punch cards, IBM 360 Fortran treated blank as a zero, which led to a policy within the section of Biostatistics to never use "0" as a data value since one could not distinguish it from a missing value. The policy became a habit, as is often the case; and the 1/2 coding endured long beyond the demise of punch cards and Fortran. ### Source Terry Therneau ### References Loprinzi CL. Laurie JA.
Wieand HS. Krook JE. Novotny PJ. Kugler JW. Bartel J. Law M. Bateman M. Klatt NE. et al. Prospective evaluation of prognostic variables from patient-completed questionnaires. North Central Cancer Treatment Group. Journal of Clinical Oncology. 12(3):601-7, 1994. r None `plot.survfit` Plot method for survfit objects ----------------------------------------------- ### Description A plot of survival curves is produced, one curve for each stratum. The `log=T` option does extra work to avoid log(0), and to try to create a pleasing result. If there are zeros, they are plotted by default at 0.8 times the smallest non-zero value on the curve(s). Curves are plotted in the same order as they are listed by `print` (which gives a 1 line summary of each). This will be the order in which `col`, `lty`, etc are used. ### Usage ``` ## S3 method for class 'survfit' plot(x, conf.int=, mark.time=FALSE, pch=3, col=1, lty=1, lwd=1, cex=1, log=FALSE, xscale=1, yscale=1, xlim, ylim, xmax, fun, xlab="", ylab="", xaxs="r", conf.times, conf.cap=.005, conf.offset=.012, conf.type = c("log", "log-log", "plain", "logit", "arcsin"), mark, noplot="(s0)", cumhaz=FALSE, firstx, ymin, ...) ``` ### Arguments | | | | --- | --- | | `x` | an object of class `survfit`, usually returned by the `survfit` function. | | `conf.int` | determines whether pointwise confidence intervals will be plotted. The default is to do so if there is only 1 curve, i.e., no strata, using 95% confidence intervals. Alternatively, this can be a numeric value giving the desired confidence level. | | `mark.time` | controls the labeling of the curves. If set to `FALSE`, no labeling is done. If `TRUE`, then curves are marked at each censoring time. If `mark.time` is a numeric vector then curves are marked at the specified time points. | | `pch` | vector of characters which will be used to label the curves. The `points` help file contains examples of the possible marks. A single string such as "abcd" is treated as a vector `c("a", "b", "c", "d")`. The vector is reused cyclically if it is shorter than the number of curves. If it is present this implies `mark.time = TRUE`. | | `col` | a vector of integers specifying colors for each curve. The default value is 1. | | `lty` | a vector of integers specifying line types for each curve. The default value is 1. | | `lwd` | a vector of numeric values for line widths. The default value is 1. | | `cex` | a numeric value specifying the size of the marks. This is not treated as a vector; all marks have the same size. | | `log` | a logical value, if TRUE the y axis will be on a log scale. Alternately, one of the standard character strings "x", "y", or "xy" can be given to specify logarithmic horizontal and/or vertical axes. | | `xscale` | a numeric value used like `yscale` for labels on the x axis. A value of 365.25 will give labels in years instead of the original days. | | `yscale` | a numeric value used to multiply the labels on the y axis. A value of 100, for instance, would be used to give a percent scale. Only the labels are changed, not the actual plot coordinates, so that adding a curve with "`lines(survexp(...))`", say, will perform as it did without the `yscale` argument. | | `xlim,ylim` | optional limits for the plotting region. | | `xmax` | the maximum horizontal plot coordinate. This can be used to shrink the range of a plot. It shortens the curve before plotting it, so that unlike using the `xlim` graphical parameter, warning messages about out of bounds points are not generated.
| | `fun` | an arbitrary function defining a transformation of the survival (or probability in state, or cumulative hazard) curves. For example `fun=log` is an alternative way to draw a log-survival curve (but with the axis labeled with log(S) values), and `fun=sqrt` would generate a curve on square root scale. Four often used transformations can be specified with a character argument instead: `"S"` gives the usual survival curve, `"log"` is the same as using the `log=T` option, `"event"` or `"F"` plots the empirical CDF *F(t)= 1-S(t)* (f(y) = 1-y), and `"cloglog"` creates a complementary log-log survival plot (f(y) = log(-log(y)) along with log scale for the x-axis). The terms `"identity"` and `"surv"` are allowed as synonyms for `type="S"`. The argument `"cumhaz"` causes the cumulative hazard function to be plotted. | | `xlab` | label given to the x-axis. | | `ylab` | label given to the y-axis. | | `xaxs` | either `"S"` for a survival curve or a standard x axis style as listed in `par`; "r" (regular) is the R default. Survival curves have historically been displayed with the curve touching the y-axis, but not touching the bounding box of the plot on the other 3 sides. Type `"S"` accomplishes this by manipulating the plot range and then using the `"i"` style internally. The "S" style is becoming increasingly less common, however. | | `conf.times` | optional vector of times at which to place a confidence bar on the curve(s). If present, these will be used instead of confidence bands. | | `conf.cap` | width of the horizontal cap on top of the confidence bars; only used if conf.times is used. A value of 1 is the width of the plot region. | | `conf.offset` | the offset for confidence bars, when there are multiple curves on the plot. A value of 1 is the width of the plot region. If this is a single number then each curve's bars are offset by this amount from the prior curve's bars, if it is a vector the values are used directly. | | `conf.type` | One of `"plain"`, `"log"` (the default), `"log-log"` or `"logit"`. Only enough of the string to uniquely identify it is necessary. The plain option gives the standard intervals `curve +- k *se(curve)`, where k is determined from `conf.int`. The log option calculates intervals based on the cumulative hazard or log(survival). The log-log option bases the intervals on the log hazard or log(-log(survival)), and the logit option on log(survival/(1-survival)). | | `mark` | a historical alias for `pch` | | `noplot` | for multi-state models, curves with this label will not be plotted. (Also see the `istate0` argument in `survcheck`.) | | `cumhaz` | plot the cumulative hazard rather than the probability in state or survival. Optionally, this can be a numeric vector specifying which columns of the `cumhaz` component to plot. | | `ymin` | this will normally be given as part of the `ylim` argument | | `firstx` | this will normally be given as part of the `xlim` argument. | | `...` | other arguments that will be passed forward to the underlying plot method, such as xlab or ylab. | ### Details If the object contains a cumulative hazard curve, then `fun='cumhaz'` will plot that curve, otherwise it will plot -log(S) as an approximation. Theoretically, S = *exp(-H)* where S is the survival and *H* is the cumulative hazard. The same relationship holds for estimates of S and *H* only in special cases, but the approximation is often close.
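For instance, a minimal sketch using the `aml` data from this package, overlaying the -log(S) approximation on the stored cumulative hazard:

```
# Compare the stored cumulative hazard to the -log(S) approximation
afit <- survfit(Surv(time, status) ~ 1, data = aml)
plot(afit, cumhaz = TRUE, conf.int = FALSE)  # cumulative hazard estimate
lines(afit$time, -log(afit$surv), type = "s", lty = 2, col = 2)  # -log(S)
```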
When the `survfit` function creates a multi-state survival curve the resulting object also has class ‘survfitms’. Competing risk curves are a common case. In this situation the `fun` argument is ignored. When the `conf.times` argument is used, the confidence bars are offset by `conf.offset` units to avoid overlap. The bars on each curve give the confidence interval for the time point at which the bar is drawn, i.e., different time points for each curve. If curves are steep at that point, the visual impact can sometimes substantially differ for positive and negative values of `conf.offset`. ### Value a list with components `x` and `y`, containing the coordinates of the last point on each of the curves (but not the confidence limits). This may be useful for labeling. ### Note In prior versions the behavior of `xscale` and `yscale` differed: the first changed the scale both for the plot and for all subsequent actions such as adding a legend, whereas `yscale` affected only the axis label. This was normalized in version 2.36-4, and both parameters now only affect the labeling. In versions prior to approximately 2.36 a `survfit` object did not contain the cumulative hazard as a separate result, and the use of fun="cumhaz" would plot the approximation -log(surv) to the cumulative hazard. When cumulative hazards were added to the object, the `cumhaz=TRUE` argument to the plotting function was added. In version 2.3-8 the use of fun="cumhaz" became a synonym for `cumhaz=TRUE`. ### See Also `[points.survfit](lines.survfit)`, `<lines.survfit>`, `[par](../../graphics/html/par)`, `<survfit>` ### Examples ``` leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml) plot(leukemia.surv, lty = 2:3) legend(100, .9, c("Maintenance", "No Maintenance"), lty = 2:3) title("Kaplan-Meier Curves\nfor AML Maintenance Study") lsurv2 <- survfit(Surv(time, status) ~ x, aml, type='fleming') plot(lsurv2, lty=2:3, fun="cumhaz", xlab="Months", ylab="Cumulative Hazard") ```
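One further sketch, not part of the original examples: the `conf.times` argument replaces confidence bands with bars at chosen times, which can reduce clutter when several curves overlap (the times below are illustrative):

```
# Confidence bars at selected times instead of full confidence bands
plot(leukemia.surv, lty = 2:3, conf.int = .95, conf.times = c(10, 20, 30))
```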
r None `residuals.survfit` IJ residuals from a survfit object. -------------------------------------------------------- ### Description Return infinitesimal jackknife residuals from a survfit object, for the survival, cumulative hazard, or restricted mean time in state (RMTS). ### Usage ``` ## S3 method for class 'survfit' residuals(object, times, type="pstate", collapse, weighted=FALSE, method=1, ...) ``` ### Arguments | | | | --- | --- | | `object` | a `survfit` object | | `times` | a vector of times at which the residuals are desired | | `type` | the type of residual, see below | | `collapse` | add the residuals for all subjects in a cluster | | `weighted` | weight the residuals by each observation's weight | | `method` | controls a choice of algorithm. Currently an internal debugging option. | | `...` | arguments for other methods | ### Details This function is designed to efficiently compute the leverage residuals at a small number of time points; a primary use is the creation of pseudo-values. If the residuals at all time points are needed, e.g. to compute a robust pointwise confidence interval for the survival curve, then this can be done more efficiently using the `influence` argument of the underlying `survfit` function. But be aware that such matrices can get very large. The residuals are the impact of each observation or cluster on the resulting probability in state curves at the given time points, the cumulative hazard curve at those time points, or the expected sojourn time in each state up to the given time points. For a simple Kaplan-Meier the `survfit` object contains only the probability in the "initial" state, i.e., the survival fraction. For the KM case the sojourn time, the expected amount of time spent in the initial state, up to the specified endpoint, is more commonly known as the restricted mean survival time (RMST). For a multistate model this same quantity is also referred to as the restricted mean time in state (RMTS). It can be computed as the area under the respective probability in state curve. The program allows any of `pstate`, `surv`, `cumhaz`, `chaz`, `sojourn`, `rmst`, `rmts` or `auc` for the type argument, ignoring upper/lowercase, so users can choose whichever abbreviation they like best. When `collapse=TRUE` the result has the cluster identifier (which defaults to the `id` variable) as the dimname for the first dimension. If the `fit` object contains more than one curve, and the same identifier is reused in two different curves this approach does not work and the routine will stop with an error. In principle this is not necessary, e.g., the result could contain two rows with the same label, showing the separate effect on each curve, but this was deemed too confusing. ### Value A matrix or array with one row per observation or cluster, and one column for each value in `times`. For a multi-state model the three dimensions are observation, time and state. For cumulative hazard, the last dimension is the set of transitions. (A competing risks model for instance has 3 states and 2 transitions.) ### See Also `<survfit>`, `<survfit.formula>` ### Examples ``` fit <- survfit(Surv(time, status) ~ x, aml) resid(fit, times=c(24, 48), type="RMTS") ``` r None `aml` Acute Myelogenous Leukemia survival data ----------------------------------------------- ### Description Survival in patients with Acute Myelogenous Leukemia. The question at the time was whether the standard course of chemotherapy should be extended ('maintenance') for additional cycles.
### Usage ``` aml leukemia data(cancer, package="survival") ``` ### Format | | | | --- | --- | | time: | survival or censoring time | | status: | censoring status | | x: | maintenance chemotherapy given? (factor) | | | ### Source Rupert G. Miller (1997), *Survival Analysis*. John Wiley & Sons. ISBN: 0-471-25218-2. r None `survival-deprecated` Deprecated functions in package survival --------------------------------------------------------------- ### Description These functions are temporarily retained for compatibility with older programs, and may transition to defunct status. ### Usage ``` survConcordance(formula, data, weights, subset, na.action) # use concordance survConcordance.fit(y, x, strata, weight) # use concordancefit ``` ### Arguments | | | | --- | --- | | `formula` | a formula object, with the response on the left of a `~` operator, and the terms on the right. The response must be a survival object as returned by the `Surv` function. | | `data` | a data frame | | `weights,subset,na.action` | as for `coxph` | | `x, y, strata, weight` | predictor, response, strata, and weight vectors for the direct call | ### See Also `[Deprecated](../../base/html/deprecated)` r None `survfit.matrix` Create Aalen-Johansen estimates of multi-state survival from a matrix of hazards. --------------------------------------------------------------------------------------------------- ### Description This allows one to create the Aalen-Johansen estimate of P, a matrix with one column per state and one row per time, starting with the individual hazard estimates. Each row of P will sum to 1. Note that this routine has been superseded by the use of multi-state Cox models, and will eventually be removed. ### Usage ``` ## S3 method for class 'matrix' survfit(formula, p0, method = c("discrete", "matexp"), start.time, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a matrix of lists, each element of which is either NULL or a survival curve object. | | `p0` | the initial state vector. The names of this vector are used as the names of the states in the output object. If there are multiple curves then `p0` can be a matrix with one row per curve. | | `method` | use a product of discrete hazards, or a product of matrix exponentials. See details below. | | `start.time` | optional; start the calculations at a given starting point | | `...` | further arguments used by other survfit methods | ### Details On input the matrix should contain a set of predicted curves for each possible transition, and NULL in other positions. Each of the predictions will have been obtained from the relevant Cox model. This approach for multistate curves is easy to use but has some caveats. First, the input curves must be consistent. The routine checks as best it can, but can easily be fooled. For instance, if one were to fit two Cox models, obtain predictions for males and females from one, and for treatment A and B from the other, this routine will create two curves but they are not meaningful. A second issue is that standard errors are not produced. The names of the resulting states are taken from the names of the vector of initial state probabilities. If they are missing, then the dimnames of the input matrix are used, and lacking that the labels '1', '2', etc. are used. For the usual Aalen-Johansen estimator the multiplier at each event time is the matrix of hazards H (also written as I + dA).
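Schematically, one update step under each of the two `method` choices looks like this (a sketch only, with hypothetical numbers for a two-state model; the real computation happens inside the routine, and the motivation for the matrix-exponential form is explained next):

```
# One Aalen-Johansen update at a single event time, two states, with a
# hypothetical hazard increment of 0.1 for the 1 -> 2 transition
p  <- c(1, 0)                                   # current probability in each state
dA <- matrix(c(-0.1, 0.1,
                0.0, 0.0), 2, 2, byrow = TRUE)  # hazard increments, rows sum to 0
p %*% (diag(2) + dA)   # "discrete" multiplier, H = I + dA
# "matexp" multiplier expm(H - I) = expm(dA); for this simple 2x2 case the
# matrix exponential can be written down in closed form (in general one would
# use a matrix exponential routine such as Matrix::expm)
E <- matrix(c(exp(-0.1), 1 - exp(-0.1),
              0,         1), 2, 2, byrow = TRUE)
p %*% E
```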
When using predicted survival curves from a Cox model, however, it is possible to get predicted hazards that are greater than 1, which leads to probabilities less than 0. If the `method` argument is not supplied and the input curves are derived from a Cox model this routine instead uses the approximation expm(H-I) as the multiplier, which always gives valid probabilities. (This is also the standard approach for ordinary survival curves from a Cox model.) ### Value a survfitms object ### Note The R syntax for creating a matrix of lists is very fussy. ### Author(s) Terry Therneau ### See Also `<survfit>` ### Examples ``` etime <- with(mgus2, ifelse(pstat==0, futime, ptime)) event <- with(mgus2, ifelse(pstat==0, 2*death, 1)) event <- factor(event, 0:2, labels=c("censor", "pcm", "death")) cfit1 <- coxph(Surv(etime, event=="pcm") ~ age + sex, mgus2) cfit2 <- coxph(Surv(etime, event=="death") ~ age + sex, mgus2) # predicted competing risk curves for a 72 year old with mspike of 1.2 # (median values), male and female. # The survfit call is a bit faster without standard errors. newdata <- expand.grid(sex=c("F", "M"), age=72, mspike=1.2) AJmat <- matrix(list(), 3,3) AJmat[1,2] <- list(survfit(cfit1, newdata, std.err=FALSE)) AJmat[1,3] <- list(survfit(cfit2, newdata, std.err=FALSE)) csurv <- survfit(AJmat, p0 =c(entry=1, PCM=0, death=0)) ``` r None `transplant` Liver transplant waiting list ------------------------------------------- ### Description Subjects on a liver transplant waiting list from 1990-1999, and their disposition: received a transplant, died while waiting, withdrew from the list, or censored. ### Usage ``` transplant data(transplant, package="survival") ``` ### Format A data frame with 815 (transplant) observations on the following 6 variables. `age` age at addition to the waiting list `sex` `m` or `f` `abo` blood type: `A`, `B`, `AB` or `O` `year` year in which they entered the waiting list `futime` time from entry to final disposition `event` final disposition: `censored`, `death`, `ltx` or `withdraw` ### Details This represents the transplant experience in a particular region, over a time period in which liver transplant became much more widely recognized as a viable treatment modality. The number of liver transplants rises over the period, but the number of subjects added to the liver transplant waiting list grew much faster. Important questions addressed by the data are the change in waiting time, who waits, and whether there was a consequent increase in deaths while on the list. Blood type is an important consideration. Donor livers from subjects with blood type O can be used by patients with A, B, AB or O blood types, whereas an AB liver can only be used by an AB recipient. Thus type O subjects on the waiting list are at a disadvantage, since the pool of competitors is larger for type O donor livers. This data is of historical interest and provides a useful example of competing risks, but it has little relevance to current practice. Liver allocation policies have evolved and now depend directly on each individual patient's risk and need, assessments of which are regularly updated while a patient is on the waiting list. The overall organ shortage remains acute, however. The `transplant` data set was a version used early in the analysis; `transplant2` has several additions and corrections, was the final data set, and matches the paper. ### References Kim WR, Therneau TM, Benson JT, Kremers WK, Rosen CB, Gores GJ, Dickson ER.
Deaths on the liver transplant waiting list: An analysis of competing risks. Hepatology 2006 Feb; 43(2):345-51. ### Examples ``` #since event is a factor, survfit creates competing risk curves pfit <- survfit(Surv(futime, event) ~ abo, transplant) pfit[,2] #time to liver transplant, by blood type plot(pfit[,2], mark.time=FALSE, col=1:4, lwd=2, xmax=735, xscale=30.5, xlab="Months", ylab="Fraction transplanted", xaxt = 'n') temp <- c(0, 6, 12, 18, 24) axis(1, temp*30.5, temp) legend(450, .35, levels(transplant$abo), lty=1, col=1:4, lwd=2) # competing risks for type O plot(pfit[4,], xscale=30.5, xmax=735, col=1:3, lwd=2) legend(450, .4, c("Death", "Transplant", "Withdrawal"), col=1:3, lwd=2) ``` r None `frailty` Random effects terms ------------------------------- ### Description The frailty function allows one to add a simple random effects term to a Cox model. ### Usage ``` frailty(x, distribution="gamma", ...) frailty.gamma(x, sparse = (nclass > 5), theta, df, eps = 1e-05, method = c("em","aic", "df", "fixed"), ...) frailty.gaussian(x, sparse = (nclass > 5), theta, df, method =c("reml","aic", "df", "fixed"), ...) frailty.t(x, sparse = (nclass > 5), theta, df, eps = 1e-05, tdf = 5, method = c("aic", "df", "fixed"), ...) ``` ### Arguments | | | | --- | --- | | `x` | the variable to be entered as a random effect. It is always treated as a factor. | | `distribution` | either the `gamma`, `gaussian` or `t` distribution may be specified. The routines `frailty.gamma`, `frailty.gaussian` and `frailty.t` do the actual work. | | `...` | Arguments for the specific distribution, including (but not limited to) | | `sparse` | cutoff for using a sparse coding of the data matrix. If the total number of levels of `x` is larger than this value, then a sparse matrix approximation is used. The correct cutoff is still a matter of exploration: if the number of levels is very large (thousands) then the non-sparse calculation may not be feasible in terms of both memory and compute time. Likewise, the accuracy of the sparse approximation appears to be related to the maximum proportion of subjects in any one class, being best when no one class has a large membership. | | `theta` | if specified, this fixes the variance of the random effect. If not, the variance is a parameter, and a best solution is sought. Specifying this implies `method='fixed'`. | | `df` | if specified, this fixes the degrees of freedom for the random effect. Specifying this implies `method='df'`. Only one of `theta` or `df` should be specified. | | `method` | the method used to select a solution for theta, the variance of the random effect. The `fixed` method corresponds to a user-specified value, and no iteration is done. The `df` method selects the variance such that the degrees of freedom for the random effect matches a user specified value. The `aic` method seeks to maximize Akaike's information criterion 2\*(partial likelihood - df). The `em` and `reml` methods are specific to Cox models with gamma and gaussian random effects, respectively. Please see further discussion below. | | `tdf` | the degrees of freedom for the t-distribution. | | `eps` | convergence criteria for the iteration on theta. | ### Details The `frailty` function plugs into the general penalized modeling framework provided by the `coxph` and `survreg` routines. This framework deals with likelihood, penalties, and degrees of freedom; these aspects work well with either parent routine.
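As a small sketch of the control arguments (the value of theta below is purely illustrative), fixing the variance of the random effect skips the outer iteration over theta entirely:

```
# Gamma frailty with the variance fixed at 1 (an illustrative value);
# specifying 'theta' implies method='fixed', so no search over theta is done
coxph(Surv(time, status) ~ rx + frailty(litter, theta = 1),
      data = rats, subset = (sex == 'f'))
```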
Therneau, Grambsch, and Pankratz show how maximum likelihood estimation for the Cox model with a gamma frailty can be accomplished using a general penalized routine, and Ripatti and Palmgren work through a similar argument for the Cox model with a gaussian frailty. Both of these are specific to the Cox model. Use of gamma/ml or gaussian/reml with `survreg` does not lead to valid results. The extensible structure of the penalized methods is such that the penalty function, such as `frailty` or `pspline`, is completely separate from the modeling routine. The strength of this is that a user can plug in any penalization routine they choose. A weakness is that it is very difficult for the modeling routine to know whether a sensible penalty routine has been supplied. Note that use of a frailty term implies a mixed effects model and use of a cluster term implies a GEE approach; these cannot be mixed. The `coxme` package has superseded this method. It is faster, more stable, and more flexible. ### Value this function is used in the model statement of either `coxph` or `survreg`. Its results are used internally. ### References S Ripatti and J Palmgren, Estimation of multivariate frailty models using penalized partial likelihood, Biometrics, 56:1016-1022, 2000. T Therneau, P Grambsch and VS Pankratz, Penalized survival models and frailty, J Computational and Graphical Statistics, 12:156-175, 2003. ### See Also <coxph>, <survreg> ### Examples ``` # Random institutional effect coxph(Surv(time, status) ~ age + frailty(inst, df=4), lung) # Litter effects for the rats data rfit2a <- coxph(Surv(time, status) ~ rx + frailty.gaussian(litter, df=13, sparse=FALSE), rats, subset= (sex=='f')) rfit2b <- coxph(Surv(time, status) ~ rx + frailty.gaussian(litter, df=13, sparse=TRUE), rats, subset= (sex=='f')) ``` r None `plot.aareg` Plot an aareg object. ----------------------------------- ### Description Plot the estimated coefficient function(s) from a fit of Aalen's additive regression model. ### Usage ``` ## S3 method for class 'aareg' plot(x, se=TRUE, maxtime, type='s', ...) ``` ### Arguments | | | | --- | --- | | `x` | the result of a call to the `aareg` function | | `se` | if TRUE, standard error bands are included on the plot | | `maxtime` | upper limit for the x-axis. | | `type` | graphical parameter for the type of line, default is "steps". | | `...` | other graphical parameters such as line type, color, or axis labels. | ### Side Effects A plot is produced on the current graphical device. ### References Aalen, O.O. (1989). A linear regression model for the analysis of life times. Statistics in Medicine, 8:907-925. ### See Also aareg r None `survexp.object` Expected Survival Curve Object ------------------------------------------------ ### Description This class of objects is returned by the `survexp` class of functions to represent a fitted survival curve. Objects of this class have methods for `summary`, and inherit the `print`, `plot`, `points` and `lines` methods from `survfit`. ### Arguments | | | | --- | --- | | `surv` | the estimate of survival at time t+0. This may be a vector or a matrix. | | `n.risk` | the number of subjects who contribute at this time. | | `time` | the time points at which the curve has a step. | | `std.err` | the standard error of the cumulative hazard or -log(survival). | | `strata` | if there are multiple curves, this component gives the number of elements of the `time` etc. vectors corresponding to the first curve, the second curve, and so on.
The names of the elements are labels for the curves. | | `method` | the estimation method used. One of "Ederer", "Hakulinen", or "conditional". | | `na.action` | the returned value from the na.action function, if any. It will be used in the printout of the curve, e.g., the number of observations deleted due to missing values. | | `call` | an image of the call that produced the object. | ### Structure The following components must be included in a legitimate `survfit` object. ### Subscripts Survexp objects that contain multiple survival curves can be subscripted. This is most often used to plot a subset of the curves. ### Details In expected survival each subject from the data set is matched to a hypothetical person from the parent population, matched on the factors found in the rate table. The number at risk printed here is the number of those hypothetical subjects who are still part of the calculation. In particular, for the Ederer method all hypotheticals are retained for all time, so `n.risk` will be a constant. ### See Also `<plot.survfit>`, `<summary.survexp>`, `<print.survfit>`, `<survexp>`. r None `survexp.us` Census Data Sets for the Expected Survival and Person Years Functions ----------------------------------------------------------------------------------- ### Description Census data sets for the expected survival and person years functions. ### Usage ``` data(survexp, package="survival") ``` ### Details survexp.us total United States population, by age and sex, 1940 to 2012. survexp.usr United States population, by age, sex and race, 1940 to 2014. Race is white, nonwhite, or black. For 1960 and 1970 the black population values were not reported separately, so the nonwhite values were used. survexp.mn total Minnesota population, by age and sex, 1970 to 2013. Each of these tables contains the daily hazard rate for a matched subject from the population, defined as *-\log(1-q)/365.25* where *q* is the 1 year probability of death as reported in the original tables from the US Census. For age 25 in 1970, for instance, *p = 1-q* is the probability that a subject who becomes 25 years of age in 1970 will achieve his/her 26th birthday. The tables are recast in terms of hazard per day for computational convenience. Each table is stored as an array, with additional attributes, and can be subset and manipulated as standard R arrays. See the help page for `ratetable` for details. All numeric dimensions of a rate table must be in the same units. The `survexp.us` rate table contains daily hazard rates, the age cutpoints are in days, and the calendar year cutpoints are a Date. ### See Also `<ratetable>`, `<survexp>`, `<pyears>` ### Examples ``` survexp.uswhite <- survexp.usr[,,"white",] ```
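A quick way to check what a rate table expects, using the `summary` call mentioned under `survexp` (the printed ranges depend on the installed version of the tables):

```
# Print the dimensions and cutpoint ranges of the US rate table; useful
# for verifying that user data (e.g., age in days) is on the right scale
summary(survexp.us)
```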
r None `cluster` Identify clusters. ----------------------------- ### Description This is a special function used in the context of survival models. It identifies correlated groups of observations, and is used on the right hand side of a formula. This style is now discouraged; use the `cluster` option instead. ### Usage ``` cluster(x) ``` ### Arguments | | | | --- | --- | | `x` | A character, factor, or numeric variable. | ### Details The function's only action is semantic, to mark a variable as the cluster indicator. The resulting variance is what is known as the “working independence” variance in a GEE model. Note that one cannot use both a frailty term and a cluster term in the same model; the first is a mixed-effects approach to correlation and the second a GEE approach, and these don't mix. ### Value `x` ### See Also `<coxph>`, `<survreg>` ### Examples ``` marginal.model <- coxph(Surv(time, status) ~ rx, data= rats, cluster=litter, subset=(sex=='f')) frailty.model <- coxph(Surv(time, status) ~ rx + frailty(litter), rats, subset=(sex=='f')) ``` r None `cipoisson` Confidence limits for the Poisson ---------------------------------------------- ### Description Confidence interval calculation for Poisson rates. ### Usage ``` cipoisson(k, time = 1, p = 0.95, method = c("exact", "anscombe")) ``` ### Arguments | | | | --- | --- | | `k` | Number of successes | | `time` | Total time on trial | | `p` | Probability level for the (two-sided) interval | | `method` | The method for computing the interval. | ### Details The exact method is based on equation 10.10 of Feller, which relates Poisson probabilities to the tail area of the gamma distribution. The Anscombe approximation is based on the fact that sqrt(k + 3/8) has a nearly constant variance of 1/4, along with a continuity correction. There are many other proposed intervals: Patil and Kulkarni list and evaluate 19 different suggestions from the literature! The exact intervals can be overly broad for very small values of `k`; many of the other approaches try to shrink the lengths, with varying success. ### Value a vector, matrix, or array. If both `k` and `time` are single values the result is a vector of length 2 containing the lower and upper limits. If either or both are vectors the result is a matrix with two columns. If `k` is a matrix or array, the result will be an array with one more dimension; in this case the dimensions and dimnames (if any) of `k` are preserved. ### References F.J. Anscombe (1949). Transformations of Poisson, binomial and negative-binomial data. Biometrika, 35:246-254. W.F. Feller (1950). An Introduction to Probability Theory and its Applications, Volume 1, Chapter 6, Wiley. V. V. Patil and H.F. Kulkarni (2012). Comparison of confidence intervals for the poisson mean: some new aspects. Revstat 10:211-227. ### See Also `[ppois](../../stats/html/family)`, `[qpois](../../stats/html/family)` ### Examples ``` cipoisson(4) # 95% confidence limit # lower upper # 1.089865 10.24153 ppois(4, 10.24153) #chance of seeing 4 or fewer events with large rate # [1] 0.02500096 1-ppois(3, 1.08986) #chance of seeing 4 or more, with a small rate # [1] 0.02499961 ``` r None `aggregate.survfit` Average survival curves -------------------------------------------- ### Description For a survfit object containing multiple curves, create average curves over a grouping. ### Usage ``` ## S3 method for class 'survfit' aggregate(x, by = NULL, FUN = mean, ...)
``` ### Arguments | | | | --- | --- | | `x` | a `survfit` object which has a data dimension. | | `by` | an optional list or vector of grouping elements, each as long as `dim(x)['data']`. | | `FUN` | a function to compute the summary statistic of interest. | | `...` | optional further arguments to FUN. | ### Details The primary use of this is to take an average over multiple survival curves that were created from a modeling function. That is, a marginal estimate of the survival. It is primarily used to average over multiple predicted curves from a Cox model. ### Value a `survfit` object of lower dimension. ### See Also `<survfit>` ### Examples ``` cfit <- coxph(Surv(futime, death) ~ sex + age*hgb, data=mgus2) # marginal effect of sex, after adjusting for the others dummy <- rbind(mgus2, mgus2) dummy$sex <- rep(c("F", "M"), each=nrow(mgus2)) # population data set dummy <- na.omit(dummy) # don't count missing hgb in our "population" csurv <- survfit(cfit, newdata=dummy) dim(csurv) # 2 * 1384 survival curves csurv2 <- aggregate(csurv, dummy$sex) ``` r None `cgd0` Chronic Granulomatous Disease data ------------------------------------------ ### Description Data are from a placebo controlled trial of gamma interferon in chronic granulomatous disease (CGD). Contains the data on time to serious infections observed through end of study for each patient. ### Usage ``` cgd0 ``` ### Format id subject identification number center enrolling center random date of randomization treatment placebo or gamma interferon sex sex age age in years, at study entry height height in cm at study entry weight weight in kg at study entry inherit pattern of inheritance steroids use of steroids at study entry, 1=yes propylac use of prophylactic antibiotics at study entry hos.cat a categorization of the centers into 4 groups futime days to last follow-up etime1-etime7 up to 7 infection times for the subject ### Details The `cgd0` data set (this one) is in the form found in the references, with one line per patient and no recoding of the variables. The `cgd` data set has been further processed so as to have one line per event, with covariates such as center recoded as factors to include meaningful labels. ### Source Fleming and Harrington, Counting Processes and Survival Analysis, appendix D.2. ### See Also `<cgd>` r None `dsurvreg` Distributions available in survreg. ----------------------------------------------- ### Description Density, cumulative distribution function, quantile function and random generation for the set of distributions supported by the `survreg` function. ### Usage ``` dsurvreg(x, mean, scale=1, distribution='weibull', parms) psurvreg(q, mean, scale=1, distribution='weibull', parms) qsurvreg(p, mean, scale=1, distribution='weibull', parms) rsurvreg(n, mean, scale=1, distribution='weibull', parms) ``` ### Arguments | | | | --- | --- | | `x` | vector of quantiles. Missing values (`NA`s) are allowed. | | `q` | vector of quantiles. Missing values (`NA`s) are allowed. | | `p` | vector of probabilities. Missing values (`NA`s) are allowed. | | `n` | number of random deviates to produce | | `mean` | vector of linear predictors for the model. This is replicated to be the same length as `p`, `q` or `n`. | | `scale` | vector of (positive) scale factors. This is replicated to be the same length as `p`, `q` or `n`. | | `distribution` | character string giving the name of the distribution. This must be one of the elements of `survreg.distributions` | | `parms` | optional parameters, if any, of the distribution.
For the t-distribution this is the degrees of freedom. | ### Details Elements of `q` or `p` that are missing will cause the corresponding elements of the result to be missing. The `location` and `scale` values are as they would be for `survreg`. The label "mean" was an unfortunate choice (made in mimicry of qnorm); since almost none of these distributions are symmetric it will not actually be a mean, but corresponds instead to the linear predictor of a fitted model. Translation to the usual parameterization found in a textbook is not always obvious. For example, the Weibull distribution is fit using the Extreme value distribution along with a log transformation. Letting *F(t) = 1 - exp(-(at)^p)* be the cumulative distribution of the Weibull using a standard parameterization in terms of *a* and *p*, the survreg location corresponds to *-log(a)* and the scale to *1/p* (Kalbfleisch and Prentice, section 2.2.2). ### Value density (`dsurvreg`), probability (`psurvreg`), quantile (`qsurvreg`), or random deviates (`rsurvreg`) for the requested distribution, with location and scale parameters `mean` and `scale`. ### References Kalbfleisch, J. D. and Prentice, R. L. (1980). *The Statistical Analysis of Failure Time Data*. Wiley, New York. ### See Also `<survreg>`, `[Normal](../../stats/html/normal)` ### Examples ``` # List of distributions available names(survreg.distributions) ## Not run: [1] "extreme" "logistic" "gaussian" "weibull" "exponential" [6] "rayleigh" "loggaussian" "lognormal" "loglogistic" "t" ## End(Not run) # Compare results all.equal(dsurvreg(1:10, 2, 5, dist='lognormal'), dlnorm(1:10, 2, 5)) # Hazard function for a Weibull distribution x <- seq(.1, 3, length=30) haz <- dsurvreg(x, 2, 3)/ (1-psurvreg(x, 2, 3)) ## Not run: plot(x, haz, log='xy', ylab="Hazard") #line with slope (1/scale -1) ## End(Not run) ``` r None `rttright` Compute redistribute-to-the-right weights ----------------------------------------------------- ### Description For many survival estimands, one approach is to redistribute each censored observation's weight to those other observations with a longer survival time (think of distributing an estate to the heirs). Then compute on the remaining, uncensored data. ### Usage ``` rttright(formula, data, weights, subset, na.action, times, id, timefix = TRUE) ``` ### Arguments | | | | --- | --- | | `formula` | a formula object, which must have a `Surv` object as the response on the left of the `~` operator and, if desired, terms separated by + operators on the right. Each unique combination of predictors will define a separate stratum. | | `data` | a data frame in which to interpret the variables named in the formula, `subset` and `weights` arguments. | | `weights` | The weights must be nonnegative and it is strongly recommended that they be strictly positive, since zero weights are ambiguous, compared to use of the `subset` argument. | | `subset` | expression saying that only a subset of the rows of the data should be used in the fit. | | `na.action` | a missing-data filter function, applied to the model frame, after any `subset` argument has been used. Default is `options()$na.action`. | | `times` | a vector of time points, for which to return updated weights. If missing, a time after the largest time in the data is assumed. | | `id` | optional: if the data set has multiple rows per subject, a variable containing the subject identifier of each row. | | `timefix` | correct for possible round-off error | ### Details The `formula` argument is treated exactly the same as in the `survfit` function.
Redistribution is recursive: redistribute the weight of the smallest censored observation to all those with longer time, which may include other censored observations. Then redistribute the next smallest, and so on, up to the specified `times` value. After re-distributing the weight for a censored observation to other observations that are not censored, ordinary non-censored methods can often be applied. For example, redistribution of the weights, followed by computation of the weighted cumulative distribution function, reprises the Kaplan-Meier estimator. A primary use of this routine is illustration of methods or exploration of new methods. Methods that use RTTR directly, such as the Brier score, will normally do these computations internally. ### Value a vector or matrix of weights, with one column for each requested time ### See Also `<survfit>` ### Examples ``` afit <- survfit(Surv(time, status) ~1, data=aml) rwt <- rttright(Surv(time, status) ~1, data=aml) index <- order(aml$time) cdf <- cumsum(rwt[index]) # weighted CDF cdf <- cdf[!duplicated(aml$time[index], fromLast=TRUE)] # remove duplicates cbind(time=afit$time, KM= afit$surv, RTTR= 1-cdf) ``` r None `is.ratetable` Verify that an object is of class ratetable. ------------------------------------------------------------ ### Description The function verifies not only the `class` attribute, but the structure of the object. ### Usage ``` is.ratetable(x, verbose=FALSE) ``` ### Arguments | | | | --- | --- | | `x` | the object to be verified. | | `verbose` | if `TRUE` and the object is not a ratetable, then return a character string describing the way(s) in which `x` fails to be a proper ratetable object. | ### Details Rate tables are used by the `pyears` and `survexp` functions, and normally contain death rates for some population, categorized by age, sex, or other variables. They have a fairly rigid structure, and the `verbose` option can help in creating a new rate table. ### Value returns `TRUE` if `x` is a ratetable, and `FALSE` or a description if it is not. ### See Also `<pyears>`, `<survexp>`. ### Examples ``` is.ratetable(survexp.us) # True is.ratetable(lung) # False ``` r None `survfit.object` Survival Curve Object --------------------------------------- ### Description This class of objects is returned by the `survfit` class of functions to represent a fitted survival curve. For a multi-state model the object has class `c('survfitms', 'survfit')`. Objects of this class have methods for the functions `print`, `summary`, `plot`, `points` and `lines`. The `<print.survfit>` method does more computation than is typical for a print method and is documented on a separate page. ### Arguments | | | | --- | --- | | `n` | total number of subjects in each curve. | | `time` | the time points at which the curve has a step. | | `n.risk` | the number of subjects at risk at t. | | `n.event` | the number of events that occur at time t. | | `n.enter` | for counting process data only, the number of subjects that enter at time t. | | `n.censor` | for counting process data only, the number of subjects who exit the risk set, without an event, at time t. (For right censored data, this number can be computed from the successive values of the number at risk). | | `surv` | the estimate of survival at time t+0. This may be a vector or a matrix. The latter occurs when a set of survival curves is created from a single Cox model, in which case there is one column for each covariate set.
| | `pstate` | a multi-state survival will have the `pstate` component instead of `surv`. It will be a matrix containing the estimated probability of each state at each time, one column per state. | | `std.err` | for a survival curve this contains the standard error of the cumulative hazard or -log(survival); for a multi-state curve it contains the standard error of `pstate`. This difference is a reflection of the fact that each is the natural calculation for that case. | | `cumhaz` | optional. Contains the cumulative hazard for each possible transition. | | `strata` | if there are multiple curves, this component gives the number of elements of the `time` vector corresponding to the first curve, the second curve, and so on. The names of the elements are labels for the curves. | | `upper` | optional upper confidence limit for the survival curve or pstate | | `lower` | optional lower confidence limit for the survival curve or pstate | | `start.time` | optional, the starting time for the curve if other than 0 | | `p0, sp0` | for a multistate object, the distribution of starting states. If the curve has a strata dimension, this will be a matrix with one row per stratum. The `sp0` element has the standard error of p0, if p0 was estimated. | | `newdata` | for survival curves from a fitted model, this contains the covariate values for the curves | | `n.all` | the total number of observations that were available. For counting process data, or any time that the `start.time` argument was used, not all may have been used in creating the curve, in which case this value will be larger than `n` above. The `print` and `plot` routines in the package do not use this value; it is for information only. | | `conf.type` | the approximation used to compute the confidence limits. | | `conf.int` | the level of the confidence limits, e.g. 90 or 95%. | | `transitions` | for multi-state data, the total number of transitions of each type. | | `na.action` | the returned value from the na.action function, if any. It will be used in the printout of the curve, e.g., the number of observations deleted due to missing values. | | `call` | an image of the call that produced the object. | | `type` | type of survival censoring. | | `influence.p, influence.c` | optional influence matrices for the `pstate` (or `surv`) and for the `cumhaz` estimates. A list with one element per stratum, each element of the list is an array indexed by subject, time, state. | | `version` | the version of the object. Will be missing, 2, or 3. | ### Structure The following components must be included in a legitimate `survfit` or `survfitms` object. ### Subscripts Survfit objects can be subscripted. This is often used to plot a subset of the curves, for instance. From the user's point of view the `survfit` object appears to be a vector, matrix, or array of curves. The first dimension is always the underlying number of curves or “strata”; for multi-state models the state is always the last dimension. Predicted curves from a Cox model can have a second dimension which is the number of different covariate prediction vectors. ### Details The `survfit` object has evolved over time: when first created there was no thought of multi-state models for instance. This evolution has almost entirely been accomplished by the addition of new elements. One change in survival version 3 is the addition of a `survfitconf` routine which will compute confidence intervals for a `survfit` object.
This allows the computation of confidence intervals to be deferred until later, if desired, rather than making them a permanent part of the object. Later iterations of the base routines may omit the confidence intervals. The survfit object starts at the first observation time, but survival curves are normally plotted from time 0. A helper routine `survfit0` can be used to add this first time point and align the data. ### See Also `<plot.survfit>`, `<summary.survfit>`, `<print.survfit>`, `<survfit>`, `<survfit0>` r None `yates` Population prediction ------------------------------ ### Description Compute population marginal means (PMM) from a model fit, for a chosen population and statistic. ### Usage ``` yates(fit, term, population = c("data", "factorial", "sas"), levels, test = c("global", "trend", "pairwise"), predict = "linear", options, nsim = 200, method = c("direct", "sgtt")) ``` ### Arguments | | | | --- | --- | | `fit` | a model fit. Examples using lm, glm, and coxph objects are given in the vignette. | | `term` | the term from the model which is to be evaluated. This can be written as a character string or as a formula. | | `population` | the population to be used for the adjusting variables. User can supply their own data frame or select one of the built in choices. The argument also allows "empirical" and "yates" as aliases for data and factorial, respectively, and ignores case. | | `levels` | optional, what values for `term` should be used. | | `test` | the test for comparing the population predictions. | | `predict` | what to predict. For a glm model this might be the 'link' or 'response'. For a coxph model it can be linear, risk, or survival. User written functions are allowed. | | `options` | optional arguments for the prediction method. | | `nsim` | number of simulations used to compute a variance for the predictions. This is not needed for the linear predictor. | | `method` | the computational approach for testing equality of the population predictions. Either the direct approach or the algorithm used by the SAS GLM procedure for "type 3" tests. | ### Details The many options and details of this function are best described in a vignette on population prediction. ### Value an object of class `yates` with components | | | | --- | --- | | `estimate` | a data frame with one row for each level of the term, and columns containing the level, the mean population predicted value (mppv) and its standard deviation. | | `tests` | a matrix giving the test statistics | | `mvar` | the full variance-covariance matrix of the mppv values | | `summary` | optional: any further summary of the values provided by the prediction method. | ### Author(s) Terry Therneau ### Examples ``` fit1 <- lm(skips ~ Solder*Opening + Mask, data = solder) yates(fit1, ~Opening, population = "factorial") fit2 <- coxph(Surv(time, status) ~ factor(ph.ecog)*sex + age, lung) yates(fit2, ~ ph.ecog, predict="risk") # hazard ratio ```
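The components listed under ‘Value’ can be examined directly; a brief sketch continuing the first example above (printed output not shown):

```
yfit <- yates(fit1, ~Opening, population = "factorial")
yfit$estimate   # one row per level of Opening: the PMM and its std dev
yfit$tests      # the requested test statistics
yfit$mvar       # variance-covariance matrix of the PMM values
```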
r None `survcheck` Checks of a survival data set ------------------------------------------ ### Description Perform a set of consistency checks on survival data ### Usage ``` survcheck(formula, data, subset, na.action, id, istate, istate0="(s0)", timefix=TRUE, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a model formula with a `Surv` object as the response | | `data` | data frame in which to find the `id`, `istate` and formula variables | | `subset` | expression indicating which subset of the rows of data should be used in the fit. All observations are included by default. | | `na.action` | a missing-data filter function. This is applied to the model.frame after any subset argument has been used. Default is `options()$na.action`. | | `id` | an identifier that labels unique subjects | | `istate` | an optional vector giving the current state at the start of each interval | | `istate0` | default label for the initial state of each subject (at their first interval) when `istate` is missing | | `timefix` | process times through the `aeqSurv` function to eliminate potential roundoff issues. | | `...` | other arguments, which are ignored (but won't give an error if someone added `weights` for instance) | ### Details This routine will examine a multi-state data set for consistency of the data. The basic rules are that if a subject is at risk they have to be somewhere, cannot be in two states at once, and should make sensible transitions from state to state. It reports the number of instances of the following conditions: overlap two observations for the same subject that overlap in time, e.g. intervals of (0, 100) and (90, 120). If `y` is a simple (time, status) survival observation, intervals implicitly start at 0, so in that case any duplicate identifiers will generate an overlap. jump a hole in a subject's timeline, where they are in one state at the end of the prior interval, but in a new state at the start of the subsequent interval. gap one or more gaps in a subject's timeline; they are presumably in the same state at their return as when they left. teleport two adjacent intervals for a subject, with the first interval ending in one state and the subsequent interval starting in another. They have instantaneously changed states without experiencing a transition. The total number of occurrences of each is present in the `flags` vector. Optional components give the location and identifiers of the flagged observations. ### Value a list with components | | | | --- | --- | | `states` | the vector of possible states | | `transitions` | a matrix giving the count of transitions from one state to another | | `statecount` | table of the number of visits per state, e.g., 18 subjects had 2 visits to the "infection" state | | `flags` | a vector giving the counts of each check | | `istate` | a copy of the istate vector, if it was supplied; otherwise a constructed istate that satisfies all the checks | | `overlap` | a list with the row number and id of overlaps (not present if there are no overlaps) | | `gaps` | a list with the row number and id of gaps (not present if there are no gaps) | | `teleport` | a list with the row number and id of inconsistent rows (not present if there are none) | | `jumps` | a list with the row number and id of jumps (not present if there are no jumps) | ### Note For data sets with time-dependent covariates, a given subject will often have intermediate rows with a status of ‘no event at this time’ (numeric value of 0).
For instance, a subject who started in state 1 at time 0, transitioned to state 2 at time 10, had a covariate `x` change from 135 to 156 at time 20, and had a final transition to state 3 at time 30. The response would be `Surv(c(0, 10, 20), c(10, 20, 30), c(2,0,3))`: the status variable records *changes* in state, and there was no change at time 20. The `istate` variable would be (1, 2, 2); it contains the *current* state, and so the value is unchanged when status = censored. Thus, when there are intermediate observations `istate` is not simply a lagged version of the status, and may be more challenging to create. One approach is to let `survcheck` do the work: call it with an `istate` argument that is correct for the first row of each subject, or no `istate` argument at all, and then insert the returned value into one's data frame. r None `survexp.fit` Compute Expected Survival ---------------------------------------- ### Description Compute expected survival times. ### Usage ``` survexp.fit(group, x, y, times, death, ratetable) ``` ### Arguments | | | | --- | --- | | `group` | if there are multiple survival curves this identifies the group, otherwise it is a constant. Must be an integer. | | `x` | A matrix whose columns match the dimensions of the `ratetable`, in the correct order. | | `y` | the follow up time for each subject. | | `times` | the vector of times at which a result will be computed. | | `death` | a logical value, if `TRUE` the conditional survival is computed, if `FALSE` the cohort survival is computed. See `<survexp>` for more details. | | `ratetable` | a rate table, such as `survexp.uswhite`. | ### Details For conditional survival `y` must be the time of last follow-up or death for each subject. For cohort survival it must be the potential censoring time for each subject, ignoring death. For an exact estimate `times` should be a superset of `y`, so that each subject at risk is at risk for the entire sub-interval of time. For a large data set, however, this can use an inordinate amount of storage and/or compute time. If the `times` spacing is more coarse than this, an actuarial approximation is used which should, however, be extremely accurate as long as all of the returned values are > .99. For a subgroup of size 1 and `times` > `y`, the conditional method reduces to exp(-h) where h is the expected cumulative hazard for the subject over his/her observation time. This is used to compute individual expected survival. ### Value A list containing the number of subjects and the expected survival(s) at each time point. If there are multiple groups, these will be matrices with one column per group. ### Warning Most users will call the higher level routine `survexp`. Consequently, this function has very few error checks on its input arguments. ### See Also `<survexp>`, `<survexp.us>`. r None `kidney` Kidney catheter data ------------------------------ ### Description Data on the recurrence times to infection, at the point of insertion of the catheter, for kidney patients using portable dialysis equipment. Catheters may be removed for reasons other than infection, in which case the observation is censored. Each patient has exactly 2 observations. This data has often been used to illustrate the use of random effects (frailty) in a survival model. However, one of the males (id 21) is a large outlier, with much longer survival than his peers. If this observation is removed no evidence remains for a random subject effect.
### Usage ``` kidney data(cancer, package="survival") ``` ### Format | | | | --- | --- | | patient: | id | | time: | time | | status: | event status | | age: | in years | | sex: | 1=male, 2=female | | disease: | disease type (0=GN, 1=AN, 2=PKD, 3=Other) | | frail: | frailty estimate from original paper | | | ### Note The original paper ignored the issue of tied times and so is not exactly reproduced by the survival package. ### Source CA McGilchrist, CW Aisbett (1991), Regression with frailty in survival analysis. *Biometrics* **47**, 461–66. ### Examples ``` kfit <- coxph(Surv(time, status)~ age + sex + disease + frailty(id), kidney) kfit0 <- coxph(Surv(time, status)~ age + sex + disease, kidney) kfitm1 <- coxph(Surv(time,status) ~ age + sex + disease + frailty(id, dist='gauss'), kidney) ``` r None `pseudo` Pseudo values for survival. ------------------------------------- ### Description Produce pseudo values from a survival curve. ### Usage ``` pseudo(fit, times, type, addNA=TRUE, data.frame=FALSE, minus1=FALSE, ...) ``` ### Arguments | | | | --- | --- | | `fit` | a `survfit` object, or one that inherits that class. | | `times` | a vector of time points, at which to evaluate the pseudo values. | | `type` | the type of value, either the probability in state `pstate`, the cumulative hazard `cumhaz` or the expected sojourn time in the state `sojourn`. | | `addNA` | If any observations were removed due to missing values in the `fit` object, add those rows (as NA) into the return. This causes the result of pseudo to match the original dataframe. | | `data.frame` | if TRUE, return the data in "long" form as a data.frame with id, time, and pseudo as variables. | | `minus1` | use n-1 as the multiplier rather than n. | | `...` | other arguments to the `residuals.survfit` function, which does the majority of the work, e.g., `collapse` and `weighted`. | ### Details This function computes pseudo values based on a first order Taylor series, also known as the "infinitesimal jackknife" (IJ) or "dfbeta" residuals. To be completely correct these results could perhaps be called ‘IJ pseudo values’ or even pseudo pseudo-values. For moderate to large data, however, the resulting values will be almost identical, numerically, to the ordinary jackknife. A primary advantage of this approach is computational speed. Other features, neither good nor bad, are that they will agree with robust standard errors of other survival package estimates, which are based on the IJ, and that the mean of the estimates, over subjects, is exactly the underlying survival estimate. For the `type` variable, `surv` is an acceptable synonym for `pstate`, and `rmst, rmts` are equivalent to `sojourn`. All of these are case insensitive. ### Value A vector, matrix, or array. The first dimension is always the number of observations in the `fit` object, in the same order as the original data set (less any missing values that were removed when creating the survfit object); the second, if applicable, corresponds to `fit$states`, e.g., multi-state survival, and the last dimension to the selected time points. For the data.frame option, a data frame containing values for id, time, and pseudo. If the original `survfit` call contained an `id` statement, then the values in the `id` column will be taken from that variable. If the `id` statement has a simple form, e.g., `id = patno`, then the name of the id column will be ‘patno’, otherwise it will be named ‘(id)’.
### References PK Andersen and M Pohar-Perme, Pseudo-observations in survival analysis, Stat Methods Medical Res, 2010; 19:71-99 ### See Also `<residuals.survfit>` ### Examples ``` fit1 <- survfit(Surv(time, status) ~ 1, data=lung) yhat <- pseudo(fit1, times=c(365, 730)) dim(yhat) lfit <- lm(yhat[,1] ~ ph.ecog + age + sex, data=lung) ``` r None `ratetableDate` Convert date objects to ratetable form ------------------------------------------------------- ### Description This method converts dates from various forms into the internal form used in `ratetable` objects. ### Usage ``` ratetableDate(x) ``` ### Arguments | | | | --- | --- | | `x` | a date. The function currently has methods for Date, date, POSIXt, timeDate, and chron objects. | ### Details This function is useful for those who create new ratetables, but is normally invisible to users. It is used internally by the `survexp` and `pyears` functions to map the various date formats; if a new method is added then those routines will automatically be adapted to the new date type. ### Value a numeric vector, the number of days since 1/1/1960. ### Author(s) Terry Therneau ### See Also `<pyears>`, `<survexp>` r None `myeloid` Acute myeloid leukemia --------------------------------- ### Description This simulated data set is based on a trial in acute myeloid leukemia. ### Usage ``` myeloid data(cancer, package="survival") ``` ### Format A data frame with 646 observations on the following 9 variables. `id` subject identifier, 1-646 `trt` treatment arm A or B `sex` f=female, m=male `futime` time to death or last follow-up `death` 1 if `futime` is a death, 0 for censoring `txtime` time to hematopoietic stem cell transplant `crtime` time to complete response `rltime` time to relapse of disease ### Details This data set is used to illustrate multi-state survival curves. The correlation between within-subject event times strongly resembles that from an actual trial, but none of the actual data values are from that source. ### References Le-Rademacher JG, Peterson RA, Therneau TM, Sanford BL, Stone RM, Mandrekar SJ. Application of multi-state models in cancer clinical trials. Clin Trials. 2018 Oct; 15 (5):489-498 ### Examples ``` coxph(Surv(futime, death) ~ trt, data=myeloid) # See the mstate vignette for a more complete analysis ``` r None `pyears` Person Years ---------------------- ### Description This function computes the person-years of follow-up time contributed by a cohort of subjects, stratified into subgroups. It also computes the number of subjects who contribute to each cell of the output table, and optionally the number of events and/or expected number of events in each cell. ### Usage ``` pyears(formula, data, weights, subset, na.action, rmap, ratetable, scale=365.25, expect=c('event', 'pyears'), model=FALSE, x=FALSE, y=FALSE, data.frame=FALSE) ``` ### Arguments | | | | --- | --- | | `formula` | a formula object. The response variable will be a vector of follow-up times for each subject, or a `Surv` object containing the survival time and an event indicator. The predictors consist of optional grouping variables separated by + operators (exactly as in `survfit`), time-dependent grouping variables such as age (specified with `tcut`), and optionally a `ratetable` term. This latter matches each subject to his/her expected cohort. | | `data` | a data frame in which to interpret the variables named in the `formula`, or in the `subset` and the `weights` argument. | | `weights` | case weights.
| `subset` | expression saying that only a subset of the rows of the data should be used in the fit. | | `na.action` | a missing-data filter function, applied to the model.frame, after any `subset` argument has been used. Default is `options()$na.action`. | | `rmap` | an optional list that maps data set names to the ratetable names. See the details section below. | | `ratetable` | a table of event rates, such as `survexp.uswhite`. | | `scale` | a scaling for the results. As most rate tables are in units/day, the default value of 365.25 causes the output to be reported in years. | | `expect` | should the output table include the expected number of events, or the expected number of person-years of observation. This is only valid with a rate table. | | `data.frame` | return a data frame rather than a set of arrays. | | `model, x, y` | If any of these is true, then the model frame, the model matrix, and/or the vector of response times will be returned as components of the final result. | ### Details Because `pyears` may have several time variables, it is necessary that all of them be in the same units. For instance, in the call ``` py <- pyears(futime ~ rx, rmap=list(age=age, sex=sex, year=entry.dt), data=mydata, ratetable=survexp.us) ``` the natural unit of the ratetable is hazard per day, it is important that `futime`, `age` and `entry.dt` all be in days. Given the wide range of possible inputs, it is difficult for the routine to do sanity checks of this aspect. The ratetable being used may have different variable names than the user's data set, this is dealt with by the `rmap` argument. The rate table for the above calculation was `survexp.us`, a call to `summary(survexp.us)` reveals that it expects to have variables `age` = age in days, `sex`, and `year` = the date of study entry, we create them in the `rmap` line. The sex variable is not mapped, therefore the code assumes that it exists in `mydata` in the correct format. (Note: for factors such as sex, the program will match on any unique abbreviation, ignoring case.) A special function `tcut` is needed to specify time-dependent cutpoints. For instance, assume that age is in years, and that the desired final arrays have as one of their margins the age groups 0-2, 2-10, 10-25, and 25+. A subject who enters the study at age 4 and remains under observation for 10 years will contribute follow-up time to both the 2-10 and 10-25 subsets. If `cut(age, c(0,2,10,25,100))` were used in the formula, the subject would be classified according to his starting age only. The `tcut` function has the same arguments as `cut`, but produces a different output object which allows the `pyears` function to correctly track the subject. The results of `pyears` are normally used as input to further calculations. The `print` routine, therefore, is designed to give only a summary of the table. ### Value a list with components: | | | | --- | --- | | `pyears` | an array containing the person-years of exposure. (Or other units, depending on the rate table and the scale). The dimension and dimnames of the array correspond to the variables on the right hand side of the model equation. | | `n` | an array containing the number of subjects who contribute time to each cell of the `pyears` array. | | `event` | an array containing the observed number of events. This will be present only if the response variable is a `Surv` object. | | `expected` | an array containing the expected number of events (or person years if `expect="pyears"`).
This will be present only if there was a `ratetable` term. | | `data` | if the `data.frame` option was set, a data frame containing the variables `n`, `event`, `pyears` and `expected` that supplants the four arrays listed above, along with variables corresponding to each dimension. There will be one row for each cell in the arrays. | | `offtable` | the number of person-years of exposure in the cohort that was not part of any cell in the `pyears` array. This is often useful as an error check; if there is a mismatch of units between two variables, nearly all the person years may be off table. | | `tcut` | whether the call included any time-dependent cutpoints. | | `summary` | a summary of the rate-table matching. This is also useful as an error check. | | `call` | an image of the call to the function. | | `observations` | the number of observations in the input data set, after any missings were removed. | | `na.action` | the `na.action` attribute contributed by an `na.action` routine, if any. | ### See Also `<ratetable>`, `<survexp>`, `[Surv](surv)`. ### Examples ``` # Look at progression rates jointly by calendar date and age # temp.yr <- tcut(mgus$dxyr, 55:92, labels=as.character(55:91)) temp.age <- tcut(mgus$age, 34:101, labels=as.character(34:100)) ptime <- ifelse(is.na(mgus$pctime), mgus$futime, mgus$pctime) pstat <- ifelse(is.na(mgus$pctime), 0, 1) pfit <- pyears(Surv(ptime/365.25, pstat) ~ temp.yr + temp.age + sex, mgus, data.frame=TRUE) # Turn the factor back into numerics for regression tdata <- pfit$data tdata$age <- as.numeric(as.character(tdata$temp.age)) tdata$year<- as.numeric(as.character(tdata$temp.yr)) fit1 <- glm(event ~ year + age+ sex +offset(log(pyears)), data=tdata, family=poisson) ## Not run: # fit a gam model gfit.m <- gam(event ~ s(age) + s(year) + offset(log(pyears)), family = poisson, data = tdata) ## End(Not run) # Example #2 Create the hearta data frame: hearta <- by(heart, heart$id, function(x)x[x$stop == max(x$stop),]) hearta <- do.call("rbind", hearta) # Produce pyears table of death rates on the surgical arm # The first is by age at randomization, the second by current age fit1 <- pyears(Surv(stop/365.25, event) ~ cut(age + 48, c(0,50,60,70,100)) + surgery, data = hearta, scale = 1) fit2 <- pyears(Surv(stop/365.25, event) ~ tcut(age + 48, c(0,50,60,70,100)) + surgery, data = hearta, scale = 1) fit1$event/fit1$pyears #death rates on the surgery and non-surg arm fit2$event/fit2$pyears #death rates on the surgery and non-surg arm ```
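Continuing the example above, the `offtable` component provides the unit-consistency check described under ‘Value’ (a small sketch):

```
# person-years that fell outside every cell of the table; a large value
# here would signal a unit mismatch among the time variables
fit2$offtable
```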
r None `predict.coxph` Predictions for a Cox model -------------------------------------------- ### Description Compute fitted values and regression terms for a model fitted by `<coxph>` ### Usage ``` ## S3 method for class 'coxph' predict(object, newdata, type=c("lp", "risk", "expected", "terms", "survival"), se.fit=FALSE, na.action=na.pass, terms=names(object$assign), collapse, reference=c("strata", "sample", "zero"), ...) ``` ### Arguments | | | | --- | --- | | `object` | the results of a coxph fit. | | `newdata` | Optional new data at which to do predictions. If absent predictions are for the data frame used in the original fit. When coxph has been called with a formula argument created in another context, i.e., coxph has been called within another function and the formula was passed as an argument to that function, there can be problems finding the data set. See the note below. | | `type` | the type of predicted value. Choices are the linear predictor (`"lp"`), the risk score exp(lp) (`"risk"`), the expected number of events given the covariates and follow-up time (`"expected"`), and the terms of the linear predictor (`"terms"`). The survival probability for a subject is equal to exp(-expected). | | `se.fit` | if TRUE, pointwise standard errors are produced for the predictions. | | `na.action` | applies only when the `newdata` argument is present, and defines the missing value action for the new data. The default is to include all observations. When there is no newdata, then the behavior of missing is dictated by the na.action option of the original fit. | | `terms` | if type="terms", this argument can be used to specify which terms should be included; the default is all. | | `collapse` | optional vector of subject identifiers. If specified, the output will contain one entry per subject rather than one entry per observation. | | `reference` | reference for centering predictions, see details below | | `...` | For future methods | ### Details The Cox model is a *relative* risk model; predictions of type "linear predictor", "risk", and "terms" are all relative to the sample from which they came. By default, the reference value for each of these is the mean covariate within strata. The underlying reason is both statistical and practical. First, a Cox model only predicts relative risks between pairs of subjects within the same strata, and hence the addition of a constant to any covariate, either overall or only within a particular stratum, has no effect on the fitted results. Second, downstream calculations depend on the risk score exp(linear predictor), which will fall prey to numeric overflow for a linear predictor greater than `.Machine$double.max.exp`. The `coxph` routines try to approximately center the predictors out of self protection. Using the `reference="strata"` option is the safest centering, since strata occasionally have different means. When the results of `predict` are used in further calculations it may be desirable to use a single reference level for all observations. Use of `reference="sample"` will use the overall means, and agrees with the `linear.predictors` component of the coxph object (which uses the overall mean for backwards compatibility with older code). Predictions of `type="terms"` are almost invariably passed forward to further calculation, so for these we default to using the sample as the reference. A reference of `"zero"` causes no centering to be done.
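As a small sketch of the centering choices (using the `lung` data supplied with the package; the model is illustrative only):

```
fit <- coxph(Surv(time, status) ~ age + strata(sex), data = lung)
lp.strata <- predict(fit, reference = "strata")  # default: centered within stratum
lp.sample <- predict(fit, reference = "sample")  # centered at the overall means
# within each stratum the two versions differ only by a constant shift
tapply(lp.strata - lp.sample, lung$sex, sd)      # essentially zero in each group
```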
Predictions of type "expected" incorporate the baseline hazard and are thus absolute instead of relative; the `reference` option has no effect on these. These values depend on the follow-up time for the future subjects as well as covariates so the `newdata` argument needs to include both the right and *left* hand side variables from the formula. (The status variable will not be used, but is required since the underlying code needs to reconstruct the entire formula.) Models that contain a `frailty` term are a special case: due to the technical difficulty, when there is a `newdata` argument the predictions will always be for a random effect of zero. ### Value a vector or matrix of predictions, or a list containing the predictions (element "fit") and their standard errors (element "se.fit") if the se.fit option is TRUE. ### Note Some predictions can be obtained directly from the coxph object, and for others it is necessary for the routine to have the entirety of the original data set, e.g., for type = `terms` or if standard errors are requested. This extra information is saved in the coxph object if `model=TRUE`, if not the original data is reconstructed. If it is known that such predictions will be required, overall execution will be slightly faster if the model information is saved. In some cases the reconstruction can fail. The most common is when coxph has been called inside another function and the formula was passed as one of the arguments to that enclosing function. Another is when the data set has changed between the original call and the time of the prediction call. In each of these the simple solution is to add `model=TRUE` to the original coxph call. ### See Also `[predict](../../stats/html/predict)`,`<coxph>`,`[termplot](../../stats/html/termplot)` ### Examples ``` options(na.action=na.exclude) # retain NA in predictions fit <- coxph(Surv(time, status) ~ age + ph.ecog + strata(inst), lung) #lung data set has status coded as 1/2 mresid <- (lung$status-1) - predict(fit, type='expected') #Martingale resid predict(fit,type="lp") predict(fit,type="expected") predict(fit,type="risk",se.fit=TRUE) predict(fit,type="terms",se.fit=TRUE) # For someone who demands reference='zero' pzero <- function(fit) predict(fit, reference="sample") + sum(coef(fit) * fit$means, na.rm=TRUE) ``` r None `flchain` Assay of serum free light chain for 7874 subjects. ------------------------------------------------------------- ### Description This is a stratified random sample containing 1/2 of the subjects from a study of the relationship between serum free light chain (FLC) and mortality. The original sample contains samples on approximately 2/3 of the residents of Olmsted County aged 50 or greater. ### Usage ``` flchain data(flchain, package="survival") ``` ### Format A data frame with 7874 persons containing the following variables. `age` age in years `sex` F=female, M=male `sample.yr` the calendar year in which a blood sample was obtained `kappa` serum free light chain, kappa portion `lambda` serum free light chain, lambda portion `flc.grp` the FLC group for the subject, as used in the original analysis `creatinine` serum creatinine `mgus` 1 if the subject had been diagnosed with monoclonal gammopathy (MGUS) `futime` days from enrollment until death. Note that there are 3 subjects whose sample was obtained on their death date.
`death` 0=alive at last contact date, 1=dead `chapter` for those who died, a grouping of their primary cause of death by chapter headings of the International Code of Diseases ICD-9 ### Details In 1995 Dr. Robert Kyle embarked on a study to determine the prevalence of monoclonal gammopathy of undetermined significance (MGUS) in Olmsted County, Minnesota, a condition which is normally only found by chance from a test (serum electrophoresis) which is ordered for other causes. Later work suggested that one component of immunoglobulin production, the serum free light chain, might be a possible marker for immune dysregulation. In 2010 Dr. Angela Dispenzieri and colleagues assayed FLC levels on those samples from the original study for which they had patient permission and from which sufficient material remained for further testing. They found that elevated FLC levels were indeed associated with higher death rates. Patients were recruited when they came to the clinic for other appointments, with a final random sample of those who had not yet had a visit since the study began. An interesting side question is whether there are differences between early, mid, and late recruits. This data set contains an age and sex stratified random sample that includes 7874 of the original 15759 subjects. The original subject identifiers and dates have been removed to protect patient identity. Subsampling was done to further protect this information. ### Source The primary investigator (A Dispenzieri) and statistician (T Therneau) for the study. ### References A Dispenzieri, J Katzmann, R Kyle, D Larson, T Therneau, C Colby, R Clark, G Mead, S Kumar, LJ Melton III and SV Rajkumar (2012). Use of nonclonal serum immunoglobulin free light chains to predict overall survival in the general population, Mayo Clinic Proceedings 87:512-523. R Kyle, T Therneau, SV Rajkumar, D Larson, M Plevak, J Offord, A Dispenzieri, J Katzmann, and LJ Melton, III, 2006, Prevalence of monoclonal gammopathy of undetermined significance, New England J Medicine 354:1362-1369. ### Examples ``` data(flchain) age.grp <- cut(flchain$age, c(49,54, 59,64, 69,74,79, 89, 110), labels= paste(c(50,55,60,65,70,75,80,90), c(54,59,64,69,74,79,89,109), sep='-')) table(flchain$sex, age.grp) ``` r None `stanford2` More Stanford Heart Transplant data ------------------------------------------------ ### Description This contains the Stanford Heart Transplant data in a different format. The main data set is in `<heart>`. ### Usage ``` stanford2 ``` ### Format | | | | --- | --- | | id: | ID number | | time: | survival or censoring time | | status: | censoring status | | age: | in years | | t5: | T5 mismatch score | | | ### Source LA Escobar and WQ Meeker Jr (1992), Assessing influence in regression analysis with censored data. *Biometrics* **48**, 507–528. Page 519. ### See Also `<predict.survreg>`, `<heart>` r None `veteran` Veterans' Administration Lung Cancer study ----------------------------------------------------- ### Description Randomised trial of two treatment regimens for lung cancer. This is a standard survival analysis data set.
### Usage ``` veteran data(cancer, package="survival") ``` ### Format | | | | --- | --- | | trt: | 1=standard 2=test | | celltype: | 1=squamous, 2=smallcell, 3=adeno, 4=large | | time: | survival time | | status: | censoring status | | karno: | Karnofsky performance score (100=good) | | diagtime: | months from diagnosis to randomisation | | age: | in years | | prior: | prior therapy 0=no, 10=yes | | | ### Source JD Kalbfleisch and RL Prentice (1980), *The Statistical Analysis of Failure Time Data*. Wiley, New York. r None `Surv2data` Convert data from timecourse to (time1,time2) style ---------------------------------------------------------------- ### Description The multi-state survival functions `coxph` and `survfit` allow for two forms of input data. This routine converts between them. The function is normally called behind the scenes when `Surv2` is used as the response. ### Usage ``` Surv2data(formula, data, subset, id) ``` ### Arguments | | | | --- | --- | | `formula` | a model formula | | `data` | a data frame | | `subset` | optional, selects rows of the data to be retained | | `id` | a variable that identifies multiple rows for the same subject, normally found in the referenced data set | ### Details For timeline style data, each row is uniquely identified by an (identifier, time) pair. The time could be a date, time from entry to a study, age, etc, (there may often be more than one time variable). The identifier and time cannot be missing. The remaining covariates represent values that were observed at that time point. Often, a given covariate is observed at only a subset of times and is missing at others. At the time of death, in particular, often only the identifier, time, and status indicator are known. In the resulting data set missing covariates are replaced by their last known value, and the response y will be a Surv(time1, time2, endpoint) object. ### Value a list with elements | | | | --- | --- | | `mf` | an updated model frame (fewer rows, unchanged columns) | | `S2.y` | the constructed response variable | | `S2.state` | the current state for each of the rows | r None `coxph.detail` Details of a Cox Model Fit ------------------------------------------ ### Description Returns the individual contributions to the first and second derivative matrix, at each unique event time. ### Usage ``` coxph.detail(object, riskmat=FALSE, rorder=c("data", "time")) ``` ### Arguments | | | | --- | --- | | `object` | a Cox model object, i.e., the result of `coxph`. | | `riskmat` | include the at-risk indicator matrix in the output? | | `rorder` | if `riskmat=TRUE`, the rows of riskmat will be in the original data order, otherwise sorted by time within strata. | ### Details This function may be useful for those who wish to investigate new methods or extensions to the Cox model. The example below shows one way to calculate the Schoenfeld residuals. ### Value a list with components | | | | --- | --- | | `time` | the vector of unique event times | | `nevent` | the number of events at each of these time points. | | `means` | a matrix with one row for each event time and one column for each variable in the Cox model, containing the weighted mean of the variable at that time, over all subjects still at risk at that time. The weights are the risk weights `exp(x %*% fit$coef)`. | | `nrisk` | number of subjects at risk. | | `score` | the contribution to the score vector (first derivative of the log partial likelihood) at each time point.
| | `imat` | the contribution to the information matrix (second derivative of the log partial likelihood) at each time point. | | `hazard` | the hazard increment. Note that the hazard and variance of the hazard are always for some particular future subject. This routine uses `object$mean` as the future subject. | | `varhaz` | the variance of the hazard increment. | | `x,y` | copies of the input data. | | `strata` | only present for a stratified Cox model, this is a table giving the number of time points of component `time` that were contributed by each of the strata. | | `riskmat` | a matrix with one row for each observation and one column for each unique event time, containing a 0/1 value to indicate whether that observation was (1) or was not (0) at risk at the given time point. Rows are in the order of the original data (after removal of any missings by `coxph`), or in time order. | ### See Also `<coxph>`, `<residuals.coxph>` ### Examples ``` fit <- coxph(Surv(futime,fustat) ~ age + rx + ecog.ps, ovarian, x=TRUE) fitd <- coxph.detail(fit) # There is one Schoenfeld residual for each unique death. It is a # vector (covariates for the subject who died) - (weighted mean covariate # vector at that time). The weighted mean is defined over the subjects # still at risk, with exp(X beta) as the weight. events <- fit$y[,2]==1 etime <- fit$y[events,1] #the event times --- may have duplicates indx <- match(etime, fitd$time) schoen <- fit$x[events,] - fitd$means[indx,] ``` r None `gbsg` Breast cancer data sets used in Royston and Altman (2013) ----------------------------------------------------------------- ### Description The `gbsg` data set contains patient records from a 1984-1989 trial conducted by the German Breast Cancer Study Group (GBSG) of 720 patients with node positive breast cancer; it retains the 686 patients with complete data for the prognostic variables. ### Usage ``` gbsg data(cancer, package="survival") ``` ### Format A data set with 686 observations and 11 variables. `pid` patient identifier `age` age, years `meno` menopausal status (0= premenopausal, 1= postmenopausal) `size` tumor size, mm `grade` tumor grade `nodes` number of positive lymph nodes `pgr` progesterone receptors (fmol/l) `er` estrogen receptors (fmol/l) `hormon` hormonal therapy, 0= no, 1= yes `rfstime` recurrence free survival time; days to first of recurrence, death or last follow-up `status` 0= alive without recurrence, 1= recurrence or death ### Details These data sets are used in the paper by Royston and Altman. The Rotterdam data is used to create a fitted model, and the GBSG data for validation of the model. The paper gives references for the data source. ### References Patrick Royston and Douglas Altman, External validation of a Cox prognostic model: principles and methods. BMC Medical Research Methodology 2013, 13:33 ### See Also `<rotterdam>` r None `levels.Surv` Return the states of a multi-state Surv object ------------------------------------------------------------- ### Description For a multi-state `Surv` object, this will return the names of the states.
### Usage ``` ## S3 method for class 'Surv' levels(x) ``` ### Arguments | | | | --- | --- | | `x` | a `Surv` object | ### Value for a multi-state `Surv` object, the vector of state names (excluding censoring); or NULL for an ordinary `Surv` object ### Examples ``` y1 <- Surv(c(1,5, 9, 17,21, 30), factor(c(0, 1, 2,1,0,2), 0:2, c("censored", "progression", "death"))) levels(y1) y2 <- Surv(1:6, rep(0:1, 3)) y2 levels(y2) ``` r None `anova.coxph` Analysis of Deviance for a Cox model. ---------------------------------------------------- ### Description Compute an analysis of deviance table for one or more Cox model fits, based on the log partial likelihood. ### Usage ``` ## S3 method for class 'coxph' anova(object, ..., test = 'Chisq') ``` ### Arguments | | | | --- | --- | | `object` | An object of class `coxph` | | `...` | Further `coxph` objects | | `test` | a character string. The appropriate test is a chi-square; all other choices result in no test being done. | ### Details Specifying a single object gives a sequential analysis of deviance table for that fit. That is, the reductions in the model Cox log-partial-likelihood as each term of the formula is added in turn are given as the rows of a table, plus the log-likelihoods themselves. A robust variance estimate is normally used in situations where the model may be mis-specified, e.g., multiple events per subject. In this case a comparison of likelihood values does not make sense (differences no longer have a chi-square distribution), and `anova` will refuse to print results. If more than one object is specified, the table has a row for the degrees of freedom and loglikelihood for each model. For all but the first model, the change in degrees of freedom and loglik is also given. (This only makes statistical sense if the models are nested.) It is conventional to list the models from smallest to largest, but this is up to the user. The table will optionally contain test statistics (and P values) comparing the reduction in loglik for each row. ### Value An object of class `"anova"` inheriting from class `"data.frame"`. ### Warning The comparison between two or more models by `anova` will only be valid if they are fitted to the same dataset. This may be a problem if there are missing values. ### See Also `<coxph>`, `[anova](../../stats/html/anova)`. ### Examples ``` fit <- coxph(Surv(futime, fustat) ~ resid.ds *rx + ecog.ps, data = ovarian) anova(fit) fit2 <- coxph(Surv(futime, fustat) ~ resid.ds +rx + ecog.ps, data=ovarian) anova(fit2,fit) ``` r None `survfit` Create survival curves --------------------------------- ### Description This function creates survival curves from either a formula (e.g. the Kaplan-Meier), a previously fitted Cox model, or a previously fitted accelerated failure time model. ### Usage ``` survfit(formula, ...) ``` ### Arguments | | | | --- | --- | | `formula` | either a formula or a previously fitted model | | `...` | other arguments to the specific method | ### Details A survival curve is based on a tabulation of the number at risk and number of events at each unique death time. When time is a floating point number the definition of "unique" is subject to interpretation. The code uses factor() to define the set. For further details see the documentation for the appropriate method, i.e., `?survfit.formula` or `?survfit.coxph`. A survfit object may contain a single curve, a set of curves, or a matrix of curves.
Predicted curves from a `coxph` model have one row for each stratum in the Cox model fit and one column for each specified covariate set. Curves from a multi-state model have one row for each stratum and a column for each state; the strata correspond to predictors on the right hand side of the equation. The default printing and plotting order for curves is by column, as with other matrices. Curves can be subscripted using either a single or double subscript. If the set of curves is a matrix, as in the above, and one of the dimensions is 1 then the code allows a single subscript to be used. (That is, it is not quite as general as using a single subscript for a numeric matrix.) ### Value An object of class `survfit` containing one or more survival curves. ### Note Older releases of the code also allowed the specification for a single curve to omit the right hand side of the formula, i.e., `survfit(Surv(time, status))`, in which case the formula argument is not actually a formula. Handling this case required some non-standard and fairly fragile manipulations, and this case is no longer supported. ### Author(s) Terry Therneau ### See Also `<survfit.formula>`, `<survfit.coxph>`, `<survfit.object>`, `<print.survfit>`, `<plot.survfit>`, `<quantile.survfit>`, `<residuals.survfit>`, `<summary.survfit>`
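The matrix structure and subscripting rules above are easiest to see on a small stratified fit; a minimal sketch, where the `veteran` data set and the covariate values in `newdata` are chosen only for illustration:

```
# Cox model with 4 strata (celltype); survfit() then returns a 4 x 2
# matrix of curves, one row per stratum, one column per covariate set
vfit <- coxph(Surv(time, status) ~ karno + strata(celltype), data=veteran)
csurv <- survfit(vfit, newdata=data.frame(karno=c(60, 90)))
dim(csurv)        # strata=4, data=2
plot(csurv[, 1])  # all four strata, first covariate set
plot(csurv[2, ])  # second stratum, both covariate sets
```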
r None `pbcseq` Mayo Clinic Primary Biliary Cirrhosis, sequential data ---------------------------------------------------------------- ### Description This data is a continuation of the PBC data set, and contains the follow-up laboratory data for each study patient. An analysis based on the data can be found in Murtaugh et al. The primary PBC data set contains only baseline measurements of the laboratory parameters. This data set contains multiple laboratory results, but only on the 312 randomized patients. Some baseline data values in this file differ from the original PBC file, for instance, the data errors in prothrombin time and age which were discovered after the original analysis (see Fleming and Harrington, figure 4.6.7). One "feature" of the data deserves special comment. The last observation before death or liver transplant often has many more missing covariates than other data rows. The original clinical protocol for these patients specified visits at 6 months, 1 year, and annually thereafter. At these protocol visits lab values were obtained for a large pre-specified battery of tests. "Extra" visits, often undertaken because of worsening medical condition, did not necessarily have all this lab work. The missing values are thus potentially informative. ### Usage ``` pbcseq data(pbc, package="survival") ``` ### Format | | | | --- | --- | | id: | case number | | age: | in years | | sex: | m/f | | trt: | 1/2/NA for D-penicillamine, placebo, not randomised | | time: | number of days between registration and the earlier of death, | | | transplantation, or study analysis in July, 1986 | | status: | status at endpoint, 0/1/2 for censored, transplant, dead | | day: | number of days between enrollment and this visit date | | | all measurements below refer to this date | | albumin: | serum albumin (mg/dl) | | alk.phos: | alkaline phosphatase (U/liter) | | ascites: | presence of ascites | | ast: | aspartate aminotransferase, once called SGOT (U/ml) | | bili: | serum bilirubin (mg/dl) | | chol: | serum cholesterol (mg/dl) | | copper: | urine copper (ug/day) | | edema: | 0 no edema, 0.5 untreated or successfully treated | | | 1 edema despite diuretic therapy | | hepato: | presence of hepatomegaly or enlarged liver | | platelet: | platelet count | | protime: | standardised blood clotting time | | spiders: | blood vessel malformations in the skin | | stage: | histologic stage of disease (needs biopsy) | | trig: | triglycerides (mg/dl) | | | ### Source T Therneau and P Grambsch, "Modeling Survival Data: Extending the Cox Model", Springer-Verlag, New York, 2000. ISBN: 0-387-98784-3. ### References Murtaugh PA, Dickson ER, Van Dam GM, Malinchoc M, Grambsch PM, Langworthy AL, Gips CH. "Primary biliary cirrhosis: prediction of short-term survival based on repeated patient visits." Hepatology. 20(1.1):126-34, 1994. Fleming T and Harrington D., "Counting Processes and Survival Analysis", Wiley, New York, 1991.
### See Also `<pbc>` ### Examples ``` # Create the start-stop-event triplet needed for coxph first <- with(pbcseq, c(TRUE, diff(id) !=0)) #first id for each subject last <- c(first[-1], TRUE) #last id time1 <- with(pbcseq, ifelse(first, 0, day)) time2 <- with(pbcseq, ifelse(last, futime, c(day[-1], 0))) event <- with(pbcseq, ifelse(last, status, 0)) fit1 <- coxph(Surv(time1, time2, event) ~ age + sex + log(bili), pbcseq) ``` r None `residuals.coxph` Calculate Residuals for a ‘coxph’ Fit -------------------------------------------------------- ### Description Calculates martingale, deviance, score or Schoenfeld residuals for a Cox proportional hazards model. ### Usage ``` ## S3 method for class 'coxph' residuals(object, type=c("martingale", "deviance", "score", "schoenfeld", "dfbeta", "dfbetas", "scaledsch","partial"), collapse=FALSE, weighted= (type %in% c("dfbeta", "dfbetas")), ...) ## S3 method for class 'coxphms' residuals(object, type=c("martingale", "score", "schoenfeld", "dfbeta", "dfbetas", "scaledsch"), collapse=FALSE, weighted= FALSE, ...) ## S3 method for class 'coxph.null' residuals(object, type=c("martingale", "deviance","score","schoenfeld"), collapse=FALSE, weighted= FALSE, ...) ``` ### Arguments | | | | --- | --- | | `object` | an object inheriting from class `coxph`, representing a fitted Cox regression model. Typically this is the output from the `coxph` function. | | `type` | character string indicating the type of residual desired. Possible values are `"martingale"`, `"deviance"`, `"score"`, `"schoenfeld"`, `"dfbeta"`, `"dfbetas"`, `"scaledsch"` and `"partial"`. Only enough of the string to determine a unique match is required. | | `collapse` | vector indicating which rows to collapse (sum) over. In time-dependent models more than one row of data can pertain to a single individual. If there were 4 individuals represented by 3, 1, 2 and 4 rows of data respectively, then `collapse=c(1,1,1, 2, 3,3, 4,4,4,4)` could be used to obtain per subject rather than per observation residuals. | | `weighted` | if `TRUE` and the model was fit with case weights, then the weighted residuals are returned. | | `...` | other unused arguments | ### Value For martingale and deviance residuals, the returned object is a vector with one element for each subject (without `collapse`). For score residuals it is a matrix with one row per subject and one column per variable. The row order will match the input data for the original fit. For Schoenfeld residuals, the returned object is a matrix with one row for each event and one column per variable. The rows are ordered by time within strata, and an attribute `strata` is attached that contains the number of observations in each stratum. The scaled Schoenfeld residuals are used in the `cox.zph` function. The score residuals are each individual's contribution to the score vector. Two transformations of this are often more useful: `dfbeta` is the approximate change in the coefficient vector if that observation were dropped, and `dfbetas` is the approximate change in the coefficients, scaled by the standard error for the coefficients. ### NOTE For deviance residuals, the status variable may need to be reconstructed. For score and Schoenfeld residuals, the X matrix will need to be reconstructed. ### References T. Therneau, P. Grambsch, and T. Fleming. "Martingale based residuals for survival models", *Biometrika*, March 1990.
### See Also `<coxph>` ### Examples ``` fit <- coxph(Surv(start, stop, event) ~ (age + surgery)* transplant, data=heart) mresid <- resid(fit, collapse=heart$id) ``` r None `logan` Data from the 1972-78 GSS data used by Logan ----------------------------------------------------- ### Description Intergenerational occupational mobility data with covariates. ### Usage ``` logan data(logan, package="survival") ``` ### Format A data frame with 838 observations on the following 4 variables. occupation subject's occupation, a factor with levels `farm`, `operatives`, `craftsmen`, `sales`, and `professional` focc father's occupation education total years of schooling, 0 to 20 race levels of `non-black` and `black` ### Source General Social Survey data, see the web site for detailed information on the variables. <https://gss.norc.org/>. ### References Logan, John A. (1983). A Multivariate Model for Mobility Tables. American Journal of Sociology 89: 324-349. r None `blogit` Bounded link functions -------------------------------- ### Description Alternate link functions that impose bounds on the input of their link function ### Usage ``` blogit(edge = 0.05) bprobit(edge= 0.05) bcloglog(edge=.05) blog(edge=.05) ``` ### Arguments | | | | --- | --- | | `edge` | input values less than the cutpoint are replaced with the cutpoint. For all but `blog`, input values greater than (1-edge) are replaced with (1-edge) | ### Details When using survival pseudovalues for binomial regression, the raw data can be outside the range (0,1), yet we want to restrict the predicted values to lie within that range. A natural way to deal with this is to use `glm` with `family = gaussian(link= "logit")`. But this will fail. The reason is that the `family` object has a component `linkfun` that does not accept values outside of (0,1). This function is only used to create initial values for the iteration step, however. Mapping the offending input argument into the range of (edge, 1-edge) before computing the link results in starting estimates that are good enough. The final result of the fit will be no different than if explicit starting estimates were given using the `etastart` or `mustart` arguments. These functions create copies of the logit, probit, and complementary log-log families that differ from the standard ones only in this use of a bounded input argument, and are called a "bounded logit" = `blogit`, etc. The same argument holds when using RMST (area under the curve) pseudovalues along with a log link to ensure positive predictions, though in this case only the lower boundary needs to be mapped. ### Value a `family` object of the same form as `make.family`. ### See Also `make.family` in the `stats` package ### Examples ``` py <- pseudo(survfit(Surv(time, status) ~1, lung), time=730) #2 year survival range(py) pfit <- glm(py ~ ph.ecog, data=lung, family=gaussian(link=blogit())) # For each +1 change in performance score, the odds of 2 year survival # are multiplied by 1/2 = exp of the coefficient. ``` r None `rhDNase` rhDNASE data set --------------------------- ### Description Results of a randomized trial of rhDNase for the treatment of cystic fibrosis. ### Usage ``` rhDNase data(rhDNase, package="survival") ``` ### Format A data frame with 767 observations on the following 8 variables.
`id` subject id `inst` enrolling institution `trt` treatment arm: 0=placebo, 1= rhDNase `entry.dt` date of entry into the study `end.dt` date of last follow-up `fev` forced expiratory volume at enrollment, a measure of lung capacity `ivstart` days from enrollment to the start of IV antibiotics `ivstop` days from enrollment to the cessation of IV antibiotics ### Details In patients with cystic fibrosis, extracellular DNA is released by leukocytes that accumulate in the airways in response to chronic bacterial infection. This excess DNA thickens the mucus, which then cannot be cleared from the lung by the cilia. The accumulation leads to exacerbations of respiratory symptoms and progressive deterioration of lung function. At the time of this study more than 90% of cystic fibrosis patients eventually died of lung disease. Deoxyribonuclease I (DNase I) is a human enzyme normally present in the mucus of human lungs that digests extracellular DNA. Genentech, Inc. cloned a highly purified recombinant DNase I (rhDNase or Pulmozyme) which when delivered to the lungs in an aerosolized form cuts extracellular DNA, reducing the viscoelasticity of airway secretions and improving clearance. In 1992 the company conducted a randomized double-blind trial comparing rhDNase to placebo. Patients were then monitored for pulmonary exacerbations, along with measures of lung volume and flow. The primary endpoint was the time until first pulmonary exacerbation; however, data on all exacerbations were collected for 169 days. The definition of an exacerbation was an infection that required the use of intravenous (IV) antibiotics. Subjects had 0–5 such episodes during the trial; those with more than one have multiple rows in the data set, and those with none have NA for the IV start and end times. A few subjects were infected at the time of enrollment; subject 173, for instance, has a first infection interval of -21 to 7. We do not count this first infection as an "event", and the subject first enters the risk set at day 7. Subjects who have an event are not considered to be at risk for another event during the course of antibiotics, nor for an additional 6 days after they end. (If the symptoms reappear immediately after cessation then from a medical standpoint this would not be a new infection.) This data set reproduces the data in Therneau and Grambsch; it does not exactly reproduce those in Therneau and Hamilton due to data set updates. ### References T. M. Therneau and P. M. Grambsch, Modeling Survival Data: Extending the Cox Model, Springer, 2000. T. M. Therneau and S.A. Hamilton, rhDNase as an example of recurrent event analysis, Statistics in Medicine, 16:2029-2047, 1997.
### Examples ``` # Build the start-stop data set for analysis, and # replicate line 2 of table 8.13 first <- subset(rhDNase, !duplicated(id)) #first row for each subject dnase <- tmerge(first, first, id=id, tstop=as.numeric(end.dt -entry.dt)) # Subjects whose fu ended during the 6 day window are the reason for # this next line temp.end <- with(rhDNase, pmin(ivstop+6, end.dt-entry.dt)) dnase <- tmerge(dnase, rhDNase, id=id, infect=event(ivstart), end= event(temp.end)) # toss out the non-at-risk intervals, and extra variables # 3 subjects had an event on their last day of fu, infect=1 and end=1 dnase <- subset(dnase, (infect==1 | end==0), c(id:trt, fev:infect)) agfit <- coxph(Surv(tstart, tstop, infect) ~ trt + fev, cluster=id, data=dnase) ``` r None `aeqSurv` Adjudicate near ties in a Surv object ------------------------------------------------ ### Description The check for tied survival times can fail due to floating point imprecision, which can make actual ties appear to be distinct values. Routines that depend on correct identification of tied pairs will then give incorrect results, e.g., a Cox model. This function rectifies these. ### Usage ``` aeqSurv(x, tolerance = sqrt(.Machine$double.eps)) ``` ### Arguments | | | | --- | --- | | `x` | a Surv object | | `tolerance` | the tolerance used to detect values that will be considered equal | ### Details This routine is called by both `survfit` and `coxph` to deal with the issue of ties that get incorrectly broken due to floating point imprecision. See the short vignette on tied times for a simple example. Use the `timefix` argument of `survfit` or `coxph.control` to control the option if desired. The rule for ‘equality’ is identical to that used by the `all.equal` routine. Pairs of values that are within round off error of each other are replaced by the smaller value. An error message is generated if this process causes a 0 length time interval to be created. ### Value a Surv object identical to the original, but with ties restored. ### Author(s) Terry Therneau ### See Also `<survfit>`, `<coxph.control>` r None `retinopathy` Diabetic Retinopathy ----------------------------------- ### Description A trial of laser coagulation as a treatment to delay diabetic retinopathy. ### Usage ``` retinopathy data(retinopathy, package="survival") ``` ### Format A data frame with 394 observations on the following 9 variables. `id` numeric subject id `laser` type of laser used: `xenon` `argon` `eye` which eye was treated: `right` `left` `age` age at diagnosis of diabetes `type` type of diabetes: `juvenile` (diagnosis before age 20) or `adult` `trt` 0 = control eye, 1 = treated eye `futime` time to loss of vision or last follow-up `status` 0 = censored, 1 = loss of vision in this eye `risk` a risk score for the eye. The high risk subset is defined as a score of 6 or greater in at least one eye. ### Details The 197 patients in this dataset were a 50% random sample of the patients with "high-risk" diabetic retinopathy as defined by the Diabetic Retinopathy Study (DRS). Each patient had one eye randomized to laser treatment and the other eye received no treatment; each patient thus has two observations in the data set. For each eye, the event of interest was the time from initiation of treatment to the time when visual acuity dropped below 5/200 two visits in a row. Thus there is a built-in lag time of approximately 6 months (visits were every 3 months).
Survival times in this dataset are the actual time to vision loss in months, minus the minimum possible time to event (6.5 months). Censoring was caused by death, dropout, or end of the study. ### References W. J. Huster, R. Brookmeyer and S. G. Self (1989). Modelling paired survival data with covariates, Biometrics 45:145-156. A. L. Blair, D. R. Hadden, J. A. Weaver, D. B. Archer, P. B. Johnston and C. J. Maguire (1976). The 5-year prognosis for vision in diabetes, American Journal of Ophthalmology, 81:383-396. ### Examples ``` coxph(Surv(futime, status) ~ type + trt, cluster= id, retinopathy) ``` r None `xtfrm.Surv` Sorting order for Surv objects -------------------------------------------- ### Description Sort survival objects into a partial order, which is the same one used internally for many of the calculations. ### Usage ``` ## S3 method for class 'Surv' xtfrm(x) ``` ### Arguments | | | | --- | --- | | `x` | a `Surv` object | ### Details This creates a partial ordering of survival objects. The result is sorted in time order; for tied pairs of times, right censored events come after observed events (censor after death), and left censored events are sorted before observed events. For counting process data `(tstart, tstop, status)` the ordering is by stop time, status, and start time, again with censoring last. Interval censored data is sorted using the midpoint of each interval. The `xtfrm` routine is used internally by `order` and `sort`, so these results carry over to those routines. ### Value a vector of integers which will have the same sort order as `x`. ### Author(s) Terry Therneau ### See Also `[sort](../../base/html/sort)`, `[order](../../base/html/order)` ### Examples ``` test <- c(Surv(c(10, 9,9, 8,8,8,7,5,5,4), rep(1:0, 5)), Surv(6.2, NA)) test sort(test) ``` r None `logLik.coxph` logLik method for a Cox model --------------------------------------------- ### Description The logLik function for survival models ### Usage ``` ## S3 method for class 'coxph' logLik(object, ...) ## S3 method for class 'survreg' logLik(object, ...) ``` ### Arguments | | | | --- | --- | | `object` | the result of a `coxph` or `survreg` fit | | `...` | optional arguments for other instances of the method | ### Details The logLik function is used by summary functions in R such as `AIC`. For a Cox model, this method returns the partial likelihood. The number of degrees of freedom (df) used by the fit and the effective number of observations (nobs) are added as attributes. Per Raftery and others, the effective number of observations is taken to be the number of events in the data set. For a `survreg` model the proper value for the effective number of observations is still an open question (at least to this author). For right censored data the approach of `logLik.coxph` is possibly the most sensible, but for interval censored observations the result is unclear. The code currently does not add a *nobs* attribute. ### Value an object of class `logLik` ### Author(s) Terry Therneau ### References Robert E. Kass and Adrian E. Raftery (1995). "Bayes Factors". J. American Statistical Assoc. 90 (430): 791. Raftery A.E. (1995), "Bayesian Model Selection in Social Research", Sociological methodology, 111-196. ### See Also `[logLik](../../stats/html/loglik)` r None `uspop2` Projected US Population --------------------------------- ### Description US population by age and sex, for 2000 through 2020 ### Format The data is an array with dimensions age, sex, and calendar year.
Age goes from 0 through 100, where the value for age 100 is the total for all ages of 100 or greater. ### Details This data is often used as a "standardized" population for epidemiology studies. ### Source NP2008_D1: Projected Population by Single Year of Age, Sex, Race, and Hispanic Origin for the United States: July 1, 2000 to July 1, 2050, www.census.gov/population/projections. ### See Also `[uspop](../../datasets/html/uspop)` ### Examples ``` us50 <- uspop2[51:101,, "2000"] #US 2000 population, 50 and over age <- as.integer(dimnames(us50)[[1]]) smat <- model.matrix( ~ factor(floor(age/5)) -1) ustot <- t(smat) %*% us50 #totals by 5 year age groups temp <- c(50,55, 60, 65, 70, 75, 80, 85, 90, 95) dimnames(ustot) <- list(c(paste(temp, temp+4, sep="-"), "100+"), c("male", "female")) ```
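As a complement to the example above, a brief sketch of the "standardized population" use mentioned in the Details; the choice of 2010 as the reference year is arbitrary:

```
# fraction of the projected 2010 population at each single year of age,
# summed over sex; these fractions serve as the weights in a direct
# standardization of rates
ustot <- apply(uspop2[,, "2010"], 1, sum)
wt <- ustot/sum(ustot)
round(head(wt), 4)
```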
r None `concordance` Compute the concordance statistic for data or a model -------------------------------------------------------------------- ### Description The concordance statistic computes the agreement between an observed response and a predictor. It is closely related to Kendall's tau-a and tau-b, Goodman's gamma, and Somers' d, all of which can also be calculated from the results of this function. ### Usage ``` concordance(object, ...) ## S3 method for class 'formula' concordance(object, data, weights, subset, na.action, cluster, ymin, ymax, timewt= c("n", "S", "S/G", "n/G", "n/G2", "I"), influence=0, ranks = FALSE, reverse=FALSE, timefix=TRUE, keepstrata=10, ...) ## S3 method for class 'lm' concordance(object, ..., newdata, cluster, ymin, ymax, influence=0, ranks=FALSE, timefix=TRUE, keepstrata=10) ## S3 method for class 'coxph' concordance(object, ..., newdata, cluster, ymin, ymax, timewt= c("n", "S", "S/G", "n/G", "n/G2", "I"), influence=0, ranks=FALSE, timefix=FALSE, keepstrata=10) ## S3 method for class 'survreg' concordance(object, ..., newdata, cluster, ymin, ymax, timewt= c("n", "S", "S/G", "n/G", "n/G2", "I"), influence=0, ranks=FALSE, timefix=FALSE, keepstrata=10) ``` ### Arguments | | | | --- | --- | | `object` | a fitted model or a formula. The formula should be of the form `y ~x` or `y ~ x + strata(z)` with a single numeric or survival response and a single predictor. Counts of concordant, discordant and tied pairs are computed separately per stratum, and then added. | | `data` | a data.frame in which to interpret the variables named in the `formula`, or in the `subset` and the `weights` argument. Only applicable if `object` is a formula. | | `weights` | optional vector of case weights. Only applicable if `object` is a formula. | | `subset` | expression indicating which subset of the rows of data should be used in the fit. Only applicable if `object` is a formula. | | `na.action` | a missing-data filter function. This is applied to the model.frame after any subset argument has been used. Default is `options()$na.action`. Only applicable if `object` is a formula. | | `...` | multiple fitted models are allowed. Only applicable if `object` is a model object. | | `newdata` | optional, a new data frame in which to evaluate (but not refit) the models | | `cluster` | optional grouping vector for calculating the robust variance | | `ymin, ymax` | compute the concordance over the restricted range ymin <= y <= ymax. (For survival data this is a time range.) | | `timewt` | the weighting to be applied. The overall statistic is a weighted mean over event times. | | `influence` | 1= return the dfbeta vector, 2= return the full influence matrix, 3 = return both | | `ranks` | if TRUE, return a data frame containing the individual ranks that make up the overall score. | | `reverse` | if TRUE then assume that larger `x` values predict smaller response values `y`; a proportional hazards model is the common example of this. | | `timefix` | if the response is a Surv object, correct for possible rounding error; otherwise this argument has no effect. See the vignette on tied times for more explanation. For the coxph and survreg methods this issue will have already been addressed in the parent routine, so should not be revisited. | | `keepstrata` | either TRUE, FALSE, or an integer value. Computations are always done within stratum, then added. If the total number of strata is greater than `keepstrata`, or `keepstrata=FALSE`, those subtotals are not kept in the output.
| ### Details At each event time, compute the rank of the subject who had the event as compared to all others with a longer survival, where the rank is a value between 0 and 1. The concordance is a weighted mean of these values, determined by the `timewt` option. For uncensored data each unique response value is compared to all those which are larger. Using the default value for `timewt` gives the area under the receiver operating curve (AUC) for a binary response, and (d+1)/2 when y is continuous, where d is Somers' d. For a survival time, `timewt` of n gives Harrell's c-statistic, which is closely related to the Gehan-Wilcoxon test; S corresponds to the Peto-Wilcoxon, n/G2 is the weighting advocated by Uno, and S/G the weighting proposed by Schemper. When the number of strata is very large, such as in a conditional logistic regression (`clogit` function), a much faster computation is available when the individual strata results are not retained. In the more general case the `keepstrata = 10` default simply keeps the printout manageable. ### Value An object of class `concordance` containing the following components: | | | | --- | --- | | `concordance` | the estimated concordance value or values | | `count` | a vector containing the number of concordant pairs, discordant, tied on x but not y, tied on y but not x, and tied on both x and y | | `n` | the number of observations | | `var` | a vector containing the estimated variance of the concordance based on the infinitesimal jackknife (IJ) method. If there are multiple models it contains the estimated variance/covariance matrix. | | `cvar` | a vector containing the estimated variance(s) of the concordance values, based on the variance formula for the associated score test from a proportional hazards model. (This was the primary variance used in the `survConcordance` function.) | | `dfbeta` | optional, the vector of leverage estimates for the concordance | | `influence` | optional, the matrix of leverage values for each of the counts, one row per observation | | `ranks` | optional, a data frame containing the Somers' d rank at each event time, along with the time weight, case weight of the observation with an event, and variance (contribution to the proportional hazards model information matrix). A weighted mean of the ranks equals Somers' d. | ### Author(s) Terry Therneau ### See Also `<coxph>` ### Examples ``` fit1 <- coxph(Surv(ptime, pstat) ~ age + sex + mspike, mgus2) concordance(fit1, timewt="n") # logistic regression fit2 <- glm(pstat ~ age + sex + mspike, binomial, data= mgus2) concordance(fit2) # equal to the AUC ``` r None `udca` Data from a trial of ursodeoxycholic acid ------------------------------------------------- ### Description Data from a trial of ursodeoxycholic acid (UDCA) in patients with primary biliary cirrhosis (PBC). ### Usage ``` udca udca2 data(udca, package="survival") ``` ### Format A data frame with 170 observations on the following 15 variables.
`id` subject identifier `trt` treatment: 0=placebo, 1=UDCA `entry.dt` date of entry into the study `last.dt` date of last on-study visit `stage` stage of disease `bili` bilirubin value at entry `riskscore` the Mayo PBC risk score at entry `death.dt` date of death `tx.dt` date of liver transplant `hprogress.dt` date of histologic progression `varices.dt` appearance of esophageal varices `ascites.dt` appearance of ascites `enceph.dt` appearance of encephalopathy `double.dt` doubling of initial bilirubin `worsen.dt` worsening of symptoms by two stages ### Details These data sets are used in Therneau and Grambsch. The `udca1` data set contains the baseline variables along with the time until the first endpoint (any of death, transplant, ..., worsening). The `udca2` data set treats all of the endpoints as parallel events and has a stratum for each. ### References T. M. Therneau and P. M. Grambsch, Modeling survival data: extending the Cox model. Springer, 2000. K. D. Lindor, E. R. Dickson, W. P Baldus, R.A. Jorgensen, J. Ludwig, P. A. Murtaugh, J. M. Harrison, R. H. Weisner, M. L. Anderson, S. M. Lange, G. LeSage, S. S. Rossi and A. F. Hofman. Ursodeoxycholic acid in the treatment of primary biliary cirrhosis. Gastroenterology, 106:1284-1290, 1994. ### Examples ``` # values found in table 8.3 of the book fit1 <- coxph(Surv(futime, status) ~ trt + log(bili) + stage, cluster =id , data=udca1) fit2 <- coxph(Surv(futime, status) ~ trt + log(bili) + stage + strata(endpoint), cluster=id, data=udca2) ``` r None `model.frame.coxph` Model.frame method for coxph objects --------------------------------------------------------- ### Description Recreate the model frame of a coxph fit. ### Usage ``` ## S3 method for class 'coxph' model.frame(formula, ...) ``` ### Arguments | | | | --- | --- | | `formula` | the result of a `coxph` fit | | `...` | other arguments to `model.frame` | ### Details For details, see the manual page for the generic function. This function would rarely be called by a user; it is mostly used inside functions like `residuals` that need to recreate the data set from a model in order to do further calculations. ### Value the model frame used in the original fit, or a parallel one for new data. ### Author(s) Terry Therneau ### See Also `[model.frame](../../stats/html/model.frame)` r None `tobin` Tobin's Tobit data --------------------------- ### Description Economists fit a parametric censored data model called the ‘tobit’. These data are from Tobin's original paper. ### Usage ``` tobin data(tobin, package="survival") ``` ### Format A data frame with 20 observations on the following 3 variables. durable Durable goods purchase age Age in years quant Liquidity ratio (x 1000) ### Source J Tobin (1958), Estimation of relationships for limited dependent variables. *Econometrica* **26**, 24–36. ### Examples ``` tfit <- survreg(Surv(durable, durable>0, type='left') ~age + quant, data=tobin, dist='gaussian') predict(tfit,type="response") ``` r None `print.survfit` Print a Short Summary of a Survival Curve ---------------------------------------------------------- ### Description Print number of observations, number of events, the restricted mean survival and its standard error, and the median survival with confidence limits for the median. ### Usage ``` ## S3 method for class 'survfit' print(x, scale=1, digits = max(options()$digits - 4,3), print.rmean=getOption("survfit.print.rmean"), rmean = getOption('survfit.rmean'),...)
``` ### Arguments | | | | --- | --- | | `x` | the result of a call to the `survfit` function. | | `scale` | a numeric value to rescale the survival time, e.g., if the input data to survfit were in days, `scale=365` would scale the printout to years. | | `digits` | Number of digits to print | | `print.rmean,rmean` | Options for computation and display of the restricted mean. | | `...` | for future results | ### Details The mean and its variance are based on a truncated estimator. That is, if the last observation(s) is not a death, then the survival curve estimate does not go to zero and the mean is undefined. There are four possible approaches to resolve this, which are selected by the `rmean` option. The first is to set the upper limit to a constant, e.g., `rmean=365`. In this case the reported mean would be the expected number of days, out of the first 365, that would be experienced by each group. This is useful if interest focuses on a fixed period. Other options are `"none"` (no estimate), `"common"` and `"individual"`. The `"common"` option uses the maximum time for all curves in the object as a common upper limit for the auc calculation. For the `"individual"` option the mean is computed as the area under each curve, over the range from 0 to the maximum observed time for that curve. Since the end point is random, values for different curves are not comparable and the printed standard errors are an underestimate as they do not take into account this random variation. This option is provided mainly for backwards compatibility, as this estimate was the default (only) one in earlier releases of the code. Note that SAS (as of version 9.3) uses the integral up to the last *event* time of each individual curve; we consider this the worst of the choices and do not provide an option for that calculation. The median and its confidence interval are defined by drawing a horizontal line at 0.5 on the plot of the survival curve and its confidence bands. The intersection of the line with the lower CI band defines the lower limit for the median's interval, and similarly for the upper band. If any of the intersections is not a point, then we use the center of the intersection interval, e.g., if the survival curve were exactly equal to 0.5 over an interval. When data is uncensored this agrees with the usual definition of a median. ### Value x, with the invisible flag set to prevent printing. (The default for all print functions in R is to return the object passed to them; print.survfit complies with this pattern. If you want to capture these printed results for further processing, see the `table` component of `summary.survfit`.) ### Side Effects The number of observations, the number of events, the median survival with its confidence interval, and optionally the restricted mean survival (`rmean`) and its standard error, are printed. If there are multiple curves, there is one line of output for each. ### References Miller, Rupert G., Jr. (1981). *Survival Analysis.* New York:Wiley, p 71. ### See Also `<summary.survfit>`, `<quantile.survfit>` r None `diabetic` Diabetic retinopathy --------------------------------- ### Description Partial results from a trial of laser coagulation for the treatment of diabetic retinopathy. ### Usage ``` diabetic data(diabetic, package="survival") ``` ### Format A data frame with 394 observations on the following 8 variables.
`id` subject id `laser` laser type: `xenon` or `argon` `age` age at diagnosis `eye` a factor with levels of `left` `right` `trt` treatment: 0 = no treatment, 1= laser `risk` risk group of 6-12 `time` time to event or last follow-up `status` status of 0= censored or 1 = visual loss ### Details The 197 patients in this dataset were a 50% random sample of the patients with "high-risk" diabetic retinopathy as defined by the Diabetic Retinopathy Study (DRS). Each patient had one eye randomized to laser treatment and the other eye received no treatment. For each eye, the event of interest was the time from initiation of treatment to the time when visual acuity dropped below 5/200 two visits in a row. Thus there is a built-in lag time of approximately 6 months (visits were every 3 months). Survival times in this dataset are therefore the actual time to blindness in months, minus the minimum possible time to event (6.5 months). Censoring was caused by death, dropout, or end of the study. ### References Huster, Brookmeyer and Self, Biometrics, 1989. American Journal of Ophthalmology, 1976, 81:4, pp 383-396 ### Examples ``` # juvenile diabetes is defined as an age less than 20 juvenile <- 1*(diabetic$age < 20) coxph(Surv(time, status) ~ trt + juvenile, cluster= id, data= diabetic) ``` r None `plot.cox.zph` Graphical Test of Proportional Hazards ------------------------------------------------------ ### Description Displays a graph of the scaled Schoenfeld residuals, along with a smooth curve. ### Usage ``` ## S3 method for class 'cox.zph' plot(x, resid=TRUE, se=TRUE, df=4, nsmo=40, var, xlab="Time", ylab, lty=1:2, col=1, lwd=1, hr=FALSE, ...) ``` ### Arguments | | | | --- | --- | | `x` | result of the `cox.zph` function. | | `resid` | a logical value, if `TRUE` the residuals are included on the plot, as well as the smooth fit. | | `se` | a logical value, if `TRUE`, confidence bands at two standard errors will be added. | | `df` | the degrees of freedom for the fitted natural spline, `df=2` leads to a linear fit. | | `nsmo` | number of points to use for the lines | | `var` | the set of variables for which plots are desired. By default, plots are produced in turn for each variable of a model. Selection of a single variable allows other features to be added to the plot, e.g., a horizontal line at zero or a main title. This has been superseded by a subscripting method; see the example below. | | `hr` | if TRUE, label the y-axis using the estimated hazard ratio rather than the estimated coefficient. (The plot does not change, only the axis label.) | | `xlab` | label for the x-axis of the plot | | `ylab` | optional label for the y-axis of the plot. If missing a default label is provided. This can be a vector of labels. | | `lty, col, lwd` | line type, color, and line width for the overlaid curve. Each of these can be vector of length 2, in which case the second element is used for the confidence interval. | | `...` | additional graphical arguments passed to the `plot` function. | ### Side Effects a plot is produced on the current graphics device. ### See Also `<coxph>`, `<cox.zph>`.
### Examples ``` vfit <- coxph(Surv(time,status) ~ trt + factor(celltype) + karno + age, data=veteran, x=TRUE) temp <- cox.zph(vfit) plot(temp, var=3) # Look at Karnofsky score, old way of doing plot plot(temp[3]) # New way with subscripting abline(0, 0, lty=3) # Add the linear fit as well abline(lm(temp$y[,3] ~ temp$x)$coefficients, lty=4, col=3) title(main="VA Lung Study") ``` r None `model.matrix.coxph` Model.matrix method for coxph models ---------------------------------------------------------- ### Description Reconstruct the model matrix for a cox model. ### Usage ``` ## S3 method for class 'coxph' model.matrix(object, data=NULL, contrast.arg = object$contrasts, ...) ``` ### Arguments | | | | --- | --- | | `object` | the result of a `coxph` model | | `data` | optional, a data frame from which to obtain the data | | `contrast.arg` | optional, a contrasts object describing how factors should be coded | | `...` | other possible argument to `model.frame` | ### Details When there is a `data` argument this function differs from most of the other `model.matrix` methods in that the response variable for the original formula is *not* required to be in the data. If the data frame contains a `terms` attribute then it is assumed to be the result of a call to `model.frame`, otherwise a call to `model.frame` is applied with the data as an argument. ### Value The model matrix for the fit ### Author(s) Terry Therneau ### See Also `[model.matrix](../../stats/html/model.matrix)` ### Examples ``` fit1 <- coxph(Surv(time, status) ~ age + factor(ph.ecog), data=lung) xfit <- model.matrix(fit1) fit2 <- coxph(Surv(time, status) ~ age + factor(ph.ecog), data=lung, x=TRUE) all.equal(model.matrix(fit1), fit2$x) ``` r None `ridge` Ridge regression ------------------------- ### Description When used in a <coxph> or <survreg> model formula, specifies a ridge regression term. The likelihood is penalised by `theta`/2 times the sum of squared coefficients. If `scale=T` the penalty is calculated for coefficients based on rescaling the predictors to have unit variance. If `df` is specified then `theta` is chosen based on an approximate degrees of freedom. ### Usage ``` ridge(..., theta, df=nvar/2, eps=0.1, scale=TRUE) ``` ### Arguments | | | | --- | --- | | `...` | predictors to be ridged | | `theta` | penalty is `theta`/2 times the sum of squared coefficients | | `df` | Approximate degrees of freedom | | `eps` | Accuracy required for `df` | | `scale` | Scale variables before applying penalty? | ### Value An object of class `coxph.penalty` containing the data and control functions. ### Note If the expression `ridge(x1, x2, x3, ...)` is too long then the internal terms() function will add newlines to the variable name and then the coxph routine simply gets lost. (Some labels will have the newline and some won't.) One solution is to bundle all of the variables into a single matrix and use that matrix as the argument to `ridge` so as to shorten the call, e.g. `mydata$many <- as.matrix(mydata[,5:53])`.
### References Gray (1992) "Flexible methods of analysing survival data using splines, with applications to breast cancer prognosis" JASA 87:942–951 ### See Also `<coxph>`,`<survreg>`,`<pspline>`,`<frailty>` ### Examples ``` coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta=1), ovarian) lfit0 <- survreg(Surv(time, status) ~1, lung) lfit1 <- survreg(Surv(time, status) ~ age + ridge(ph.ecog, theta=5), lung) lfit2 <- survreg(Surv(time, status) ~ sex + ridge(age, ph.ecog, theta=1), lung) lfit3 <- survreg(Surv(time, status) ~ sex + age + ph.ecog, lung) ```
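A small sketch of the work-around described in the Note, using the `lung` data set for illustration (the particular columns bundled together are arbitrary):

```
# bundle several predictors into a single matrix column so that the
# ridge() term in the formula stays short
lung2 <- lung
lung2$zz <- as.matrix(lung2[, c("age", "ph.ecog", "ph.karno", "wt.loss")])
rfit <- coxph(Surv(time, status) ~ sex + ridge(zz, theta=1), data=lung2)
```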
r None `attrassign` Create new-style "assign" attribute ------------------------------------------------- ### Description The `"assign"` attribute on model matrices describes which columns come from which terms in the model formula. It has two versions. R uses the original version, but the alternate version found in S-plus is sometimes useful. ### Usage ``` ## Default S3 method: attrassign(object, tt,...) ## S3 method for class 'lm' attrassign(object,...) ``` ### Arguments | | | | --- | --- | | `object` | model matrix or linear model object | | `tt` | terms object | | `...` | ignored | ### Details For instance consider the following ``` survreg(Surv(time, status) ~ age + sex + factor(ph.ecog), lung) ``` R gives the compact form of assign, a vector (0, 1, 2, 3, 3, 3), which can be read as “the first column of the X matrix (intercept) goes with none of the terms, the second column of X goes with term 1 of the model equation, the third column of X with term 2, and columns 4-6 with term 3”. The alternate (S-Plus default) form is a list ``` $(Intercept) 1 $age 2 $sex 3 $factor(ph.ecog) 4 5 6 ``` ### Value A list with names corresponding to the term names and elements that are vectors indicating which columns come from which terms ### See Also `[terms](../../stats/html/terms)`,`[model.matrix](../../stats/html/model.matrix)` ### Examples ``` formula <- Surv(time,status)~factor(ph.ecog) tt <- terms(formula) mf <- model.frame(tt,data=lung) mm <- model.matrix(tt,mf) ## a few rows of data mm[1:3,] ## old-style assign attribute attr(mm,"assign") ## alternate style assign attribute attrassign(mm,tt) ``` r None `survfit.formula` Compute a Survival Curve for Censored Data ------------------------------------------------------------- ### Description Computes an estimate of a survival curve for censored data using the Aalen-Johansen estimator. For ordinary (single event) survival this reduces to the Kaplan-Meier estimate. ### Usage ``` ## S3 method for class 'formula' survfit(formula, data, weights, subset, na.action, stype=1, ctype=1, id, cluster, robust, istate, timefix=TRUE, etype, error, ...) ``` ### Arguments | | | | --- | --- | | `formula` | a formula object, which must have a `Surv` object as the response on the left of the `~` operator and, if desired, terms separated by + operators on the right. One of the terms may be a `strata` object. For a single survival curve the right hand side should be `~ 1`. | | `data` | a data frame in which to interpret the variables named in the formula, `subset` and `weights` arguments. | | `weights` | The weights must be nonnegative and it is strongly recommended that they be strictly positive, since zero weights are ambiguous, compared to use of the `subset` argument. | | `subset` | expression saying that only a subset of the rows of the data should be used in the fit. | | `na.action` | a missing-data filter function, applied to the model frame, after any `subset` argument has been used. Default is `options()$na.action`. | | `stype` | the method to be used for estimation of the survival curve: 1 = direct, 2 = exp(cumulative hazard). | | `ctype` | the method to be used for estimation of the cumulative hazard: 1 = Nelson-Aalen formula, 2 = Fleming-Harrington correction for tied events. | | `id` | identifies individual subjects, when a given person can have multiple lines of data. | | `cluster` | used to group observations for the infinitesimal jackknife variance estimate, defaults to the value of id. | | `robust` | logical, should the function compute a robust variance.
For multi-state survival curves this is true by default. For single state data see details, below. | | `istate` | for multi-state models, identifies the initial state of each subject or observation | | `timefix` | process times through the `aeqSurv` function to eliminate potential roundoff issues. | | `etype` | a variable giving the type of event. This has been superseded by multi-state Surv objects and is deprecated; see example below. | | `error` | this argument is no longer used | | `...` | The following additional arguments are passed to internal functions called by `survfit`. se.fit logical value, default is TRUE. If FALSE then standard error computations are omitted. conf.type One of `"none"`, `"plain"`, `"log"` (the default), `"log-log"`, `"logit"` or `"arcsin"`. Only enough of the string to uniquely identify it is necessary. The first option causes confidence intervals not to be generated. The second causes the standard intervals `curve +- k *se(curve)`, where k is determined from `conf.int`. The log option calculates intervals based on the cumulative hazard or log(survival). The log-log option bases the intervals on the log hazard or log(-log(survival)), the logit option on log(survival/(1-survival)) and arcsin on arcsin(survival). conf.lower a character string to specify modified lower limits to the curve; the upper limit remains unchanged. Possible values are `"usual"` (unmodified), `"peto"`, and `"modified"`. The modified lower limit is based on an "effective n" argument. The confidence bands will agree with the usual calculation at each death time, but unlike the usual bands the confidence interval becomes wider at each censored observation. The extra width is obtained by multiplying the usual variance by a factor m/n, where n is the number currently at risk and m is the number at risk at the last death time. (The bands thus agree with the un-modified bands at each death time.) This is especially useful for survival curves with a long flat tail. The Peto lower limit is based on the same "effective n" argument as the modified limit, but also replaces the usual Greenwood variance term with a simple approximation. It is known to be conservative. start.time numeric value specifying a time to start calculating survival information. The resulting curve is the survival conditional on surviving to `start.time`. conf.int the level for a two-sided confidence interval on the survival curve(s). Default is 0.95. influence a logical value indicating whether to return the infinitesimal jackknife (influence) values for each subject. These contain the values of the derivative of each value with respect to the case weights of each subject i: *dp/dw[i]*, evaluated at the vector of weights. The resulting object will contain `influence.surv` and `influence.chaz` components. Alternatively, options of `influence=1` or `influence=2` will return values for only the survival or hazard curves, respectively. p0 this applies only to multi-state curves. An optional vector giving the initial probability across the states. If this is missing, then p0 is estimated using the frequency of the starting states of all observations at risk at `start.time`, or if that is not specified, at the time of the first event. type an older argument that combined `stype` and `ctype`, now deprecated.
Legal values were "kaplan-meier" which is equivalent to `stype=1, ctype=1`, "fleming-harrington" which is equivalent to `stype=2, ctype=1`, and "fh2" which is equivalent to `stype=2, ctype=2`. | ### Details If there is a `data` argument, then variables in the `formula`, `weights`, `subset`, `id`, `cluster` and `istate` arguments will be searched for in that data set. The routine returns both an estimated probability in state and an estimated cumulative hazard. The cumulative hazard estimate is the Nelson-Aalen (NA) estimate or the Fleming-Harrington (FH) estimate; the latter includes a correction for tied event times. The estimated probability in state can be estimated either using the exponential of the cumulative hazard, or as a direct estimate using the Aalen-Johansen approach. For single state data the AJ estimate reduces to the Kaplan-Meier and the probability in state to the survival curve; for competing risks data the AJ reduces to the cumulative incidence (CI) estimator. For backward compatibility the `type` argument can be used instead. When the data set includes left censored or interval censored data (or both), then the EM approach of Turnbull is used to compute the overall curve. Currently this algorithm is very slow, only a survival curve is produced, and it does not support a robust variance. Robust variance: If `robust` is TRUE, or for multi-state curves, then the standard errors of the results will be based on an infinitesimal jackknife (IJ) estimate, otherwise the standard model based estimate will be used. For single state curves, the default for `robust` will be TRUE if one of: there is a `cluster` argument, there are non-integer weights, or there is an `id` statement and at least one of the id values has multiple events, and FALSE otherwise. The default represents our best guess about when one would most often desire a robust variance. When there are non-integer case weights and (time1, time2) survival data the routine is at an impasse: a robust variance likely is called for, but requires either `id` or `cluster` information to be done correctly; it will default to robust=FALSE. With the IJ estimate, the leverage values themselves can be returned as arrays with dimensions: number of subjects, number of unique times, and for a multi-state model, the number of unique states. Be forewarned that these arrays can be huge. If there is a `cluster` argument this first dimension will be the number of clusters and the variance will be a grouped IJ estimate; this can be an important tool for reducing the size. A numeric value for the `influence` argument allows finer control: 0= return neither (same as FALSE), 1= return the influence array for probability in state, 2= return the influence array for the cumulative hazard, 3= both (same as TRUE). ### Value an object of class `"survfit"`. See `survfit.object` for details. Methods defined for survfit objects are `print`, `plot`, `lines`, and `points`. ### References Dorey, F. J. and Korn, E. L. (1987). Effective sample sizes for confidence intervals for survival probabilities. *Statistics in Medicine* **6**, 679-87. Fleming, T. H. and Harrington, D. P. (1984). Nonparametric estimation of the survival distribution in censored data. *Comm. in Statistics* **13**, 2469-86. Kalbfleisch, J. D. and Prentice, R. L. (1980). *The Statistical Analysis of Failure Time Data.* New York:Wiley. Kyle, R. A. (1997). Monoclonal gammopathy of undetermined significance and solitary plasmacytoma.
Implications for progression to overt multiple myeloma, *Hematology/Oncology Clinics N. Amer.* **11**, 71-87. Link, C. L. (1984). Confidence intervals for the survival function using Cox's proportional hazards model with covariates. *Biometrics* **40**, 601-610. Turnbull, B. W. (1974). Nonparametric estimation of a survivorship function with doubly censored data. *J Am Stat Assoc*, **69**, 169-173. ### See Also `<survfit.coxph>` for survival curves from Cox models, `<survfit.object>` for a description of the components of a survfit object, `<print.survfit>`, `<plot.survfit>`, `<lines.survfit>`, `<coxph>`, `[Surv](surv)`. ### Examples ``` #fit a Kaplan-Meier and plot it fit <- survfit(Surv(time, status) ~ x, data = aml) plot(fit, lty = 2:3) legend(100, .8, c("Maintained", "Nonmaintained"), lty = 2:3) #fit a Cox proportional hazards model and plot the #predicted survival for a 60 year old fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian) plot(survfit(fit, newdata=data.frame(age=60)), xscale=365.25, xlab = "Years", ylab="Survival") # Here is the data set from Turnbull # There are no interval censored subjects, only left-censored (status=2), # right-censored (status 0) and observed events (status 1) # # Time # 1 2 3 4 # Type of observation # death 12 6 2 3 # losses 3 2 0 3 # late entry 2 4 2 5 # tdata <- data.frame(time =c(1,1,1,2,2,2,3,3,3,4,4,4), status=rep(c(1,0,2),4), n =c(12,3,2,6,2,4,2,0,2,3,3,5)) fit <- survfit(Surv(time, time, status, type='interval') ~1, data=tdata, weight=n) # # Three curves for patients with monoclonal gammopathy. # 1. KM of time to PCM, ignoring death (statistically incorrect) # 2. Competing risk curves (also known as "cumulative incidence") # 3. Multi-state, showing Pr(in each state, at time t) # fitKM <- survfit(Surv(stop, event=='pcm') ~1, data=mgus1, subset=(start==0)) fitCR <- survfit(Surv(stop, event) ~1, data=mgus1, subset=(start==0)) fitMS <- survfit(Surv(start, stop, event) ~ 1, id=id, data=mgus1) ## Not run: # CR curves show the competing risks plot(fitCR, xscale=365.25, xmax=7300, mark.time=FALSE, col=2:3, xlab="Years post diagnosis of MGUS", ylab="P(state)") lines(fitKM, fun='event', xmax=7300, mark.time=FALSE, conf.int=FALSE) text(3652, .4, "Competing risk: death", col=3) text(5840, .15,"Competing risk: progression", col=2) text(5480, .30,"KM:prog") ## End(Not run) ``` r None `tmerge` Time based merge for survival data -------------------------------------------- ### Description A common task in survival analysis is the creation of start,stop data sets which have multiple intervals for each subject, along with the covariate values that apply over that interval. This function aids in the creation of such data sets. ### Usage ``` tmerge(data1, data2, id,..., tstart, tstop, options) ``` ### Arguments | | | | --- | --- | | `data1` | the primary data set, to which new variables and/or observations will be added | | `data2` | second data set in which all the other arguments will be found | | `id` | subject identifier | | `...` | operations that add new variables or intervals, see below | | `tstart` | optional variable to define the valid time range for each subject, only used on an initial call | | `tstop` | optional variable to define the valid time range for each subject, only used on an initial call | | `options` | a list of options. Valid ones are idname, tstartname, tstopname, delay, na.rm, and tdcstart. See the explanation below.
|

### Details

The program is often run in multiple passes, the first of which defines the basic structure, and subsequent ones that add new variables to that structure. For a more complete explanation of how this routine works refer to the vignette on time-dependent variables.

There are 4 types of operational arguments: a time dependent covariate (tdc), cumulative count (cumtdc), event (event) or cumulative event (cumevent). Time dependent covariates change their values before an event; events are outcomes.

* `newname = tdc(y, x, init)`: A new time dependent covariate variable will be created. The argument `y` is assumed to be on the scale of the start and end time, and each instance describes the occurrence of a "condition" at that time. The second argument `x` is optional. In the case where `x` is missing the new variable starts at 0 for each subject and becomes 1 at the time of the event. If `x` is present the value of the time dependent covariate is initialized to the value of `init`, if present, or the `tdcstart` option otherwise, and is updated to the value of `x` at each observation. If the option `na.rm=TRUE` missing values of `x` are first removed, i.e., the update will not create missing values.
* `newname = cumtdc(y, x, init)`: Similar to tdc, except that the event count is accumulated over time for each subject. The variable `x` must be numeric.
* `newname = event(y, x)`: Mark an event at time y. In the usual case that `x` is missing the new 0/1 variable will be similar to the 0/1 status variable of a survival time.
* `newname = cumevent(y, x)`: Cumulative events.

The function adds three new variables to the output data set: `tstart`, `tstop`, and `id`. The `options` argument can be used to change these names. If, in the first call, the `id` argument is a simple name, that variable name will be used as the default for the `idname` option. If `data1` contains the `tstart` variable then that is used as the starting point for the created time intervals, otherwise the initial interval for each id will begin at 0 by default. This will lead to an invalid interval and subsequent error if, say, a death time were <= 0.

The `na.rm` option affects the creation of time-dependent covariates. Should a data row in `data2` that has a missing value for the variable be ignored (na.rm=FALSE, the default) or should it generate an observation with a value of NA? The default value leads to "last value carried forward" behavior. The `delay` option causes a time-dependent covariate's new value to be delayed; see the vignette for an example.

### Value

a data frame with two extra attributes, `tname` and `tcount`. The first contains the names of the key variables; its persistence from call to call allows the user to avoid constantly reentering the `options` argument. The tcount attribute contains counts of the match types. New time values that occur before the first interval for a subject are "early", those after the last interval for a subject are "late", and those that fall into a gap are of type "gap". All of these are considered to be outside the specified time frame for the given subject. An event of this type will be discarded. An observation in `data2` whose identifier matches no rows in `data1` is of type "missid" and is also discarded. A time-dependent covariate value will be applied to later intervals but will not generate a new time point in the output. The most common type will usually be "within", corresponding to those new times that fall inside an existing interval and cause it to be split into two.
Observations that fall exactly on the edge of an interval but within the (min, max] time for a subject are counted as being on a "leading" edge, "trailing" edge or "boundary". The first corresponds, for instance, to an occurrence at 17 for someone with intervals of (0,15] and (17,35]. A `tdc` at time 17 will affect this interval but an `event` at 17 would be ignored. An `event` occurrence at 15 would count in the (0,15] interval. The last case is where the main data set has touching intervals for a subject, e.g. (17,28] and (28,35], and a new occurrence lands at the join. Events will go to the earlier interval and counts to the latter one. A last column shows the number of additions where the id and time point were identical. When this occurs, the `tdc` and `event` operators will use the final value in the data (last edit wins), ignoring missing values, while the `cumtdc` and `cumevent` operators add up the values.

These extra attributes are ephemeral and will be discarded if the data frame is modified. This is intentional, since they would become invalid if, for instance, a subset were selected.

### Author(s)

Terry Therneau

### See Also

`<neardate>`

### Examples

```
# The pbc data set contains baseline data and follow-up status
# for a set of subjects with primary biliary cirrhosis, while the
# pbcseq data set contains repeated laboratory values for those
# subjects.
# The first data set contains data on 312 subjects in a clinical trial plus
# 106 that agreed to be followed off protocol, the second data set has data
# only on the trial subjects.
temp <- subset(pbc, id <= 312, select=c(id:sex, stage))  # baseline data
pbc2 <- tmerge(temp, temp, id=id, endpt = event(time, status))
pbc2 <- tmerge(pbc2, pbcseq, id=id, ascites = tdc(day, ascites),
               bili = tdc(day, bili), albumin = tdc(day, albumin),
               protime = tdc(day, protime), alk.phos = tdc(day, alk.phos))
fit <- coxph(Surv(tstart, tstop, endpt==2) ~ protime + log(bili), data=pbc2)
```
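The bookkeeping attributes described under 'Value' can be inspected directly after each pass; a minimal sketch, continuing the `pbc2` example above and using only the attribute names documented there:

```
# 'tname' holds the names of the key variables,
# 'tcount' the table of match-type counts
attr(pbc2, "tname")
attr(pbc2, "tcount")
```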
r None `coxph.wtest` Compute a quadratic form
---------------------------------------

### Description

This function is used internally by several survival routines. It computes a simple quadratic form, while properly dealing with missings.

### Usage

```
coxph.wtest(var, b, toler.chol = 1e-09)
```

### Arguments

| | | | --- | --- | | `var` | variance matrix | | `b` | vector | | `toler.chol` | tolerance for the internal Cholesky decomposition |

### Details

Compute b' V-inverse b. Equivalent to sum(b \* solve(V,b)), except for the case of redundant covariates in the original model, which lead to NA values in V and b.

### Value

a real number

### Author(s)

Terry Therneau

r None `statefig` Draw a state space figure.
--------------------------------------

### Description

For multi-state survival models it is useful to have a figure that shows the states and the possible transitions between them. This function creates a simple "box and arrows" figure. Its goal is simplicity.

### Usage

```
statefig(layout, connect, margin = 0.03, box = TRUE, cex = 1, col = 1,
         lwd=1, lty=1, bcol=col, acol=col, alwd=lwd, alty=lty, offset=0)
```

### Arguments

| | | | --- | --- | | `layout` | describes the layout of the boxes on the page. See the detailed description below. | | `connect` | a square matrix with one row for each state. If `connect[i,j] != 0` then an arrow is drawn from state i to state j. The row names of the matrix are used as the labels for the states. | | `margin` | the fraction of white space between the label and the surrounding box, and between the box and the arrows, as a function of the plot region size. | | `box` | should boxes be drawn? TRUE or FALSE. | | `cex, col, lty, lwd` | default graphical parameters used for the text and boxes. The last 3 can be a vector of values. | | `bcol` | color for the box, if it differs from that used for the text. | | `acol, alwd, alty` | color, line type and line width for the arrows. | | `offset` | used to slightly offset the arrows between two boxes x and y if there is a transition in both directions. The default of 0 leads to a double headed arrow in this case: two arrows are drawn, but they coincide. A positive value causes each arrow to shift to the left, from the view of someone standing at the foot of an arrow and looking towards the arrowhead; a negative offset shifts to the right. A value of 1 corresponds to the size of the plotting region. |

### Details

The arguments for color, line type and line width can all be vectors, in which case they are recycled as needed. Boxes and text are drawn in the order of the rownames of `connect`, and arrows are drawn in the usual R matrix order.

The `layout` argument is normally a vector of integers, e.g., the vector (1, 3, 2) describes a layout with 3 columns. The first has a single state, the second column has 3 states and the third has 2. The coordinates of the plotting region are 0 to 1 for both x and y. Within a column the centers of the boxes are evenly spaced, with 1/2 a space between the boxes and the margin, e.g., 4 boxes would be at 1/8, 3/8, 5/8 and 7/8. If `layout` were a 1 column matrix with values of (1, 3, 2) then the layout will have three rows with 1, 3, and 2 boxes per row, respectively. Alternatively, the user can supply a 2 column matrix that directly gives the centers.

The values of the connect matrix should be 0 for pairs of states that do not have a transition and values between 0 and 2 for those that do.
States are connected by an arc that passes through the centers of the two boxes and a third point that is between them. Specifically, consider a line segment joining the two centers and erect a second segment at right angles to the midpoint, of length d times the distance from center to midpoint. The arc passes through this point. A value of d=0 gives a straight line, d=1 a right hand half circle centered on the midpoint and d= -1 a left hand half circle. The `connect` matrix contains values of d+1 with -1 < d < 1.

The connecting arrows are drawn from (center of box 1 + offset) to (center of box 2 + offset), where the amount of offset (white space) is determined by the `box` and `margin` parameters. If a pair of states are too close together this can result in an arrow that points the wrong way.

### Value

a matrix containing the centers of the boxes, with the invisible attribute set.

### Note

The goal of this function is to make “good enough” figures as simply as possible, and thereby to encourage users to draw them. The `layout` argument was inspired by the `diagram` package, which can draw more complex and well decorated figures, e.g., many different shapes, shading, multiple types of connecting lines, etc., but at the price of greater complexity. Because curved lines are drawn as a set of short line segments, line types have almost no effect for that case.

### Author(s)

Terry Therneau

### Examples

```
# Draw a simple competing risks figure
states <- c("Entry", "Complete response", "Relapse", "Death")
connect <- matrix(0, 4, 4, dimnames=list(states, states))
connect[1, -1] <- c(1.1, 1, 0.9)
statefig(c(1, 3), connect)
```

r None `summary.survfit` Summary of a Survival Curve
----------------------------------------------

### Description

Returns a list containing the survival curve, confidence limits for the curve, and other information.

### Usage

```
## S3 method for class 'survfit'
summary(object, times, censored=FALSE, scale=1, extend=FALSE,
        rmean=getOption('survfit.rmean'), ...)
```

### Arguments

| | | | --- | --- | | `object` | the result of a call to the `survfit` function. | | `times` | vector of times; the returned matrix will contain 1 row for each time. The vector will be sorted into increasing order; missing values are not allowed. If `censored=T`, the default `times` vector contains all the unique times in `fit`, otherwise the default `times` vector uses only the event (death) times. | | `censored` | logical value: should the censoring times be included in the output? This is ignored if the `times` argument is present. | | `scale` | numeric value to rescale the survival time, e.g., if the input data to `survfit` were in days, `scale = 365.25` would scale the output to years. | | `extend` | logical value: if TRUE, prints information for all specified `times`, even if there are no subjects left at the end of the specified `times`. This is only used if the `times` argument is present. | | `rmean` | show restricted mean: see `<print.survfit>` for details | | `...` | for future methods |

### Value

a list with the following components:

| | | | --- | --- | | `surv` | the estimate of survival at time t+0. | | `time` | the timepoints on the curve. | | `n.risk` | the number of subjects at risk at time t-0 (but see the comments on weights in the `survfit` help file). | | `n.event` | if the `times` argument is missing, then this column is the number of events that occurred at time t.
Otherwise, it is the cumulative number of events that have occurred since the last time listed until time t+0. | | `n.entered` | This is present only for counting process survival data. If the `times` argument is missing, this column is the number of subjects that entered at time t. Otherwise, it is the cumulative number of subjects that have entered since the last time listed until time t. | | `n.exit.censored` | if the `times` argument is missing, this column is the number of subjects that left without an event at time t. Otherwise, it is the cumulative number of subjects that have left without an event since the last time listed until time t+0. This is only present for counting process survival data. | | `std.err` | the standard error of the survival value. | | `conf.int` | level of confidence for the confidence intervals of survival. | | `lower` | lower confidence limits for the curve. | | `upper` | upper confidence limits for the curve. | | `strata` | indicates stratification of curve estimation. If `strata` is not `NULL`, there are multiple curves in the result and the `surv`, `time`, `n.risk`, etc. vectors will contain multiple curves, pasted end to end. The levels of `strata` (a factor) are the labels for the curves. | | `call` | the statement used to create the `fit` object. | | `na.action` | same as for `fit`, if present. | | `table` | table of information that is returned from the `print.survfit` function. | | `type` | type of data censoring. Passed through from the fit object. |

### Details

This routine has two uses: printing out a survival curve at specified time points (often yearly), or extracting the values at specified time points for further processing. In the first case we normally want `extend=FALSE`, i.e., don't print out data past the end of the curve. If the `times` option only contains values beyond the last point in the curve then there is nothing to print and an error message will result. For the second usage we almost always want `extend=TRUE`, so that the results will have a predictable length.

The `survfit` object itself will have a row of information at each censoring or event time; it does not save information on each unique entry time. For printout at two time points t1, t2, this function will give the number at risk at the smallest event times that are >= t1 and >= t2, respectively, the survival curve at the largest recorded times <= t1 and <= t2, and the number of events and censorings in the interval t1 < t <= t2.

When the routine is called with counting process data many users are confused by counts that are too large. For example, `Surv(c(0,0, 5, 5), c(2, 3, 8, 10), c(1, 0, 1, 0))` followed by a request for the values at time 4. The `survfit` object has entries only at times 2, 3, 8, and 10; there are 2 subjects at risk at time 8, so that is what will be printed.

### See Also

`<survfit>`, `<print.summary.survfit>`

### Examples

```
summary( survfit( Surv(futime, fustat)~1, data=ovarian))
summary( survfit( Surv(futime, fustat)~rx, data=ovarian))
```

r None `print.summary.survexp` Print Survexp Summary
----------------------------------------------

### Description

Prints the results of `summary.survexp`

### Usage

```
## S3 method for class 'summary.survexp'
print(x, digits = max(options()$digits - 4, 3), ...)
```

### Arguments

| | | | --- | --- | | `x` | an object of class `summary.survexp`. | | `digits` | the number of digits to use in printing the result.
| | `...` | for future methods |

### Value

`x`, with the invisible flag set to prevent further printing.

### Author(s)

Terry Therneau

### See Also

`<summary.survexp>`, `<survexp>`

r None `finegray` Create data for a Fine-Gray model
---------------------------------------------

### Description

The Fine-Gray model can be fit by first creating a special data set, and then fitting a weighted Cox model to the result. This routine creates the data set.

### Usage

```
finegray(formula, data, weights, subset, na.action= na.pass, etype,
         prefix="fg", count, id, timefix=TRUE)
```

### Arguments

| | | | --- | --- | | `formula` | a standard model formula, with survival on the left and covariates on the right. | | `data` | an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. | | `weights` | optional vector of observation weights | | `subset` | an optional vector specifying a subset of observations to be used in the fitting process. | | `na.action` | a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options. | | `etype` | the event type for which a data set will be generated. The default is to use whichever is listed first in the multi-state survival object. | | `prefix` | the routine will add 4 variables to the data set: a start and end time for each interval, status, and a weight for the interval. The default names of these are "fgstart", "fgstop", "fgstatus", and "fgwt"; the `prefix` argument determines the initial portion of the new names. | | `count` | a variable name in the output data set for an optional variable that will contain the replication count for each row of the input data. If a row is expanded into multiple lines it will contain 1, 2, etc. | | `id` | optional, the variable name in the data set which identifies subjects. | | `timefix` | process times through the `aeqSurv` function to eliminate potential roundoff issues. |

### Details

The function expects a multi-state survival expression or variable as the left hand side of the formula, e.g. `Surv(atime, astat)` where `astat` is a factor whose first level represents censoring and remaining levels are states. The output data set will contain simple survival data (status = 0 or 1) for a single endpoint of interest. In the output data set subjects who did not experience the event of interest become censored subjects whose times are artificially extended over multiple intervals, with a decreasing case weight from interval to interval. The output data set will normally contain many more rows than the input.

Time dependent covariates are allowed, but not (currently) delayed entry. If there are time dependent covariates, e.g., if the input data set had `Surv(entry, exit, stat)` as the left hand side, then an `id` statement is required. The program does data checks in this case, and needs to know which rows belong to each subject.

The output data set will often have gaps. Say that there were events at time 50 and 100 (and none between) and censoring at 60, 70, and 80. Formally, a non-event subject at risk from 50 to 100 will have different weights in each of the 3 intervals 50-60, 60-70, and 80-100, but because the middle interval does not span any event times the subsequent Cox model will never use that row. The `finegray` output omits such rows. See the competing risks vignette for more details.
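To see the expansion concretely, the row counts of the input and output can be compared; a minimal sketch, using the same `mgus2` setup as the Examples section below:

```
# a minimal sketch: the finegray output has many more rows than the input
etime <- with(mgus2, ifelse(pstat==0, futime, ptime))
event <- with(mgus2, ifelse(pstat==0, 2*death, 1))
event <- factor(event, 0:2, labels=c("censor", "pcm", "death"))
pdata <- finegray(Surv(etime, event) ~ ., data=mgus2)
c(input = nrow(mgus2), output = nrow(pdata))
```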
### Value

a data frame

### Author(s)

Terry Therneau

### References

Fine JP and Gray RJ (1999). A proportional hazards model for the subdistribution of a competing risk. JASA 94:496-509.

Geskus RB (2011). Cause-Specific Cumulative Incidence Estimation and the Fine and Gray Model Under Both Left Truncation and Right Censoring. Biometrics 67, 39-49.

### See Also

`<coxph>`, `[aeqSurv](aeqsurv)`

### Examples

```
# Treat time to death and plasma cell malignancy as competing risks
etime <- with(mgus2, ifelse(pstat==0, futime, ptime))
event <- with(mgus2, ifelse(pstat==0, 2*death, 1))
event <- factor(event, 0:2, labels=c("censor", "pcm", "death"))

# FG model for PCM
pdata <- finegray(Surv(etime, event) ~ ., data=mgus2)
fgfit <- coxph(Surv(fgstart, fgstop, fgstatus) ~ age + sex,
               weight=fgwt, data=pdata)

# Compute the weights separately by sex
adata <- finegray(Surv(etime, event) ~ . + strata(sex),
                  data=mgus2, na.action=na.pass)
```

r None `yates_setup` Method for adding new models to the yates function.
------------------------------------------------------------------

### Description

This is a method which is called by the `yates` function, in order to set up the code to handle a particular model type. Methods for glm, coxph, and default are part of the survival package.

### Usage

```
yates_setup(fit, ...)
```

### Arguments

| | | | --- | --- | | `fit` | a fitted model object | | `...` | optional arguments for some methods |

### Details

If the predicted value should be the linear predictor, the function should return NULL. The `yates` routine has particularly efficient code for this case. Otherwise it should return a prediction function or a list of two elements containing the prediction function and a summary function. The prediction function will be passed the linear predictor as a single argument and should return a vector of predicted values.

### Note

See the vignette on population prediction for more details.

### Author(s)

Terry Therneau

### See Also

`<yates>`

r None `reliability` Reliability data sets
------------------------------------

### Description

A set of data for simple reliability analyses, taken from the book by Meeker and Escobar.

### Usage

```
data(reliability, package="survival")
```

### Details

* `capacitor`: Data from a factorial experiment on the life of glass capacitors as a function of voltage and operating temperature. There were 8 capacitors at each combination of temperature and voltage. Testing at each combination was terminated after the fourth failure.
  + `temperature`: temperature in degrees Celsius
  + `voltage`: applied voltage
  + `time`: time to failure
  + `status`: 1=failed, 0=censored
* `cracks`: Data on the time until the development of cracks in a set of 167 identical turbine parts. The parts were inspected at 8 selected times.
  + `day`: time of inspection
  + `fail`: number of parts found to have cracks, at this inspection
* `genfan`: Time to failure of 70 diesel engine fans.
  + `hours`: hours of service
  + `status`: 1=failure, 0=censored
* `ifluid`: A data frame with two variables describing the time to electrical breakdown of an insulating fluid.
  + `time`: hours to breakdown
  + `voltage`: test voltage in kV
* `imotor`: Breakdown of motor insulation as a function of temperature.
  + `temp`: temperature of the test
  + `time`: time to failure or censoring
  + `status`: 0=censored, 1=failed
* `turbine`: Each of 432 turbine wheels was inspected once to determine whether a crack had developed in the wheel or not.
  + `hours`: time of inspection (100s of hours)
  + `inspected`: number that were inspected
  + `failed`: number that failed
* `valveSeat`: Time to replacement of valve seats for 41 diesel engines. More than one seat may be replaced at a particular service, leading to duplicate times in the data set. The final inspection time for each engine will have status=0.
  + `id`: engine identifier
  + `time`: time of the inspection, in days
  + `status`: 1=replacement occurred, 0=not

### References

Meeker and Escobar, Statistical Methods for Reliability Data, 1998.

### Examples

```
survreg(Surv(time, status) ~ temperature + voltage, capacitor)
```

r None `nafld` Non-alcohol fatty liver disease
----------------------------------------

### Description

Data sets containing the data from a population study of non-alcoholic fatty liver disease (NAFLD). Subjects with the condition and a set of matched control subjects were followed forward for metabolic conditions, cardiac endpoints, and death.

### Usage

```
nafld1
nafld2
nafld3
data(nafld, package="survival")
```

### Format

`nafld1` is a data frame with 17549 observations on the following 10 variables.

`id` subject identifier

`age` age at entry to the study

`male` 0=female, 1=male

`weight` weight in kg

`height` height in cm

`bmi` body mass index

`case.id` the id of the NAFLD case to whom this subject is matched

`futime` time to death or last follow-up

`status` 0=alive at last follow-up, 1=dead

`nafld2` is a data frame with 400123 observations and 4 variables containing laboratory data

`id` subject identifier

`days` days since index date

`test` the type of value recorded

`value` the numeric value

`nafld3` is a data frame with 34340 observations and 3 variables containing outcomes

`id` subject identifier

`days` days since index date

`event` the endpoint that occurred

### Details

The primary reference for the NAFLD study is Allen (2018). The incidence of non-alcoholic fatty liver disease (NAFLD) has been rising rapidly in the last decade and it is now one of the main drivers of hepatology practice (Tapper, 2018). It is essentially the presence of excess fat in the liver, and parallels the ongoing obesity epidemic. Approximately 20-25% of NAFLD patients will develop the inflammatory state of non-alcoholic steatohepatitis (NASH), leading to fibrosis and eventual end-stage liver disease. NAFLD can be accurately diagnosed by MRI methods, but NASH diagnosis currently requires a biopsy.

The current study constructed a population cohort of all adult NAFLD subjects from 1997 to 2014 along with 4 potential controls for each case. To protect patient confidentiality all time intervals are in days since the index date; none of the dates from the original data were retained. Subject age is their integer age at the index date, and the subject identifier is an arbitrary integer. As a final protection, we include only a 90% random sample of the data. As a consequence, analysis results will not exactly match the original paper.

There are 3 data sets: `nafld1` contains baseline data and has one observation per subject, `nafld2` has one observation for each (time dependent) continuous measurement, and `nafld3` has one observation for each yes/no outcome that occurred.

### Source

Data obtained from the author.

### References

AM Allen, TM Therneau, JJ Larson, A Coward, VK Somers and PS Kamath, Nonalcoholic Fatty Liver Disease Incidence and Impact on Metabolic Burden and Death: A 20 Year Community Study, Hepatology 67:1726-1736, 2018.
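### Examples

A minimal sketch of a first look at the baseline data, using only the variables described in the Format section above (an illustration, not an analysis from the original paper):

```
# Kaplan-Meier curves for overall survival, split by sex (0=female, 1=male)
fit <- survfit(Surv(futime, status) ~ male, data = nafld1)
fit
plot(fit, xscale = 365.25, xlab = "Years since index date", ylab = "Survival")
```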
r None `tcut` Factors for person-year calculations
--------------------------------------------

### Description

Attaches categories for person-year calculations to a variable without losing the underlying continuous representation.

### Usage

```
tcut(x, breaks, labels, scale=1)
## S3 method for class 'tcut'
levels(x)
```

### Arguments

| | | | --- | --- | | `x` | numeric/date variable | | `breaks` | breaks between categories, which are right-continuous | | `labels` | labels for categories | | `scale` | multiply `x` and `breaks` by this. |

### Value

An object of class `tcut`

### See Also

`[cut](../../base/html/cut)`, `<pyears>`

### Examples

```
mdy.date <- function(m,d,y) as.Date(paste(ifelse(y<100, y+1900, y),
                                          m, d, sep='/'))
temp1 <- mdy.date(6,6,36)
temp2 <- mdy.date(6,6,55)
# Now compare the results from person-years
#
temp.age <- tcut(temp2-temp1, floor(c(-1, (18:31 * 365.24))),
                 labels=c('0-18', paste(18:30, 19:31, sep='-')))
temp.yr <- tcut(temp2, mdy.date(1,1,1954:1965), labels=1954:1964)
temp.time <- 3700   # total days of fu
py1 <- pyears(temp.time ~ temp.age + temp.yr, scale=1)  # output in days
py1
```

r None `bladder` Bladder Cancer Recurrences
-------------------------------------

### Description

Data on recurrences of bladder cancer, used by many people to demonstrate methodology for recurrent event modelling.

Bladder1 is the full data set from the study. It contains all three treatment arms and all recurrences for 118 subjects; the maximum observed number of recurrences is 9.

Bladder is the data set that appears most commonly in the literature. It uses only the 85 subjects with nonzero follow-up who were assigned to either thiotepa or placebo, and only the first four recurrences for any patient. The status variable is 1 for recurrence and 0 for everything else (including death for any reason). The data set is laid out in the competing risks format of the paper by Wei, Lin, and Weissfeld.

Bladder2 uses the same subset of subjects as bladder, but formatted in the (start, stop] or Andersen-Gill style. Note that in transforming from the WLW to the AG style data set there is a quite common programming mistake that leads to extra follow-up time for 12 subjects: all those with follow-up beyond their 4th recurrence. This "follow-up" is a side effect of throwing away all events after the fourth while retaining the last follow-up time variable from the original data. The bladder2 data set found here does not make this mistake, but some analyses in the literature have done so; it results in the addition of a small amount of immortal time bias and shrinks the fitted coefficients towards zero.
### Usage

```
bladder1
bladder
bladder2
data(cancer, package="survival")
```

### Format

bladder1

| | | | --- | --- | | id: | Patient id | | treatment: | Placebo, pyridoxine (vitamin B6), or thiotepa | | number: | Initial number of tumours (8=8 or more) | | size: | Size (cm) of largest initial tumour | | recur: | Number of recurrences | | start,stop: | The start and end time of each time interval | | status: | End of interval code, 0=censored, 1=recurrence, | | | 2=death from bladder disease, 3=death other/unknown cause | | rtumor: | Number of tumors found at the time of a recurrence | | rsize: | Size of largest tumor at a recurrence | | enum: | Event number (observation number within patient) |

bladder

| | | | --- | --- | | id: | Patient id | | rx: | Treatment 1=placebo 2=thiotepa | | number: | Initial number of tumours (8=8 or more) | | size: | size (cm) of largest initial tumour | | stop: | recurrence or censoring time | | enum: | which recurrence (up to 4) |

bladder2

| | | | --- | --- | | id: | Patient id | | rx: | Treatment 1=placebo 2=thiotepa | | number: | Initial number of tumours (8=8 or more) | | size: | size (cm) of largest initial tumour | | start: | start of interval (0 or previous recurrence time) | | stop: | recurrence or censoring time | | enum: | which recurrence (up to 4) |

### Source

Andrews DF, Hertzberg AM (1985), DATA: A Collection of Problems from Many Fields for the Student and Research Worker, New York: Springer-Verlag.

LJ Wei, DY Lin, L Weissfeld (1989), Regression analysis of multivariate incomplete failure time data by modeling marginal distributions. *Journal of the American Statistical Association*, **84**.

r None `royston` Compute Royston's D for a Cox model
----------------------------------------------

### Description

Compute the D statistic and R^2 for a coxph model, proposed by Royston and Sauerbrei.

### Usage

```
royston(fit, newdata, ties = TRUE, adjust = FALSE)
```

### Arguments

| | | | --- | --- | | `fit` | a coxph fit | | `newdata` | optional validation data set | | `ties` | make a correction for ties in the risk score | | `adjust` | adjust for possible overfitting |

### Details

The adjustment is based on the ratio r = (number of events)/(number of coefficients). For models which have sufficient sample size (r > 20) the adjustment will be small.

### Value

a vector containing the value of D, R^2_D, and the estimated standard error of D.

### References

P. Royston and W. Sauerbrei, A new measure of prognostic separation in survival data. Statistics in Medicine 23:723-748, 2004.

### Examples

```
# one of the examples from the paper
pbc2 <- na.omit(pbc)   # no missing values
cfit <- coxph(Surv(time, status==2) ~ age + log(bili) + edema +
              albumin + stage + copper, data=pbc2, ties="breslow")
royston(cfit)
```

r None `coxsurv.fit` A direct interface to the ‘computational engine’ of survfit.coxph
--------------------------------------------------------------------------------

### Description

This program is mainly supplied to allow other packages to invoke the survfit.coxph function at a ‘data’ level rather than a ‘user’ level. It does no checks on the input data that is provided, which can lead to unexpected errors if that data is wrong.
### Usage

```
coxsurv.fit(ctype, stype, se.fit, varmat, cluster, y, x, wt, risk,
            position, strata, oldid, y2, x2, risk2, strata2, id2,
            unlist=TRUE)
```

### Arguments

| | | | --- | --- | | `stype` | survival curve computation: 1=direct, 2=exp(-cumulative hazard) | | `ctype` | cumulative hazard computation: 1=Breslow, 2=Efron | | `se.fit` | if TRUE, compute standard errors | | `varmat` | the variance matrix of the coefficients | | `cluster` | vector to control robust variance | | `y` | the response variable used in the Cox model. (Missing values removed of course.) | | `x` | covariate matrix used in the Cox model | | `wt` | weight vector for the Cox model. If the model was unweighted use a vector of 1s. | | `risk` | the risk score exp(X beta + offset) from the fitted Cox model. | | `position` | optional argument controlling what is counted as 'censored'. Due to time dependent covariates, for instance, a subject might have start, stop times of (1,5)(5,30)(30,100). Times 5 and 30 are not 'real' censorings. Position is 1 for a real start, 2 for an actual end, 3 for both, 0 for neither. | | `strata` | strata variable used in the Cox model. This will be a factor. | | `oldid` | identifier for subjects with multiple rows in the original data. | | `y2, x2, risk2, strata2` | variables for the hypothetical subjects, for which prediction is desired | | `id2` | optional; if present and not NULL this should be a vector of identifiers of length `nrow(x2)`. A non-null value signifies that `x2` contains time dependent covariates, in which case this identifies which rows of `x2` go with each subject. | | `unlist` | if `FALSE` the result will be a list with one element for each stratum. Otherwise the strata are “unpacked” into the form found in a `survfit` object. |

### Value

a list containing nearly all the components of a `survfit` object. All that is missing is to add the confidence intervals, the type of the original model's response (as in a coxph object), and the class.

### Note

The source code for both this function and `survfit.coxph` is written using noweb. For complete documentation see the `inst/sourcecode.pdf` file.

### Author(s)

Terry Therneau

### See Also

`<survfit.coxph>`

r None `lines.survfit` Add Lines or Points to a Survival Plot
-------------------------------------------------------

### Description

Often used to add the expected survival curve(s) to a Kaplan-Meier plot generated with `plot.survfit`.

### Usage

```
## S3 method for class 'survfit'
lines(x, type="s", pch=3, col=1, lty=1, lwd=1, cex=1, mark.time=FALSE,
      xmax, fun, conf.int=FALSE, conf.times, conf.cap=.005,
      conf.offset=.012, conf.type = c("log", "log-log", "plain",
      "logit", "arcsin"), mark, noplot="(s0)", cumhaz= FALSE, ...)
## S3 method for class 'survexp'
lines(x, type="l", ...)
## S3 method for class 'survfit'
points(x, fun, censor=FALSE, col=1, pch, noplot="(s0)", cumhaz=FALSE, ...)
```

### Arguments

| | | | --- | --- | | `x` | a survival object, generated from the `survfit` or `survexp` functions. | | `type` | the line type, as described in `lines`. The default is a step function for `survfit` objects, and a connected line for `survexp` objects. All other arguments for `lines.survexp` are identical to those for `lines.survfit`. | | `col, lty, lwd, cex` | vectors giving the mark symbol, color, line type, line width and character size for the added curves. Of this set only color is applicable to `points`.
| | `pch` | plotting characters for points, in the style of `matplot`, i.e., either a single string of characters of which the first will be used for the first curve, etc.; or a vector of characters or integers, one element per curve. | | `mark` | a historical alias for `pch` | | `censor` | should censoring times be displayed for the `points` function? | | `mark.time` | controls the labeling of the curves. If `FALSE`, no labeling is done. If `TRUE`, then curves are marked at each censoring time. If `mark.time` is a numeric vector, then curves are marked at the specified time points. | | `xmax` | optional cutoff for the right hand of the curves. | | `fun` | an arbitrary function defining a transformation of the survival curve. For example `fun=log` is an alternative way to draw a log-survival curve (but with the axis labeled with log(S) values). Four often used transformations can be specified with a character argument instead: "log" is the same as using the `log=T` option, "event" plots cumulative events (f(y) = 1-y), "cumhaz" plots the cumulative hazard function (f(y) = -log(y)) and "cloglog" creates a complementary log-log survival plot (f(y) = log(-log(y))) along with log scale for the x-axis. | | `conf.int` | if `TRUE`, confidence bands for the curves are also plotted. If set to `"only"`, then only the CI bands are plotted, and the curve itself is left off. This can be useful for fine control over the colors or line types of a plot. A numeric value, e.g. `conf.int = .90`, can be used to specify the confidence level of the bands. | | `conf.times` | optional vector of times at which to place a confidence bar on the curve(s). If present, these will be used instead of confidence bands. | | `conf.cap` | width of the horizontal cap on top of the confidence bars; only used if conf.times is used. A value of 1 is the width of the plot region. | | `conf.offset` | the offset for confidence bars, when there are multiple curves on the plot. A value of 1 is the width of the plot region. If this is a single number then each curve's bars are offset by this amount from the prior curve's bars; if it is a vector the values are used directly. | | `conf.type` | one of `"none"`, `"plain"`, `"log"` (the default), `"log-log"`, or `"logit"`. Only enough of the string to uniquely identify it is necessary. The first option causes confidence intervals not to be generated. The second causes the standard intervals `curve +- k *se(curve)`, where k is determined from `conf.int`. The log option calculates intervals based on the cumulative hazard or log(survival). The log-log option bases the intervals on the log hazard or log(-log(survival)), and the logit option on log(survival/(1-survival)). | | `noplot` | for multi-state models, curves with this label will not be plotted. The default corresponds to an unspecified state. | | `cumhaz` | plot the cumulative hazard, rather than the survival or probability in state. | | `...` | other graphical parameters |

### Details

When the `survfit` function creates a multi-state survival curve the resulting object has class ‘survfitms’. The only difference in the plots is that it defaults to a curve that goes from lower left to upper right (starting at 0), where survival curves default to starting at 1 and going down. All other options are identical.

If the user sets an explicit range in an earlier `plot.survfit` call, e.g. via `xlim` or `xmax`, subsequent calls to this function remember the right hand cutoff. This memory can be erased by `options(plot.survfit) <- NULL`.
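Confidence bars at selected times, via `conf.times`, can be a tidy alternative to full bands when several curves overlap; a minimal sketch, assuming the `aml` data (times in months):

```
fit <- survfit(Surv(time, status) ~ x, data = aml)
plot(fit, col = 1:2, conf.int = FALSE)
# confidence bars at 12 and 24 months instead of full confidence bands
lines(fit, col = 1:2, conf.times = c(12, 24))
```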
### Value

a list with components `x` and `y`, containing the coordinates of the last point on each of the curves (but not of the confidence limits). This may be useful for labeling.

### Side Effects

one or more curves are added to the current plot.

### See Also

`[lines](../../graphics/html/lines)`, `[par](../../graphics/html/par)`, `<plot.survfit>`, `<survfit>`, `<survexp>`.

### Examples

```
fit <- survfit(Surv(time, status==2) ~ sex, pbc, subset=1:312)
plot(fit, mark.time=FALSE, xscale=365.25,
     xlab='Years', ylab='Survival')
lines(fit[1], lwd=2)   # darken the first curve and add marks

# Add expected survival curves for the two groups,
#   based on the US census data
# The data set does not have entry date, use the midpoint of the study
efit <- survexp(~sex, data=pbc, times= (0:24)*182, ratetable=survexp.us,
                rmap=list(sex=sex, age=age*365.25,
                          year=as.Date('1979/01/01')))
temp <- lines(efit, lty=2, lwd=2:1)
text(temp, c("Male", "Female"), adj= -.1)  # labels just past the ends
title(main="Primary Biliary Cirrhosis, Observed and Expected")
```

r None `survregDtest` Verify a survreg distribution
---------------------------------------------

### Description

This routine is called by `survreg` to verify that a distribution object is valid.

### Usage

```
survregDtest(dlist, verbose = F)
```

### Arguments

| | | | --- | --- | | `dlist` | the list describing a survival distribution | | `verbose` | return a simple TRUE/FALSE from the test for validity (the default), or a verbose description of any flaws. |

### Details

If the `survreg` function rejects your user-supplied distribution as invalid, this routine will tell you why it did so.

### Value

TRUE if the distribution object passes the tests, and either FALSE or a vector of character strings if not.

### Author(s)

Terry Therneau

### See Also

`<survreg.distributions>`, `<survreg>`

### Examples

```
# An invalid distribution (it should have "init =" on line 2)
# survreg would give an error message
mycauchy <- list(name='Cauchy',
                 init<- function(x, weights, ...)
                       c(median(x), mad(x)),
                 density= function(x, parms) {
                       temp <- 1/(1 + x^2)
                       cbind(.5 + atan(temp)/pi, .5+ atan(-temp)/pi,
                             temp/pi, -2 *x*temp, 2*temp^2*(4*x^2*temp -1))
                       },
                 quantile= function(p, parms) tan((p-.5)*pi),
                 deviance= function(...) stop('deviance residuals not defined')
                 )
survregDtest(mycauchy, TRUE)
```

r None `survreg.object` Parametric Survival Model Object
--------------------------------------------------

### Description

This class of objects is returned by the `survreg` function to represent a fitted parametric survival model. Objects of this class have methods for the functions `print`, `summary`, `predict`, and `residuals`.

### COMPONENTS

The following components must be included in a legitimate `survreg` object.

coefficients: the coefficients of the `linear.predictors`, which multiply the columns of the model matrix. It does not include the estimate of error (sigma). The names of the coefficients are the names of the single-degree-of-freedom effects (the columns of the model matrix). If the model is over-determined there will be missing values in the coefficients corresponding to non-estimable coefficients.

icoef: coefficients of the baseline model, which will contain the intercept and log(scale), or multiple scale factors for a stratified model.

var: the variance-covariance matrix for the parameters, including the log(scale) parameter(s).

loglik: a vector of length 2, containing the log-likelihood for the baseline and full models.
iter: the number of iterations required.

linear.predictors: the linear predictor for each subject.

df: the degrees of freedom for the final model. For a penalized model this will be a vector with one element per term.

scale: the scale factor(s), with length equal to the number of strata.

idf: degrees of freedom for the initial model.

means: a vector of the column means of the coefficient matrix.

dist: the distribution used in the fit.

weights: included for a weighted fit.

The object will also have the following components found in other model results (some are optional): `linear predictors`, `weights`, `x`, `y`, `model`, `call`, `terms` and `formula`. See `lm`.

### See Also

`<survreg>`, `[lm](../../stats/html/lm)`

r None `quantile.survfit` Quantiles from a survfit object
---------------------------------------------------

### Description

Retrieve quantiles and confidence intervals for them from a survfit object.

### Usage

```
## S3 method for class 'survfit'
quantile(x, probs = c(0.25, 0.5, 0.75), conf.int = TRUE, scale,
         tolerance= sqrt(.Machine$double.eps), ...)
## S3 method for class 'survfitms'
quantile(x, probs = c(0.25, 0.5, 0.75), conf.int = TRUE, scale,
         tolerance= sqrt(.Machine$double.eps), ...)
```

### Arguments

| | | | --- | --- | | `x` | a result of the survfit function | | `probs` | numeric vector of probabilities with values in [0,1] | | `conf.int` | should lower and upper confidence limits be returned? | | `scale` | optional scale factor, e.g., `scale=365.25` would return results in years if the fit object were in days. | | `tolerance` | tolerance for checking that the survival curve exactly equals one of the quantiles | | `...` | optional arguments for other methods |

### Details

The kth quantile for a survival curve S(t) is the location at which a horizontal line at height p = 1-k intersects the plot of S(t). Since S(t) is a step function, it is possible for the curve to have a horizontal segment at exactly 1-k, in which case the midpoint of the horizontal segment is returned. This mirrors the standard behavior of the median when data is uncensored. If the survival curve does not fall to 1-k, then that quantile is undefined. In order to be consistent with other quantile functions, the argument `probs` of this function applies to the cumulative distribution function F(t) = 1-S(t).

Confidence limits for the values are based on the intersection of the horizontal line at 1-k with the upper and lower limits for the survival curve. Hence confidence limits use the same p-value as was in effect when the curve was created, and will differ depending on the `conf.type` option of `survfit`. If the survival curves have no confidence bands, confidence limits for the quantiles are not available.

When a horizontal segment of the survival curve exactly matches one of the requested quantiles the returned value will be the midpoint of the horizontal segment; this agrees with the usual definition of a median for uncensored data. Since the survival curve is computed as a series of products, however, there may be round-off error. Assume for instance a sample of size 20 with no tied times and no censoring. The survival curve after the 10th death is (19/20)(18/19)(17/18) ... (10/11) = 10/20, but the computed result will not be exactly 0.5. Any horizontal segment whose absolute difference with a requested percentile is less than `tolerance` is considered to be an exact match.

### Value

The quantiles will be a vector if the `survfit` object contains only a single curve, otherwise it will be a matrix or array.
In this case the last dimension will index the quantiles. If confidence limits are requested, then the result will be a list with components `quantile`, `lower`, and `upper`; otherwise it is the vector or matrix of quantiles.

### Author(s)

Terry Therneau

### See Also

`<survfit>`, `<print.survfit>`, `[qsurvreg](dsurvreg)`

### Examples

```
fit <- survfit(Surv(time, status) ~ ph.ecog, data=lung)
quantile(fit)

cfit <- coxph(Surv(time, status) ~ age + strata(ph.ecog), data=lung)
csurv <- survfit(cfit, newdata=data.frame(age=c(40, 60, 80)),
                 conf.type="none")
temp <- quantile(csurv, 1:5/10)
temp[2,3,]                    # quantiles for second level of ph.ecog, age=80
quantile(csurv[2,3], 1:5/10)  # quantiles of a single curve, same result
```
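Because the default `conf.int=TRUE` returns a list, the individual pieces can be extracted directly; a short sketch continuing the first fit above, using only the components named under 'Value':

```
# median survival and its confidence limits, one row per ph.ecog group
med <- quantile(fit, probs = 0.5)
med$quantile
med$lower
med$upper
```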
r None `untangle.specials` Help Process the ‘specials’ Argument of the ‘terms’ Function.
----------------------------------------------------------------------------------

### Description

Given a `terms` structure and a desired special name, this returns an index appropriate for subscripting the `terms` structure and another appropriate for the data frame.

### Usage

```
untangle.specials(tt, special, order=1)
```

### Arguments

| | | | --- | --- | | `tt` | a `terms` object. | | `special` | the name of a special function, presumably used in the terms object. | | `order` | the order of the desired terms. If set to 2, interactions with the special function will be included. |

### Value

a list with two components:

| | | | --- | --- | | `vars` | a vector of variable names, as would be found in the data frame, of the specials. | | `terms` | a numeric vector, suitable for subscripting the terms structure, that indexes the terms in the expanded model formula which involve the special. |

### Examples

```
formula <- Surv(tt,ss) ~ x + z*strata(id)
tms <- terms(formula, specials="strata")
## the specials attribute
attr(tms, "specials")
## main effects
untangle.specials(tms, "strata")
## and interactions
untangle.specials(tms, "strata", order=1:2)
```

r None `summary.pyears` Summary function for pyears objects
----------------------------------------------------

### Description

Create a printable table of a person-years result.

### Usage

```
## S3 method for class 'pyears'
summary(object, header = TRUE, call = header, n = TRUE, event = TRUE,
        pyears = TRUE, expected = TRUE, rate = FALSE, rr = expected,
        ci.r = FALSE, ci.rr = FALSE, totals = FALSE, legend = TRUE,
        vline = FALSE, vertical = TRUE, nastring = ".",
        conf.level = 0.95, scale = 1, ...)
```

### Arguments

| | | | --- | --- | | `object` | a pyears object | | `header` | print out a header giving the total number of observations, events, person-years, and total time (if any) omitted from the table | | `call` | print out a copy of the call | | `n, event, pyears, expected` | logical arguments: should these elements be printed in the table? | | `rate, ci.r` | logical arguments: should the incidence rate and/or its confidence interval be given in the table? | | `rr, ci.rr` | logical arguments: should the hazard ratio and/or its confidence interval be given in the table? | | `totals` | should row and column totals be added? | | `legend` | should a legend be included in the printout? | | `vline` | should vertical lines be included in the printed tables? | | `vertical` | when there is only a single predictor, should the table be printed with the predictor on the left (vertical=TRUE) or across the top (vertical=FALSE)? | | `nastring` | what to use for missing values in the table. Some of these are structural, e.g., risk ratios for a cell with no follow-up time. | | `conf.level` | confidence level for any confidence intervals | | `scale` | a scaling factor for printed rates | | `...` | optional arguments which will be passed to the `format` function; common choices would be digits=2 or nsmall=1. |

### Details

The `pyears` function is often used to create initial descriptions of a survival or time-to-event variable; the type of material that is often found in “table 1” of a paper. The summary routine prints this information out using one of the pandoc table styles. A primary reason for choosing this style is that Rstudio is then able to automatically render the results in multiple formats: html, rtf, latex, etc.
If the `pyears` call has only a single covariate then the table will have that covariate as one margin and the statistics of interest as the other. If the `pyears` call has two predictors then those two predictors are used as margins of the table, while each cell of the table contains the statistics of interest as multiple rows within the cell. If there are more than two predictors then multiple tables are produced, in the same order as the standard R printout for an array.

The "N" entry of a pyears object is the number of observations which contributed to a particular cell. When the original call includes `tcut` objects then a single observation may contribute to multiple cells.

### Value

a copy of the object

### Notes

The pandoc system has four table types: with or without vertical bars, and with single or multiple rows of data in each cell. This routine produces all 4 styles depending on options, but currently not all of them are recognized by the Rstudio-pandoc pipeline. (And we don't yet see why.)

### Author(s)

Terry Therneau and Elizabeth Atkinson

### See Also

`<cipoisson>`, `<pyears>`, `[format](../../base/html/format)`

r None `clogit` Conditional logistic regression
-----------------------------------------

### Description

Estimates a logistic regression model by maximising the conditional likelihood. Uses a model formula of the form `case.status~exposure+strata(matched.set)`. The default is to use the exact conditional likelihood; a commonly used approximate conditional likelihood is provided for compatibility with older software.

### Usage

```
clogit(formula, data, weights, subset, na.action,
       method=c("exact", "approximate", "efron", "breslow"), ...)
```

### Arguments

| | | | --- | --- | | `formula` | Model formula | | `data` | data frame | | `weights` | optional, names the variable containing case weights | | `subset` | optional, subset the data | | `na.action` | optional na.action argument. By default the global option `na.action` is used. | | `method` | use the correct (exact) calculation in the conditional likelihood or one of the approximations | | `...` | optional arguments, which will be passed to `coxph.control` |

### Details

It turns out that the loglikelihood for a conditional logistic regression model is identical to the loglikelihood from a Cox model with a particular data structure. Proving this is a nice homework exercise for a PhD statistics class; not too hard, but the fact that it is true is surprising.

When a well tested Cox model routine is available many packages use this ‘trick’ rather than writing a new software routine from scratch, and this is what the clogit routine does. In detail, a stratified Cox model with each case/control group assigned to its own stratum, time set to a constant, status of 1=case 0=control, and using the exact partial likelihood has the same likelihood formula as a conditional logistic regression. The clogit routine creates the necessary dummy variable of times (all 1) and the strata, then calls coxph.

The computation of the exact partial likelihood can be very slow, however. If a particular stratum had, say, 10 events out of 20 subjects we have to add up a denominator that involves all possible ways of choosing 10 out of 20, which is 20!/(10! 10!) = 184756 terms. Gail et al describe a fast recursion method which partly ameliorates this; it was incorporated into version 2.36-11 of the survival package.
The computation remains infeasible for very large groups of ties, say 100 ties out of 500 subjects, and may even lead to integer overflow for the subscripts; in this latter case the routine will refuse to undertake the task. The Efron approximation is normally a sufficiently accurate substitute.

Most of the time conditional logistic modeling is applied to data with 1 case + k controls per set, in which case all of the approximations for ties lead to exactly the same result. The 'approximate' option maps to the Breslow approximation for the Cox model, for historical reasons.

Case weights are not allowed when the exact option is used, as the likelihood is not defined for fractional weights. Even with integer case weights it is not clear how they should be handled. For instance if there are two deaths in a stratum, one with weight=1 and one with weight=2, should the likelihood calculation consider all subsets of size 2 or all subsets of size 3? Consequently, case weights are ignored by the routine in this case.

### Value

An object of class `"clogit"`, which is a wrapper for a `"coxph"` object.

### References

Mitchell H Gail, Jay H Lubin and Lawrence V Rubinstein. Likelihood calculations for matched case-control studies and survival studies with tied death times. Biometrika 68:703-707, 1980.

John A. Logan. A multivariate model for mobility tables. Am J Sociology 89:324-349, 1983.

### Author(s)

Thomas Lumley

### See Also

`<strata>`, `<coxph>`, `[glm](../../stats/html/glm)`

### Examples

```
## Not run: clogit(case ~ spontaneous + induced + strata(stratum), data=infert)

# A multinomial response recoded to use clogit
#  The revised data set has one copy per possible outcome level, with new
#  variable tocc = target occupation for this copy, and case = whether
#  that is the actual outcome for each subject.
# See the reference below for the data.
resp <- levels(logan$occupation)
n <- nrow(logan)
indx <- rep(1:n, length(resp))
logan2 <- data.frame(logan[indx,],
                     id = indx,
                     tocc = factor(rep(resp, each=n)))
logan2$case <- (logan2$occupation == logan2$tocc)
clogit(case ~ tocc + tocc:education + strata(id), logan2)
```

r None `Surv` Create a Survival Object
--------------------------------

### Description

Create a survival object, usually used as a response variable in a model formula. Argument matching is special for this function; see Details below.

### Usage

```
Surv(time, time2, event,
     type=c('right', 'left', 'interval', 'counting', 'interval2',
            'mstate'),
     origin=0)
is.Surv(x)
```

### Arguments

| | | | --- | --- | | `time` | for right censored data, this is the follow up time. For interval data, the first argument is the starting time for the interval. | | `event` | The status indicator, normally 0=alive, 1=dead. Other choices are `TRUE`/`FALSE` (`TRUE` = death) or 1/2 (2=death). For interval censored data, the status indicator is 0=right censored, 1=event at `time`, 2=left censored, 3=interval censored. For multiple endpoint data the event variable will be a factor, whose first level is treated as censoring. Although unusual, the event indicator can be omitted, in which case all subjects are assumed to have an event. | | `time2` | ending time of the interval for interval censored or counting process data only. Intervals are assumed to be open on the left and closed on the right, `(start, end]`. For counting process data, `event` indicates whether an event occurred at the end of the interval. | | `type` | character string specifying the type of censoring.
Possible values are `"right"`, `"left"`, `"counting"`, `"interval"`, `"interval2"` or `"mstate"`. | | `origin` | for counting process data, the hazard function origin. This option was intended to be used in conjunction with a model containing time dependent strata in order to align the subjects properly when they cross over from one strata to another, but it has rarely proven useful. | | `x` | any R object. | ### Details When the `type` argument is missing the code assumes a type based on the following rules: * If there are two unnamed arguments, they will match `time` and `event` in that order. If there are three unnamed arguments they match `time`, `time2` and `event`. * If the event variable is a factor then type `mstate` is assumed. Otherwise type `right` if there is no `time2` argument, and type `counting` if there is. As a consequence the `type` argument will normally be omitted. When the survival type is "mstate" then the status variable will be treated as a factor. The first level of the factor is taken to represent censoring and remaining ones a transition to the given state. (If the status variable is a factor then `mstate` is assumed.) Interval censored data can be represented in two ways. For the first use `type = "interval"` and the codes shown above. In that usage the value of the `time2` argument is ignored unless event=3. The second approach is to think of each observation as a time interval with (-infinity, t) for left censored, (t, infinity) for right censored, (t,t) for exact and (t1, t2) for an interval. This is the approach used for type = interval2. Infinite values can be represented either by actual infinity (Inf) or NA. The second form has proven to be the more useful one. Presently, the only methods allowing interval censored data are the parametric models computed by `survreg` and survival curves computed by `survfit`; for both of these, the distinction between open and closed intervals is unimportant. The distinction is important for counting process data and the Cox model. The function tries to distinguish between the use of 0/1 and 1/2 coding for censored data via the condition `if (max(status)==2)`. If 1/2 coding is used and all the subjects are censored, it will guess wrong. In any questionable case it is safer to use logical coding, e.g., `Surv(time, status==3)` would indicate that '3' is the code for an event. For multi-state survival the status variable will be a factor, whose first level is assumed to correspond to censoring. Surv objects can be subscripted either as a vector, e.g. `x[1:3]` using a single subscript, in which case the `drop` argument is ignored and the result will be a survival object; or as a matrix by using two subscripts. If the second subscript is missing and `drop=F` (the default), the result of the subscripting will be a Surv object, e.g., `x[1:3,,drop=F]`, otherwise the result will be a matrix (or vector), in accordance with the default behavior for subscripting matrices. ### Value An object of class `Surv`. There are methods for `print`, `is.na`, and subscripting survival objects. `Surv` objects are implemented as a matrix of 2 or 3 columns that has further attributes. These include the type (left censored, right censored, counting process, etc.) and labels for the states for multi-state objects. Any attributes of the input arguments are also preserved in `inputAttributes`. 
This may be useful for other packages that have attached further information to data items such as labels; none of the routines in the survival package make use of these values, however.

In the case of `is.Surv`, a logical value `TRUE` if `x` inherits from class `"Surv"`, otherwise `FALSE`.

### Note

The use of 1/2 coding for status is an interesting historical artifact. For data contained on punch cards, IBM 360 Fortran treated blank as a zero, which led to a policy within the Mayo Clinic section of Biostatistics to never use "0" as a data value since one could not distinguish it from a missing value. Policy became habit, as is often the case, and the use of 1/2 coding for alive/dead endured long after the demise of the punch cards that had sired the practice. At the time `Surv` was written many Mayo data sets still used this convention, e.g., the 1994 `lung` data set found in the package.

### See Also

`<coxph>`, `<survfit>`, `<survreg>`, `<lung>`.

### Examples

```
with(aml, Surv(time, status))
survfit(Surv(time, status) ~ ph.ecog, data=lung)
Surv(heart$start, heart$stop, heart$event)
```

r None `predict.survreg` Predicted Values for a ‘survreg’ Object
----------------------------------------------------------

### Description

Predicted values for a `survreg` object

### Usage

```
## S3 method for class 'survreg'
predict(object, newdata,
 type=c("response", "link", "lp", "linear", "terms", "quantile", "uquantile"),
 se.fit=FALSE, terms=NULL, p=c(0.1, 0.9), na.action=na.pass, ...)
```

### Arguments

| | |
| --- | --- |
| `object` | result of a model fit using the `survreg` function. |
| `newdata` | data for prediction. If absent, predictions are for the subjects used in the original fit. |
| `type` | the type of predicted value. This can be on the original scale of the data (response), the linear predictor (`"linear"`, with `"lp"` as an allowed abbreviation), a predicted quantile on the original scale of the data (`"quantile"`), a quantile on the linear predictor scale (`"uquantile"`), or the matrix of terms for the linear predictor (`"terms"`). At this time `"link"` and linear predictor (`"lp"`) are identical. |
| `se.fit` | if `TRUE`, include the standard errors of the prediction in the result. |
| `terms` | subset of terms. The default for prediction type `"terms"` is a matrix with one column for every term (excluding the intercept) in the model. |
| `p` | vector of percentiles. This is used only for quantile predictions. |
| `na.action` | applies only when the `newdata` argument is present, and defines the missing value action for the new data. The default is to include all observations. |
| `...` | for future methods |

### Value

a vector or matrix of predicted values.

### References

Escobar and Meeker (1992). Assessing influence in regression analysis with censored data. *Biometrics,* 48, 507-528.

### See Also

`<survreg>`, `<residuals.survreg>`

### Examples

```
# Draw figure 1 from Escobar and Meeker, 1992.
fit <- survreg(Surv(time,status) ~ age + I(age^2), data=stanford2,
 dist='lognormal')
with(stanford2, plot(age, time, xlab='Age', ylab='Days',
 xlim=c(0,65), ylim=c(.1, 10^5), log='y', type='n'))
with(stanford2, points(age, time, pch=c(2,4)[status+1], cex=.7))
pred <- predict(fit, newdata=list(age=1:65), type='quantile',
 p=c(.1, .5, .9))
matlines(1:65, pred, lty=c(2,1,2), col=1)

# Predicted Weibull survival curve for a lung cancer subject with
# ECOG score of 2
lfit <- survreg(Surv(time, status) ~ ph.ecog, data=lung)
pct <- 1:98/100 # The 100th percentile of predicted survival is at +infinity
ptime <- predict(lfit, newdata=data.frame(ph.ecog=2), type='quantile',
 p=pct, se=TRUE)
matplot(cbind(ptime$fit, ptime$fit + 2*ptime$se.fit,
 ptime$fit - 2*ptime$se.fit)/30.5, 1-pct,
 xlab="Months", ylab="Survival", type='l', lty=c(1,2,2), col=1)
```

r None `solder` Data from a soldering experiment
------------------------------------------

### Description

In 1988 an experiment was designed and implemented at one of AT&T's factories to investigate alternatives in the "wave soldering" procedure for mounting electronic components to printed circuit boards. The experiment varied a number of factors relevant to the process. The response, measured by eye, is the number of visible solder skips.

### Usage

```
solder
data(solder, package="survival")
```

### Format

A data frame with 900 observations on the following 6 variables.

`Opening` the amount of clearance around the mounting pad (3 levels)

`Solder` the amount of solder (Thick or Thin)

`Mask` type and thickness of the material used for the solder mask (A1.5, A3, A6, B3, B6)

`PadType` the geometry and size of the mounting pad (10 levels)

`Panel` each board was divided into 3 panels

`skips` the number of skips

### Details

This data set is used as a detailed example in chapter 1 of Chambers and Hastie. Observations 1-360 and 541-900 form a balanced design of 3\*2\*10\*3 = 180 observations for each of four of the mask values (A1.5, A3, B3, B6), while rows 361-540 match 3 of the 6 Solder\*Opening combinations with mask A6 and the other 3 with mask A3.

### References

J Chambers and T Hastie, Statistical models in S. Chapman and Hall, 1993.

### Examples

```
# The balanced subset used by Chambers and Hastie
# contains the first 180 of each mask and deletes mask A6.
index <- 1 + (1:nrow(solder)) - match(solder$Mask, solder$Mask)
solder.balance <- droplevels(subset(solder, Mask != "A6" & index <= 180))
```

r None `survdiff` Test Survival Curve Differences
-------------------------------------------

### Description

Tests if there is a difference between two or more survival curves using the *G-rho* family of tests, or for a single curve against a known alternative.

### Usage

```
survdiff(formula, data, subset, na.action, rho=0, timefix=TRUE)
```

### Arguments

| | |
| --- | --- |
| `formula` | a formula expression as for other survival models, of the form `Surv(time, status) ~ predictors`. For a one-sample test, the predictors must consist of a single `offset(sp)` term, where `sp` is a vector giving the survival probability of each subject. For a k-sample test, each unique combination of predictors defines a subgroup. A `strata` term may be used to produce a stratified test. To cause missing values in the predictors to be treated as a separate group, rather than being omitted, use the `strata` function with its `na.group=T` argument. |
| `data` | an optional data frame in which to interpret the variables occurring in the formula.
| | `subset` | expression indicating which subset of the rows of data should be used in the fit. This can be a logical vector (which is replicated to have length equal to the number of observations), a numeric vector indicating which observation numbers are to be included (or excluded if negative), or a character vector of row names to be included. All observations are included by default. | | `na.action` | a missing-data filter function. This is applied to the `model.frame` after any subset argument has been used. Default is `options()$na.action`. | | `rho` | a scalar parameter that controls the type of test. | | `timefix` | process times through the `aeqSurv` function to eliminate potential roundoff issues. | ### Value a list with components: | | | | --- | --- | | `n` | the number of subjects in each group. | | `obs` | the weighted observed number of events in each group. If there are strata, this will be a matrix with one column per stratum. | | `exp` | the weighted expected number of events in each group. If there are strata, this will be a matrix with one column per stratum. | | `chisq` | the chisquare statistic for a test of equality. | | `var` | the variance matrix of the test. | | `strata` | optionally, the number of subjects contained in each stratum. | ### METHOD This function implements the G-rho family of Harrington and Fleming (1982), with weights on each death of *S(t)^rho*, where *S* is the Kaplan-Meier estimate of survival. With `rho = 0` this is the log-rank or Mantel-Haenszel test, and with `rho = 1` it is equivalent to the Peto & Peto modification of the Gehan-Wilcoxon test. If the right hand side of the formula consists only of an offset term, then a one sample test is done. To cause missing values in the predictors to be treated as a separate group, rather than being omitted, use the `factor` function with its `exclude` argument. ### References Harrington, D. P. and Fleming, T. R. (1982). A class of rank test procedures for censored survival data. *Biometrika* **69**, 553-566. ### Examples ``` ## Two-sample test survdiff(Surv(futime, fustat) ~ rx,data=ovarian) ## Stratified 7-sample test survdiff(Surv(time, status) ~ pat.karno + strata(inst), data=lung) ## Expected survival for heart transplant patients based on ## US mortality tables expect <- survexp(futime ~ 1, data=jasa, cohort=FALSE, rmap= list(age=(accept.dt - birth.dt), sex=1, year=accept.dt), ratetable=survexp.us) ## actual survival is much worse (no surprise) survdiff(Surv(jasa$futime, jasa$fustat) ~ offset(expect)) ```
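As a further illustration of the `rho` argument (a hedged sketch reusing the `ovarian` data from the examples above): `rho=0` gives the log-rank test, while `rho=1` weights early deaths more heavily.

```
# Two members of the G-rho family applied to the same comparison
survdiff(Surv(futime, fustat) ~ rx, data=ovarian, rho=0) # log-rank
survdiff(Surv(futime, fustat) ~ rx, data=ovarian, rho=1) # Peto & Peto
```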
r None `pbc` Mayo Clinic Primary Biliary Cholangitis Data
---------------------------------------------------

### Description

Primary biliary cholangitis is an autoimmune disease leading to destruction of the small bile ducts in the liver. Progression is slow but inexorable, eventually leading to cirrhosis and liver decompensation. The condition has been recognised since at least 1851 and was named "primary biliary cirrhosis" in 1949. Because cirrhosis is a feature only of advanced disease, a change of its name to "primary biliary cholangitis" was proposed by patient advocacy groups in 2014.

This data is from the Mayo Clinic trial in PBC conducted between 1974 and 1984. A total of 424 PBC patients, referred to Mayo Clinic during that ten-year interval, met eligibility criteria for the randomized placebo controlled trial of the drug D-penicillamine. The first 312 cases in the data set participated in the randomized trial and contain largely complete data. The additional 112 cases did not participate in the clinical trial, but consented to have basic measurements recorded and to be followed for survival. Six of those cases were lost to follow-up shortly after diagnosis, so the data here are on an additional 106 cases as well as the 312 randomized participants.

A nearly identical data set is found in appendix D of Fleming and Harrington; this version has fewer missing values.

### Usage

```
pbc
data(pbc, package="survival")
```

### Format

| | |
| --- | --- |
| age: | in years |
| albumin: | serum albumin (g/dl) |
| alk.phos: | alkaline phosphatase (U/liter) |
| ascites: | presence of ascites |
| ast: | aspartate aminotransferase, once called SGOT (U/ml) |
| bili: | serum bilirubin (mg/dl) |
| chol: | serum cholesterol (mg/dl) |
| copper: | urine copper (ug/day) |
| edema: | 0 no edema, 0.5 untreated or successfully treated |
| | 1 edema despite diuretic therapy |
| hepato: | presence of hepatomegaly or enlarged liver |
| id: | case number |
| platelet: | platelet count |
| protime: | standardised blood clotting time |
| sex: | m/f |
| spiders: | blood vessel malformations in the skin |
| stage: | histologic stage of disease (needs biopsy) |
| status: | status at endpoint, 0/1/2 for censored, transplant, dead |
| time: | number of days between registration and the earlier of death, |
| | transplantation, or study analysis in July, 1986 |
| trt: | 1/2/NA for D-penicillamine, placebo, not randomised |
| trig: | triglycerides (mg/dl) |

### Source

T Therneau and P Grambsch (2000), *Modeling Survival Data: Extending the Cox Model*, Springer-Verlag, New York. ISBN: 0-387-98784-3.

### See Also

`<pbcseq>`

r None `nsk` Natural splines with knot heights as the basis.
------------------------------------------------------

### Description

Create the design matrix for a natural spline, such that the coefficients of the resulting fit are the values of the function at the knots.

### Usage

```
nsk(x, df = NULL, knots = NULL, intercept = FALSE, b = 0.05,
 Boundary.knots = quantile(x, c(b, 1 - b), na.rm = TRUE))
```

### Arguments

| | |
| --- | --- |
| `x` | the predictor variable. Missing values are allowed. |
| `df` | degrees of freedom. One can supply df rather than knots; ns() then chooses df - 1 - intercept knots at suitably chosen quantiles of x (which will ignore missing values). The default, df = NULL, sets the number of inner knots as length(knots). |
| `knots` | breakpoints that define the spline.
The default is no knots; together with the natural boundary conditions this results in a basis for linear regression on x. Typical values are the mean or median for one knot, quantiles for more knots. See also Boundary.knots. |
| `intercept` | if TRUE, an intercept is included in the basis; default is FALSE |
| `b` | default placement of the boundary knots. A value of `b=0` will replicate the default behavior of `ns`. |
| `Boundary.knots` | boundary points at which to impose the natural boundary conditions and anchor the B-spline basis. Beyond these points the function is assumed to be linear. If both knots and Boundary.knots are supplied, the basis parameters do not depend on x. Data can extend beyond Boundary.knots |

### Details

The `nsk` function behaves identically to the `ns` function, with two exceptions. The primary one is that the returned basis is such that the coefficients correspond to the value of the fitted function at the knot points. If `intercept = FALSE`, there will be k-1 coefficients corresponding to the k knots, and they will be the difference in predicted value between knots 2-k and knot 1. The primary advantage to the basis is that the coefficients are directly interpretable. A second is that tests for the linear and non-linear components are simple contrasts.

The second difference with `ns` is one of opinion with respect to the default position for the boundary knots. The default here is closer to that found in the `rms::rcs` function.

### Value

A matrix of dimension length(x) \* df where either df was supplied or, if knots were supplied, df = length(knots) + 1 + intercept. Attributes are returned that correspond to the arguments to nsk, and explicitly give the knots, Boundary.knots etc for use by predict.nsk().

### Note

A thin flexible metal or wooden strip is called a spline, and is the traditional method for laying out a smooth curve, e.g., for a ship's hull or an airplane wing. Pins are put into a board and the strip is passed through them; each pin is a 'knot'. A mathematical spline is a piecewise function between each knot. A linear spline will be a set of connected line segments, a quadratic spline is a set of connected local quadratic functions, constrained to have a continuous first derivative, a cubic spline is cubic between each knot, constrained to have continuous first and second derivatives, and so on. Mathematical splines are not an exact representation of natural splines: being a physical object the wood or metal strip will have continuous derivatives of all orders. Cubic splines are commonly used because they are sufficiently smooth to look natural to the human eye.

If the mathematical spline is further constrained to be linear beyond the end knots, this is often called a 'natural spline', due to the fact that a wooden or metal spline will also be linear beyond the last knots. Another name for the same object is a 'restricted cubic spline', since it is achieved in code by adding a further constraint. Given a vector of data points and a set of knots, it is possible to create a basis matrix X with one column per knot, such that ordinary regression of y on X will fit the cubic spline function, hence these are also called 'regression splines'. One label is no better than another.

Given a basis matrix *X*, the matrix *Z = XT* for any k by k nonsingular matrix *T* is also a basis matrix, and will result in identical predicted values, but a new set of coefficients *gamma = T^{-1} beta* in place of *beta*.
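A small sketch of this change-of-basis point, assuming the `lung` data from this package (with `b=0` the `nsk` boundary knots match those of `ns`, so the two bases should span the same space):

```
library(survival); library(splines)
fit.ns <- lm(time ~ ns(age, df=3), data=lung)
fit.nsk <- lm(time ~ nsk(age, df=3, b=0), data=lung)
# Different coefficients, but the fitted values should agree
all.equal(fitted(fit.ns), fitted(fit.nsk))
```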
One can choose the basis function so that X is easy to construct, to make the regression numerically stable, to make tests easier, or based on other considerations. It seems as though every spline function returns a different basis set, which unfortunately makes fits difficult to compare; and this is yet one more, chosen to make the coefficients more interpretable.

### See Also

`[ns](../../splines/html/ns)`

### Examples

```
# make some dummy data
tdata <- data.frame(x = lung$age, y = 10*log(lung$age-35) + rnorm(228, 0, 2))
fit1 <- lm(y ~ -1 + nsk(x, df=4, intercept=TRUE), data=tdata)
fit2 <- lm(y ~ nsk(x, df=3), data=tdata)
# the knots (same for both fits)
knots <- unlist(attributes(fit1$model[[2]])[c('Boundary.knots', 'knots')])
knots
unname(coef(fit1)) # predictions at the knot points
unname(coef(fit1)[-1] - coef(fit1)[1]) # differences: yhat[2:4] - yhat[1]
unname(coef(fit2))
## Not run:
plot(y ~ x, data=tdata)
points(sort(knots), coef(fit1), col=2, pch=19)
coef(fit2)[1] + c(0, coef(fit2)[-1]) # predictions at the knots, from fit2
## End(Not run)
```

r None `residuals.survreg` Compute Residuals for ‘survreg’ Objects
------------------------------------------------------------

### Description

This is a method for the function `[residuals](../../stats/html/residuals)` for objects inheriting from class `survreg`.

### Usage

```
## S3 method for class 'survreg'
residuals(object, type=c("response", "deviance","dfbeta","dfbetas",
"working","ldcase","ldresp","ldshape", "matrix"), rsigma=TRUE,
collapse=FALSE, weighted=FALSE, ...)
```

### Arguments

| | |
| --- | --- |
| `object` | an object inheriting from class `survreg`. |
| `type` | type of residuals, with choices of `"response"`, `"deviance"`, `"dfbeta"`, `"dfbetas"`, `"working"`, `"ldcase"`, `"ldresp"`, `"ldshape"`, and `"matrix"`. |
| `rsigma` | include the scale parameters in the variance matrix, when doing computations. (I can think of no good reason not to). |
| `collapse` | optional vector of subject groups. If given, this must be of the same length as the residuals, and causes the result to be per group residuals. |
| `weighted` | give weighted residuals? Normally residuals are unweighted. |
| `...` | other unused arguments |

### Value

A vector or matrix of residuals is returned. Response residuals are on the scale of the original data, working residuals are on the scale of the linear predictor, and deviance residuals are on log-likelihood scale. The dfbeta residuals are a matrix, where the ith row gives the approximate change in the coefficients due to the addition of subject i. The dfbetas matrix contains the dfbeta residuals, with each column scaled by the standard deviation of that coefficient.

The matrix type produces a matrix based on derivatives of the log-likelihood function. Let *L* be the log-likelihood, *p* be the linear predictor *X %\*% coef*, and *s* be *\log(σ)*. Then the 6 columns of the matrix are *L*, *dL/dp*, *ddL/(dp dp)*, *dL/ds*, *ddL/(ds ds)* and *ddL/(dp ds)*. Diagnostics based on these quantities are discussed in the book and article by Escobar and Meeker. The main ones are the likelihood displacement residuals for perturbation of a case weight (`ldcase`), the response value (`ldresp`), and the `shape`.

For a transformed distribution such as the log-normal or Weibull, matrix residuals are based on the log-likelihood of the transformed data log(y).
For a monotone function f the density of f(X) is the density of X divided by the derivative of f (the Jacobian), so subtract log(derivative) from each uncensored observation's loglik value in order to match the `loglik` component of the result. The other columns of the matrix residual are unchanged by the transformation.

### References

Escobar, L. A. and Meeker, W. Q. (1992). Assessing influence in regression analysis with censored data. *Biometrics* **48**, 507-528.

Escobar, L. A. and Meeker, W. Q. (1998). Statistical Methods for Reliability Data. Wiley.

### See Also

`<predict.survreg>`

### Examples

```
fit <- survreg(Surv(futime, death) ~ age + sex, mgus2)
summary(fit) # age and sex are both important

rr <- residuals(fit, type='matrix')
sum(rr[,1]) - with(mgus2, sum(log(futime[death==1]))) # loglik

plot(mgus2$age, rr[,2], col= (1+mgus2$death)) # ldresp
```

r None `grid.convert` Convert Between Different grid Coordinate Systems
-----------------------------------------------------------------

### Description

These functions take a unit object and convert it to an equivalent unit object in a different coordinate system.

### Usage

```
convertX(x, unitTo, valueOnly = FALSE)
convertY(x, unitTo, valueOnly = FALSE)
convertWidth(x, unitTo, valueOnly = FALSE)
convertHeight(x, unitTo, valueOnly = FALSE)
convertUnit(x, unitTo,
 axisFrom = "x", typeFrom = "location",
 axisTo = axisFrom, typeTo = typeFrom,
 valueOnly = FALSE)
```

### Arguments

| | |
| --- | --- |
| `x` | A unit object. |
| `unitTo` | The coordinate system to convert the unit to. See the `<unit>` function for valid coordinate systems. |
| `axisFrom` | Either `"x"` or `"y"` to indicate whether the unit object represents a value in the x- or y-direction. |
| `typeFrom` | Either `"location"` or `"dimension"` to indicate whether the unit object represents a location or a length. |
| `axisTo` | Same as `axisFrom`, but applies to the unit object that is to be created. |
| `typeTo` | Same as `typeFrom`, but applies to the unit object that is to be created. |
| `valueOnly` | A logical. If `TRUE`, the function does not return a unit object, but rather only the converted numeric values. |

### Details

The `convertUnit` function allows for general-purpose conversions. The other four functions are just more convenient front-ends to it for the most common conversions.

The conversions occur within the current viewport.

It is not currently possible to convert to all valid coordinate systems (e.g., "strwidth" or "grobwidth"). I'm not sure if all of these are impossible, they just seem implausible at this stage.

In normal usage of grid, these functions should not be necessary. If you want to express a location or dimension in inches rather than user coordinates then you should simply do something like `unit(1, "inches")` rather than something like `unit(0.134, "native")`. In some cases, however, it is necessary for the user to perform calculations on a unit value and this function becomes necessary. In such cases, please take note of the warning below.

### Value

A unit object in the specified coordinate system (unless `valueOnly` is `TRUE` in which case the returned value is a numeric).

### Warning

The conversion is only valid for the current device size. If the device is resized then at least some conversions will become invalid. For example, suppose that I create a unit object as follows: `oneinch <- convertUnit(unit(1, "inches"), "native")`.
Now if I resize the device, the unit object in oneinch no longer corresponds to a physical length of 1 inch. ### Author(s) Paul Murrell ### See Also `<unit>` ### Examples ``` ## A tautology convertX(unit(1, "inches"), "inches") ## The physical units convertX(unit(2.54, "cm"), "inches") convertX(unit(25.4, "mm"), "inches") convertX(unit(72.27, "points"), "inches") convertX(unit(1/12*72.27, "picas"), "inches") convertX(unit(72, "bigpts"), "inches") convertX(unit(1157/1238*72.27, "dida"), "inches") convertX(unit(1/12*1157/1238*72.27, "cicero"), "inches") convertX(unit(65536*72.27, "scaledpts"), "inches") convertX(unit(1/2.54, "inches"), "cm") convertX(unit(1/25.4, "inches"), "mm") convertX(unit(1/72.27, "inches"), "points") convertX(unit(1/(1/12*72.27), "inches"), "picas") convertX(unit(1/72, "inches"), "bigpts") convertX(unit(1/(1157/1238*72.27), "inches"), "dida") convertX(unit(1/(1/12*1157/1238*72.27), "inches"), "cicero") convertX(unit(1/(65536*72.27), "inches"), "scaledpts") pushViewport(viewport(width=unit(1, "inches"), height=unit(2, "inches"), xscale=c(0, 1), yscale=c(1, 3))) ## Location versus dimension convertY(unit(2, "native"), "inches") convertHeight(unit(2, "native"), "inches") ## From "x" to "y" (the conversion is via "inches") convertUnit(unit(1, "native"), "native", axisFrom="x", axisTo="y") ## Convert several values at once convertX(unit(c(0.5, 2.54), c("npc", "cm")), c("inches", "native")) popViewport() ## Convert a complex unit convertX(unit(1, "strwidth", "Hello"), "native") ``` r None `xsplinePoints` Return the points that would be used to draw an Xspline (or a Bezier curve). --------------------------------------------------------------------------------------------- ### Description Rather than drawing an Xspline (or Bezier curve), this function returns the points that would be used to draw the series of line segments for the Xspline. This may be useful to post-process the Xspline curve, for example, to clip the curve. ### Usage ``` xsplinePoints(x) bezierPoints(x) ``` ### Arguments | | | | --- | --- | | `x` | An Xspline grob, as produced by the `xsplineGrob()` function (or a beziergrob, as produced by the `bezierGrob()` function). | ### Details The points returned by this function will only be relevant for the drawing context in force when this function was called. ### Value Depends on how many Xsplines would be drawn. If only one, then a list with two components, named x and y, both of which are unit objects (in inches). If several Xsplines would be drawn then the result of this function is a list of lists. 
### Author(s) Paul Murrell ### See Also `[xsplineGrob](grid.xspline)` and `[bezierGrob](grid.bezier)` ### Examples ``` grid.newpage() xsg <- xsplineGrob(c(.1, .1, .9, .9), c(.1, .9, .9, .1), shape=1) grid.draw(xsg) trace <- xsplinePoints(xsg) grid.circle(trace$x, trace$y, default.units="inches", r=unit(.5, "mm")) grid.newpage() vp <- viewport(width=.5) xg <- xsplineGrob(x=c(0, .2, .4, .2, .5, .7, .9, .7), y=c(.5, 1, .5, 0, .5, 1, .5, 0), id=rep(1:2, each=4), shape=1, vp=vp) grid.draw(xg) trace <- xsplinePoints(xg) pushViewport(vp) invisible(lapply(trace, function(t) grid.lines(t$x, t$y, gp=gpar(col="red")))) popViewport() grid.newpage() bg <- bezierGrob(c(.2, .2, .8, .8), c(.2, .8, .8, .2)) grid.draw(bg) trace <- bezierPoints(bg) grid.circle(trace$x, trace$y, default.units="inches", r=unit(.5, "mm")) ``` r None `grid.DLapply` Modify the Grid Display List -------------------------------------------- ### Description Call a function on each element of the current display list. ### Usage ``` grid.DLapply(FUN, ...) ``` ### Arguments | | | | --- | --- | | `FUN` | A function; the first argument to this function is passed each element of the display list. | | `...` | Further arguments to pass to `FUN` . | ### Details This function is insanely dangerous (for the grid display list). Two token efforts are made to try to avoid ending up with complete garbage on the display list: 1. The display list is only replaced once all new elements have been generated (so an error during generation does not result in a half-finished display list). 2. All new elements must be either `NULL` or inherit from the class of the element that they are replacing. ### Value The side effect of these functions is usually to modify the grid display list. ### See Also [Grid](grid). ### Examples ``` grid.newpage() grid.rect(width=.4, height=.4, x=.25, y=.75, gp=gpar(fill="black"), name="r1") grid.rect(width=.4, height=.4, x=.5, y=.5, gp=gpar(fill="grey"), name="r2") grid.rect(width=.4, height=.4, x=.75, y=.25, gp=gpar(fill="white"), name="r3") grid.DLapply(function(x) { if (is.grob(x)) x$gp <- gpar(); x }) grid.refresh() ``` r None `grid.display.list` Control the Grid Display List -------------------------------------------------- ### Description Turn the Grid display list on or off. ### Usage ``` grid.display.list(on=TRUE) engine.display.list(on=TRUE) ``` ### Arguments | | | | --- | --- | | `on` | A logical value to indicate whether the display list should be on or off. | ### Details All drawing and viewport-setting operations are (by default) recorded in the Grid display list. This allows redrawing to occur following an editing operation. This display list could get very large so it may be useful to turn it off in some cases; this will of course disable redrawing. All graphics output is also recorded on the main display list of the R graphics engine (by default). This supports redrawing following a device resize and allows copying between devices. Turning off this display list means that grid will redraw from its own display list for device resizes and copies. This will be slower than using the graphics engine display list. ### Value None. ### WARNING Turning the display list on causes the display list to be erased! Turning off both the grid display list and the graphics engine display list will result in no redrawing whatsoever. ### Author(s) Paul Murrell
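### Examples

A minimal sketch of a draw-once scene: with both display lists off, drawing still appears, but the device cannot replay it after a resize or copy (see the Warning above).

```
library(grid)
## Disable both display lists: less memory use, but no redrawing
grid.display.list(on=FALSE)
engine.display.list(on=FALSE)
grid.newpage()
grid.circle(r=0.3)
## Restore the defaults; note that turning the grid display
## list back on erases whatever it currently holds
grid.display.list(on=TRUE)
engine.display.list(on=TRUE)
```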
r None `grid.edit` Edit the Description of a Grid Graphical Object ------------------------------------------------------------ ### Description Changes the value of one of the slots of a grob and redraws the grob. ### Usage ``` grid.edit(gPath, ..., strict = FALSE, grep = FALSE, global = FALSE, allDevices = FALSE, redraw = TRUE) grid.gedit(..., grep = TRUE, global = TRUE) editGrob(grob, gPath = NULL, ..., strict = FALSE, grep = FALSE, global = FALSE, warn = TRUE) ``` ### Arguments | | | | --- | --- | | `grob` | A grob object. | | `...` | Zero or more named arguments specifying new slot values. | | `gPath` | A gPath object. For `grid.edit` this specifies a grob on the display list. For `editGrob` this specifies a descendant of the specified grob. | | `strict` | A boolean indicating whether the gPath must be matched exactly. | | `grep` | A boolean indicating whether the `gPath` should be treated as a regular expression. Values are recycled across elements of the `gPath` (e.g., `c(TRUE, FALSE)` means that every odd element of the `gPath` will be treated as a regular expression). | | `global` | A boolean indicating whether the function should affect just the first match of the `gPath`, or whether all matches should be affected. | | `warn` | A logical to indicate whether failing to find the specified gPath should trigger an error. | | `allDevices` | A boolean indicating whether all open devices should be searched for matches, or just the current device. NOT YET IMPLEMENTED. | | `redraw` | A logical value to indicate whether to redraw the grob. | ### Details `editGrob` copies the specified grob and returns a modified grob. `grid.edit` destructively modifies a grob on the display list. If `redraw` is `TRUE` it then redraws everything to reflect the change. Both functions call `editDetails` to allow a grob to perform custom actions and `validDetails` to check that the modified grob is still coherent. `grid.gedit` (`g` for global) is just a convenience wrapper for `grid.edit` with different defaults. ### Value `editGrob` returns a grob object; `grid.edit` returns `NULL`. ### Author(s) Paul Murrell ### See Also `[grob](grid.grob)`, `[getGrob](grid.get)`, `[addGrob](grid.add)`, `[removeGrob](grid.remove)`. ### Examples ``` grid.newpage() grid.xaxis(name = "xa", vp = viewport(width=.5, height=.5)) grid.edit("xa", gp = gpar(col="red")) # won't work because no ticks (at is NULL) try(grid.edit(gPath("xa", "ticks"), gp = gpar(col="green"))) grid.edit("xa", at = 1:4/5) # Now it should work try(grid.edit(gPath("xa", "ticks"), gp = gpar(col="green"))) ``` r None `gPath` Concatenate Grob Names ------------------------------- ### Description This function can be used to generate a grob path for use in `grid.edit` and friends. A grob path is a list of nested grob names. ### Usage ``` gPath(...) ``` ### Arguments | | | | --- | --- | | `...` | Character values which are grob names. | ### Details Grob names must only be unique amongst grobs which share the same parent in a gTree. This function can be used to generate a specification for a grob that includes the grob's parent's name (and the name of its parent and so on). For interactive use, it is possible to directly specify a path, but it is strongly recommended that this function is used otherwise in case the path separator is changed in future versions of grid. ### Value A `gPath` object. 
### See Also `[grob](grid.grob)`, `[editGrob](grid.edit)`, `[addGrob](grid.add)`, `[removeGrob](grid.remove)`, `[getGrob](grid.get)`, `[setGrob](grid.set)` ### Examples ``` gPath("g1", "g2") ``` r None `showGrob` Label grid grobs. ----------------------------- ### Description Produces a graphical display of (by default) the current grid scene, with labels showing the names of each grob in the scene. It is also possible to label only specific grobs in the scene. ### Usage ``` showGrob(x = NULL, gPath = NULL, strict = FALSE, grep = FALSE, recurse = TRUE, depth = NULL, labelfun = grobLabel, ...) ``` ### Arguments | | | | --- | --- | | `x` | If `NULL`, the current grid scene is labelled. Otherwise, a grob (or gTree) to draw and then label. | | `gPath` | A path identifying a subset of the current scene or grob to be labelled. | | `strict` | Logical indicating whether the gPath is strict. | | `grep` | Logical indicating whether the gPath is a regular expression. | | `recurse` | Should the children of gTrees also be labelled? | | `depth` | Only display grobs at the specified depth (may be a vector of depths). | | `labelfun` | Function used to generate a label from each grob. | | `...` | Arguments passed to `labelfun` to control fine details of the generated label. | ### Details None of the labelling is recorded on the grid display list so the original scene can be reproduced by calling `grid.refresh`. ### See Also `[grob](grid.grob)` and `[gTree](grid.grob)` ### Examples ``` grid.newpage() gt <- gTree(childrenvp=vpStack( viewport(x=0, width=.5, just="left", name="vp"), viewport(y=.5, height=.5, just="bottom", name="vp2")), children=gList(rectGrob(vp="vp::vp2", name="child")), name="parent") grid.draw(gt) showGrob() showGrob(gPath="child") showGrob(recurse=FALSE) showGrob(depth=1) showGrob(depth=2) showGrob(depth=1:2) showGrob(gt) showGrob(gt, gPath="child") showGrob(just="left", gp=gpar(col="red", cex=.5), rot=45) showGrob(labelfun=function(grob, ...) { x <- grobX(grob, "west") y <- grobY(grob, "north") gTree(children=gList(rectGrob(x=x, y=y, width=stringWidth(grob$name) + unit(2, "mm"), height=stringHeight(grob$name) + unit(2, "mm"), gp=gpar(col=NA, fill=rgb(1, 0, 0, .5)), just=c("left", "top")), textGrob(grob$name, x=x + unit(1, "mm"), y=y - unit(1, "mm"), just=c("left", "top")))) }) ## Not run: # Examples from higher-level packages library(lattice) # Ctrl-c after first example example(histogram) showGrob() showGrob(gPath="plot_01.ylab") library(ggplot2) # Ctrl-c after first example example(qplot) showGrob() showGrob(recurse=FALSE) showGrob(gPath="panel-3-3") showGrob(gPath="axis.title", grep=TRUE) showGrob(depth=2) ## End(Not run) ``` r None `grid.record` Encapsulate calculations and drawing --------------------------------------------------- ### Description Evaluates an expression that includes both calculations and drawing that depends on the calculations so that both the calculations and the drawing will be rerun when the scene is redrawn (e.g., device resize or editing). Intended *only* for expert use. ### Usage ``` recordGrob(expr, list, name=NULL, gp=NULL, vp=NULL) grid.record(expr, list, name=NULL, gp=NULL, vp=NULL) ``` ### Arguments | | | | --- | --- | | `expr` | object of mode `[expression](../../base/html/expression)` or `call` or an unevaluated expression. | | `list` | a list defining the environment in which `expr` is to be evaluated. | | `name` | A character identifier. | | `gp` | An object of class `"gpar"`, typically the output from a call to the function `<gpar>`. 
This is basically a list of graphical parameter settings. | | `vp` | A Grid viewport object (or NULL). |

### Details

A grob is created of special class `"recordedGrob"` (and drawn, in the case of `grid.record`). The `drawDetails` method for this class evaluates the expression with the list as the evaluation environment (and the grid Namespace as the parent of that environment).

### Note

This function *must* be used instead of the function `recordGraphics`; all of the dire warnings about using `recordGraphics` responsibly also apply here.

### Author(s)

Paul Murrell

### See Also

`[recordGraphics](../../grdevices/html/recordgraphics)`

### Examples

```
grid.record({
 w <- convertWidth(unit(1, "inches"), "npc")
 grid.rect(width=w)
 }, list())
```

r None `deviceLoc` Convert Viewport Location to Device Location
---------------------------------------------------------

### Description

These functions take a pair of unit objects and convert them to a pair of device locations (or dimensions) in inches (or native device coordinates).

### Usage

```
deviceLoc(x, y, valueOnly = FALSE, device = FALSE)
deviceDim(w, h, valueOnly = FALSE, device = FALSE)
```

### Arguments

| | |
| --- | --- |
| `x, y, w, h` | A unit object. |
| `valueOnly` | A logical. If `TRUE`, the function does not return a unit object, but rather only the converted numeric values. |
| `device` | A logical indicating whether the returned values should be in inches or native device units. |

### Details

These functions differ from functions like `convertX()` because they convert from the coordinate systems within a viewport to inches on the device (i.e., from one viewport to another) and because they only deal with pairs of values (locations or dimensions). The functions like `convertX()` convert between different units within the same viewport and convert along a single dimension.

### Value

A list with two components, both of which are unit objects in inches (unless `valueOnly` is `TRUE` in which case both components are numeric).

### Warning

The conversion is only valid for the current device size. If the device is resized then at least some conversions will become invalid. Furthermore, the returned value only makes sense with respect to the entire device (i.e., within the context of the root viewport).

### Author(s)

Paul Murrell

### See Also

`<unit>`

### Examples

```
## A tautology
grid.newpage()
pushViewport(viewport())
deviceLoc(unit(1, "inches"), unit(1, "inches"))

## Something less obvious
grid.newpage()
pushViewport(viewport(width=.5, height=.5))
grid.rect()
x <- unit(1, "in")
y <- unit(1, "in")
grid.circle(x, y, r=unit(2, "mm"))
loc <- deviceLoc(x, y)
loc
upViewport()
grid.circle(loc$x, loc$y, r=unit(1, "mm"), gp=gpar(fill="black"))

## Something even less obvious
grid.newpage()
pushViewport(viewport(width=.5, height=.5, angle=30))
grid.rect()
x <- unit(.2, "npc")
y <- unit(2, "in")
grid.circle(x, y, r=unit(2, "mm"))
loc <- deviceLoc(x, y)
loc
upViewport()
grid.circle(loc$x, loc$y, r=unit(1, "mm"), gp=gpar(fill="black"))
```

r None `grid.delay` Encapsulate calculations and generating a grob
------------------------------------------------------------

### Description

Evaluates an expression that includes both calculations and generating a grob that depends on the calculations so that both the calculations and the grob generation will be rerun when the scene is redrawn (e.g., device resize or editing). Intended *only* for expert use.
### Usage

```
delayGrob(expr, list, name=NULL, gp=NULL, vp=NULL)
grid.delay(expr, list, name=NULL, gp=NULL, vp=NULL)
```

### Arguments

| | |
| --- | --- |
| `expr` | object of mode `[expression](../../base/html/expression)` or `call` or an unevaluated expression. |
| `list` | a list defining the environment in which `expr` is to be evaluated. |
| `name` | A character identifier. |
| `gp` | An object of class `"gpar"`, typically the output from a call to the function `<gpar>`. This is basically a list of graphical parameter settings. |
| `vp` | A Grid viewport object (or NULL). |

### Details

A grob is created of special class `"delayedgrob"` (and drawn, in the case of `grid.delay`). The `makeContent` method for this class evaluates the expression with the list as the evaluation environment (and the grid Namespace as the parent of that environment). The `expr` argument should return a grob as its result.

These functions are analogues of the `grid.record()` and `recordGrob()` functions; the difference is that these functions are based on the `makeContent()` hook, while those functions are based on the `drawDetails()` hook.

### Note

This function *must* be used instead of the function `recordGraphics`; all of the dire warnings about using `recordGraphics` responsibly also apply here.

### Author(s)

Paul Murrell

### See Also

`[recordGraphics](../../grdevices/html/recordgraphics)`

### Examples

```
grid.delay({
 w <- convertWidth(unit(1, "inches"), "npc")
 rectGrob(width=w)
 }, list())
```

r None `grid.force` Force a grob into its components
----------------------------------------------

### Description

Some grobs only generate their content to draw at drawing time; this function replaces such grobs with their at-drawing-time content.

### Usage

```
grid.force(x, ...)
## Default S3 method:
grid.force(x, redraw = FALSE, ...)
## S3 method for class 'gPath'
grid.force(x, strict = FALSE, grep = FALSE, global = FALSE,
 redraw = FALSE, ...)
## S3 method for class 'grob'
grid.force(x, draw = FALSE, ...)
forceGrob(x)

grid.revert(x, ...)
## S3 method for class 'gPath'
grid.revert(x, strict = FALSE, grep = FALSE, global = FALSE,
 redraw = FALSE, ...)
## S3 method for class 'grob'
grid.revert(x, draw = FALSE, ...)
```

### Arguments

| | |
| --- | --- |
| `x` | For the default method, `x` should not be specified. Otherwise, `x` should be a grob or a gPath. If `x` is character, it is assumed to be a gPath. |
| `strict` | A boolean indicating whether the `path` must be matched exactly. |
| `grep` | Whether the `path` should be treated as a regular expression. |
| `global` | A boolean indicating whether the function should affect just the first match of the `path`, or whether all matches should be affected. |
| `draw` | logical value indicating whether a grob should be drawn after it is forced. |
| `redraw` | logical value indicating whether to redraw the grid scene after the forcing operation. |
| `...` | Further arguments for use by methods. |

### Details

Some grobs wait until drawing time to generate what content will actually be drawn (an axis, as produced by `grid.xaxis()`, with an `at` of `NULL` is a good example because it has to see what viewport it is going to be drawn in before it can decide what tick marks to draw). The content of such grobs (e.g., the tick marks) is not usually visible to `grid.ls()` or accessible to `grid.edit()`.

The `grid.force()` function *replaces* a grob with its at-drawing-time contents.
For example, an axis will be replaced by a vanilla gTree with lines and text representing the axis tick marks that were actually drawn. This makes the tick marks visible to `grid.ls()` and accessible to `grid.edit()`. The `forceGrob()` function is the internal work horse for `grid.force()`, so will not normally be called directly by the user. It is exported so that methods can be written for custom grob classes if necessary. The `grid.revert()` function reverses the effect of `grid.force()`, replacing forced content with the original grob. ### Warning Forcing an explicit grob produces a result as if the grob were drawn in the *current* drawing context. It may not make sense to draw the result in a different drawing context. ### Note These functions only have an effect for grobs that generate their content at drawing time using `makeContext()` and `makeContent()` methods (*not* for grobs that generate their content at drawing time using `preDrawDetails()` and `drawDetails()` methods). ### Author(s) Paul Murrell ### Examples ``` grid.newpage() pushViewport(viewport(width=.5, height=.5)) # Draw xaxis grid.xaxis(name="xax") grid.ls() # Force xaxis grid.force() grid.ls() # Revert xaxis grid.revert() grid.ls() # Draw and force yaxis grid.force(yaxisGrob(), draw=TRUE) grid.ls() # Revert yaxis grid.revert() grid.ls() # Force JUST xaxis grid.force("xax") grid.ls() # Force ALL grid.force() grid.ls() # Revert JUST xaxis grid.revert("xax") grid.ls() ``` r None `patterns` Define Gradient and Pattern Fills --------------------------------------------- ### Description Functions to define gradient fills and pattern fills. ### Usage ``` linearGradient(colours = c("black", "white"), stops = seq(0, 1, length.out = length(colours)), x1 = unit(0, "npc"), y1 = unit(0, "npc"), x2 = unit(1, "npc"), y2 = unit(1, "npc"), default.units = "npc", extend = c("pad", "repeat", "reflect", "none")) radialGradient(colours = c("black", "white"), stops = seq(0, 1, length.out = length(colours)), cx1 = unit(.5, "npc"), cy1 = unit(.5, "npc"), r1 = unit(0, "npc"), cx2 = unit(.5, "npc"), cy2 = unit(.5, "npc"), r2 = unit(.5, "npc"), default.units = "npc", extend = c("pad", "repeat", "reflect", "none")) pattern(grob, x = 0.5, y = 0.5, width = 1, height = 1, default.units = "npc", just="centre", hjust=NULL, vjust=NULL, extend = c("pad", "repeat", "reflect", "none"), gp = gpar(fill="transparent")) ``` ### Arguments | | | | --- | --- | | `colours` | Two or more colours for the gradient to transition between. | | `stops` | Locations of the gradient colours between the start and end points of the gradient (as a proportion of the distance from the start point to the end point). | | `x1, y1, x2, y2` | The start and end points for a linear gradient. | | `default.units` | The coordinate system to use if any location or dimension is specified as just a numeric value. | | `extend` | What happens outside the start and end of the gradient (see Details). | | `cx1, cy1, r1, cx2, cy2, r2` | The centre and radius of the start and end circles for a radial gradient. | | `grob` | A grob (or a gTree) that will be drawn as the tile in a pattern fill. | | `x, y, width, height` | The size of the tile for a pattern fill. | | `just, hjust, vjust` | The justification of the tile relative to its location. | | `gp` | Default graphical parameter settings for the tile. | ### Details Use these functions to define a gradient fill or pattern fill and then use the resulting object as the value for `fill` in a call to the `gpar()` function. 
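For example, a minimal sketch (gradient and pattern fills need a supporting graphics device, such as `pdf()` in R >= 4.1; unsupported devices fall back to a transparent fill):

```
library(grid)
grid.newpage()
## A linear gradient as a rectangle fill
grid.rect(x=0.25, width=0.4, height=0.6,
          gp=gpar(fill=linearGradient(c("white", "steelblue"))))
## A tiling pattern: a small grey circle repeated across the shape
tile <- circleGrob(r=unit(1.5, "mm"), gp=gpar(fill="grey"))
pat <- pattern(tile, width=unit(5, "mm"), height=unit(5, "mm"),
               extend="repeat")
grid.rect(x=0.75, width=0.4, height=0.6, gp=gpar(fill=pat))
```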
The possible values of `extend`, and their meanings, are:

* `pad`: propagate the value of the gradient at its boundary.
* `none`: produce no fill beyond the limits of the gradient.
* `repeat`: repeat the fill.
* `reflect`: repeat the fill in reverse.

To create a tiling pattern, provide a simple grob (like a circle), specify the location and size of the pattern to include the simple grob, and specify `extend="repeat"`.

### Value

A linear gradient or radial gradient or pattern object.

### Warning

Gradient fills and pattern fills are not supported on all graphics devices. Where they are not supported, closed shapes will be rendered with a transparent fill. Where they are supported, not all values of `extend` are supported.

### Author(s)

Paul Murrell

### See Also

`<gpar>`

r None `grid.lines` Draw Lines in a Grid Viewport
-------------------------------------------

### Description

These functions create and draw a series of lines.

### Usage

```
grid.lines(x = unit(c(0, 1), "npc"),
 y = unit(c(0, 1), "npc"),
 default.units = "npc",
 arrow = NULL, name = NULL,
 gp=gpar(), draw = TRUE, vp = NULL)

linesGrob(x = unit(c(0, 1), "npc"),
 y = unit(c(0, 1), "npc"),
 default.units = "npc",
 arrow = NULL, name = NULL,
 gp=gpar(), vp = NULL)

grid.polyline(...)

polylineGrob(x = unit(c(0, 1), "npc"),
 y = unit(c(0, 1), "npc"),
 id=NULL, id.lengths=NULL,
 default.units = "npc",
 arrow = NULL, name = NULL,
 gp=gpar(), vp = NULL)
```

### Arguments

| | |
| --- | --- |
| `x` | A numeric vector or unit object specifying x-values. |
| `y` | A numeric vector or unit object specifying y-values. |
| `default.units` | A string indicating the default units to use if `x` or `y` are only given as numeric vectors. |
| `arrow` | A list describing arrow heads to place at either end of the line, as produced by the `arrow` function. |
| `name` | A character identifier. |
| `gp` | An object of class `"gpar"`, typically the output from a call to the function `<gpar>`. This is basically a list of graphical parameter settings. |
| `draw` | A logical value indicating whether graphics output should be produced. |
| `vp` | A Grid viewport object (or NULL). |
| `id` | A numeric vector used to separate locations in `x` and `y` into multiple lines. All locations with the same `id` belong to the same line. |
| `id.lengths` | A numeric vector used to separate locations in `x` and `y` into multiple lines. Specifies consecutive blocks of locations which make up separate lines. |
| `...` | Arguments passed to `polylineGrob`. |

### Details

The first two functions create a lines grob (a graphical object describing lines), and `grid.lines` draws the lines (if `draw` is `TRUE`). The second two functions create or draw a polyline grob, which is just like a lines grob, except that there can be multiple distinct lines drawn.

### Value

A lines grob or a polyline grob. `grid.lines` returns a lines grob invisibly.

### Author(s)

Paul Murrell

### See Also

[Grid](grid), `<viewport>`, `<arrow>`

### Examples

```
grid.lines()
# Using id (NOTE: locations are not in consecutive blocks)
grid.newpage()
grid.polyline(x=c((0:4)/10, rep(.5, 5), (10:6)/10, rep(.5, 5)),
 y=c(rep(.5, 5), (10:6/10), rep(.5, 5), (0:4)/10),
 id=rep(1:5, 4),
 gp=gpar(col=1:5, lwd=3))
# Using id.lengths
grid.newpage()
grid.polyline(x=outer(c(0, .5, 1, .5), 5:1/5),
 y=outer(c(.5, 1, .5, 0), 5:1/5),
 id.lengths=rep(4, 5),
 gp=gpar(col=1:5, lwd=3))
```
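One more hedged sketch: the `arrow` argument, built with the `arrow()` helper, places arrow heads on the ends of a line.

```
grid.newpage()
grid.lines(x=c(.2, .8), y=c(.2, .8),
           arrow=arrow(angle=20, ends="both", type="closed"))
```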